Author: Sivük, Hakan
Date accessioned: 2024-09-18
Date available: 2024-09-18
Date copyrighted: 2024-09
Date issued: 2024-09
Date submitted: 2024-09-17
URI: https://hdl.handle.net/11693/115820
Note: Cataloged from PDF version of article.
Degree: Thesis (Master's): İhsan Doğramacı Bilkent University, Department of Computer Engineering, 2024.
Note: Includes bibliographical references (leaves 38-45).

Abstract: Semantic image editing involves filling in pixels according to a given semantic map, a complex task that demands contextual harmony and precise adherence to the semantic map. Most previous approaches attempt to encode all information from the erased image; however, when adding an object such as a car, its style cannot be inferred from the surrounding context alone. Models capable of producing diverse results often struggle with smooth integration between the generated and existing parts of the image. Moreover, existing methods lack a mechanism to encode the styles of fully visible and partially visible objects differently, limiting their effectiveness. In this work, we introduce a framework incorporating a novel mechanism to distinguish between fully visible and partially visible objects, leading to more consistent style encoding and improved final outputs. Through extensive comparisons with existing conditional image generation and semantic editing methods, our experiments demonstrate that our approach significantly outperforms the state of the art. In addition to improved quantitative results, our method provides greater diversity in outcomes. For code and a demo, please visit our project page at https://github.com/hakansivuk/DivSem.

Physical description: x, 45 leaves : color illustrations, charts ; 30 cm.
Language: English
Rights: info:eu-repo/semantics/openAccess
Subjects: Semantic image editing; Conditional image inpainting; Conditional image outpainting; Generative adversarial networks
Title: Diverse inpainting and editing with semantic conditioning
Alternative title (Turkish): Semantik koşullama ile çeşitli tamamlama ve düzenleme
Type: Thesis
Identifier: B162651