Browsing by Subject "Image inpainting"
Now showing 1 - 3 of 3
Item Open Access
Face inpainting with pre-trained image transformers (IEEE, 2022-08-29)
Gönç, Kaan; Sağlam, Baturay; Kozat, Süleyman S.; Dibeklioğlu, Hamdi

Image inpainting is an underdetermined inverse problem that admits many plausible ways to fill in missing or damaged regions realistically. Convolutional neural networks (CNNs) are commonly used to create aesthetically pleasing content, yet their restricted receptive fields limit how well they capture global structure. Transformers, by contrast, model long-range relationships through image-level attention and can generate diverse content by autoregressively modeling pixel-sequence distributions. However, current transformer-based inpainting approaches are limited to task-specific datasets and require large-scale data. To remedy this issue, we introduce an image inpainting approach that leverages pre-trained vision transformers. Experiments show that our approach can outperform CNN-based methods and performs close to task-specific transformer methods.

Item Open Access
Image inpainting with diffusion models and generative adversarial networks (2024-05)
Yıldırım, Ahmet Burak

We present two novel approaches to image inpainting, the task of erasing unwanted pixels from images and filling them in a semantically consistent and realistic way. The first approach uses natural language input to determine which object to remove from an image. We construct a dataset named GQA-Inpaint for this task and train a diffusion-based inpainting model on it, which can remove objects from images based on text prompts. The second approach tackles the challenging task of inverting erased images into StyleGAN’s latent space for realistic inpainting and editing. For this task, we propose learning an encoder and a mixing network that combine encoded features of erased images with StyleGAN’s mapped features from random samples. To achieve diverse inpainting results for the same erased image, we combine the encoded features and randomly sampled style vectors via the mixing network. We compare our methods using evaluation metrics that measure the quality of the models, and show significant quantitative and qualitative improvements.
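The StyleGAN-based approach in the abstract above hinges on a mixing network that blends an encoder's features for the erased image with mapped style vectors from random samples. Below is a minimal PyTorch sketch of that idea; the gating layout and dimensions (18 style vectors of width 512, as in StyleGAN2) are illustrative assumptions, not the author's implementation.

```python
# Minimal sketch of an encoder-output / random-style mixing network.
# All module names and shapes are assumptions for illustration only.
import torch
import torch.nn as nn

class MixingNetwork(nn.Module):
    """Combines encoded features of an erased image with StyleGAN's
    mapped style vectors from a random sample (hypothetical layout)."""
    def __init__(self, style_dim: int = 512, n_styles: int = 18):
        super().__init__()
        # Per-entry gate deciding how much of each latent comes from
        # the image encoding versus the random sample.
        self.gate = nn.Sequential(
            nn.Linear(2 * style_dim, style_dim),
            nn.ReLU(),
            nn.Linear(style_dim, style_dim),
            nn.Sigmoid(),
        )

    def forward(self, w_encoded: torch.Tensor, w_random: torch.Tensor):
        # w_encoded, w_random: (batch, n_styles, style_dim)
        alpha = self.gate(torch.cat([w_encoded, w_random], dim=-1))
        return alpha * w_encoded + (1.0 - alpha) * w_random

# Usage: different random draws yield diverse fills for the same image.
mix = MixingNetwork()
w_enc = torch.randn(4, 18, 512)   # encoder output for the erased image
w_rnd = torch.randn(4, 18, 512)   # mapped features of random samples
w_mixed = mix(w_enc, w_rnd)       # would be fed to a StyleGAN generator
```

The gate is one simple way to realize "combining encoded features with randomly sampled style vectors"; varying w_rnd while keeping w_enc fixed is what produces diverse inpainting results for the same erased image.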
Item Open Access
Partial convolution for padding, inpainting, and image synthesis (IEEE, 2022-09-26)
Liu, Guilin; Dündar, Ayşegül; Shih, Kevin J.; Wang, Ting-Chun; Reda, Fitsum A.; Sapra, Karan; Yu, Zhiding; Yang, Xiaodong; Tao, Andrew; Catanzaro, Bryan

Partial convolution weights the convolution with a binary mask and renormalizes over valid pixels. It was originally proposed for the image inpainting task because processing a corrupted image with standard convolutions often leads to artifacts. Binary masks defining the valid and corrupted pixels are therefore constructed, so that partial convolution results are calculated from valid pixels only. It has also been used for the conditional image synthesis task, so that when a scene is generated, the convolution results for an instance depend only on the feature values belonging to that instance. One of the unexplored applications of partial convolution is padding, a critical component of modern convolutional networks. Common padding schemes make strong assumptions about how the padded data should be extrapolated. We show that these padding schemes impair model accuracy, whereas partial convolution-based padding provides consistent improvements across a range of tasks. In this paper, we review partial convolution applications under one framework. We conduct a comprehensive study of partial convolution-based padding on a variety of computer vision tasks, including image classification, 3D-convolution-based action recognition, and semantic segmentation. Our results suggest that partial convolution-based padding shows promising improvements over strong baselines.
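As a concrete illustration of the mechanism this abstract describes, here is a minimal PyTorch sketch of a partial convolution layer: the input is masked before the convolution, the result is renormalized by the number of valid pixels under each kernel window, and the mask is updated for the next layer. This is an illustrative reimplementation under those assumptions, not the authors' released code.

```python
# Minimal sketch of partial convolution with mask renormalization.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              stride=stride, padding=padding, bias=True)
        # Fixed all-ones kernel used only to count valid pixels per window.
        self.register_buffer(
            "mask_kernel", torch.ones(1, 1, kernel_size, kernel_size))
        self.window_size = kernel_size * kernel_size

    def forward(self, x, mask):
        # x: (B, C, H, W) features; mask: (B, 1, H, W), 1 = valid, 0 = hole.
        with torch.no_grad():
            valid_count = F.conv2d(mask, self.mask_kernel,
                                   stride=self.conv.stride,
                                   padding=self.conv.padding)
        out = self.conv(x * mask)                 # convolve valid pixels only
        bias = self.conv.bias.view(1, -1, 1, 1)
        # Renormalize by (window size / #valid pixels); zero the output
        # wherever the window contained no valid pixels at all.
        scale = self.window_size / valid_count.clamp(min=1.0)
        out = (out - bias) * scale * (valid_count > 0) + bias
        new_mask = (valid_count > 0).float()      # updated mask for next layer
        return out, new_mask
```

For the padding use case, the same layer can be applied with a mask that is 1 inside the image and 0 over the padded border, so border responses are renormalized rather than extrapolated from assumed values.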