Browsing by Subject "Generative modeling"
Now showing 1 - 2 of 2
Item Open Access
Age and gender normalization in kinship verification (2021-09) Çalıkkasap, Oğuzhan

Kinship verification from facial images using deep learning is an interesting problem that remains unsolved and is attracting growing attention from the research community. However, the most recent kinship verification systems suffer from age- and gender-related facial attributes, which cause problems in kinship verification between subjects of different age and gender. In this study, we propose several methods to reduce the negative effect of age- and gender-related facial attributes in kinship verification and obtain a more robust verification model. The proposed approach exploits the comprehensive modeling capabilities of recent generative adversarial network architectures to model the age and gender of subjects and to reduce, if not entirely remove, their effect on kinship verification. Furthermore, we conduct a thorough analysis of the individual and combined effects of age and gender normalization, performed in both the image space and the latent space of the generative models. Lastly, we investigate the impact of placing additional emphasis on facial identity information during the normalization process. Taking one of the most recent kinship verification models as our baseline, we show that gender normalization reduces the verification performance gap between subject pairs of the same and different gender by up to 6%. Furthermore, joint normalization of age and gender improves kinship verification accuracy by up to 5% and 10% on two different in-the-wild kinship datasets. This thesis therefore proposes generic approaches that improve the reliability and robustness of kinship verification by normalizing the age and gender attributes without changing the core architecture of the employed kinship verification system.

Item Open Access
Denoising diffusion adversarial models for unconditional medical image generation (IEEE - Institute of Electrical and Electronics Engineers, 2023-08-28) Dalmaz, Onat; Sağlam, Baturay; Elmas, Gökberk; Mirza, Muhammad Usama; Çukur, Tolga

Unconditional medical image synthesis is the task of generating realistic and diverse medical images from random noise without any prior information or constraints. Synthesizing realistic medical images can enrich the quality and diversity of medical imaging datasets, which, in turn, enhances the performance and generalization of deep learning models for medical imaging. The prevalent approaches to synthesizing medical images involve generative adversarial networks (GANs) or denoising diffusion probabilistic models (DDPMs). However, GAN models that implicitly learn the image distribution are prone to limited sample fidelity and diversity. On the other hand, diffusion models suffer from slow sampling speed due to the large number of small diffusion steps. In this paper, we propose a novel diffusion-based method for unconditional medical image synthesis, Diff-Med-Synth, that generates realistic and diverse medical images from random noise. Diff-Med-Synth combines the advantages of denoising diffusion probabilistic models and GANs to achieve fast and efficient image sampling. We evaluate our method on two multi-contrast MRI datasets and show that it outperforms state-of-the-art methods in terms of quality, diversity, and fidelity of the synthesized images.
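For the first item above, the abstract describes normalizing age and gender attributes in the latent space of a pretrained GAN before passing faces to an unchanged kinship verifier. The sketch below is only one plausible reading of that idea, assuming a GAN inversion encoder, a generator, learned attribute direction vectors, and a `KinshipVerifier`-style scoring network; all of these names are hypothetical placeholders, not the thesis's actual components.

```python
# Minimal sketch (PyTorch): latent-space age/gender normalization before kinship
# verification. The encoder, generator, attribute directions, and verifier are
# assumed, illustrative components, not the method published in the thesis.
import torch


class LatentAttributeNormalizer(torch.nn.Module):
    def __init__(self, encoder, generator, gender_dir, age_dir):
        super().__init__()
        self.encoder = encoder        # pretrained GAN inversion encoder: image -> latent code
        self.generator = generator    # pretrained GAN generator: latent code -> image
        # unit-norm latent directions assumed to capture gender and age variation
        self.register_buffer("gender_dir", gender_dir / gender_dir.norm())
        self.register_buffer("age_dir", age_dir / age_dir.norm())

    def forward(self, img):
        w = self.encoder(img)                                 # (B, latent_dim)
        # project out the attribute components so all faces share a
        # "neutral" age/gender coordinate in latent space
        for d in (self.gender_dir, self.age_dir):
            w = w - (w @ d).unsqueeze(-1) * d
        return self.generator(w)                              # normalized face image


def verify_kinship(verifier, normalizer, img_a, img_b, threshold=0.5):
    """Score a candidate pair after normalizing age/gender attributes of both faces."""
    with torch.no_grad():
        score = verifier(normalizer(img_a), normalizer(img_b))
    return score > threshold
```

Because normalization happens entirely in the generative model's latent space, the verifier itself is untouched, which matches the abstract's claim of improving robustness "without making changes in the core architecture" of the verification system.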
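For the second item, the abstract states that Diff-Med-Synth combines diffusion models with adversarial training to sample quickly. A common way to realize this idea is to train the per-step denoiser adversarially so that each reverse step can bridge a large noise gap, reducing the step count. The sampler below is a generic sketch of that pattern under assumed components (`denoiser`, a short `alphas_cumprod` noise schedule); it is not the published Diff-Med-Synth implementation.

```python
# Minimal sketch (PyTorch): unconditional sampling with a few large,
# adversarially trained denoising steps. The network and schedule are assumed,
# illustrative placeholders, not the paper's actual model.
import torch


@torch.no_grad()
def sample(denoiser, shape, alphas_cumprod, device="cpu"):
    """Draw unconditional samples with T large reverse steps (T = len(alphas_cumprod))."""
    x_t = torch.randn(shape, device=device)        # start from pure Gaussian noise
    T = len(alphas_cumprod)
    for t in reversed(range(T)):
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0, device=device)
        # the adversarially trained denoiser predicts a clean image x0 directly,
        # which is what allows each reverse step to cover a large noise gap
        t_batch = torch.full((shape[0],), t, device=device)
        x0_pred = denoiser(x_t, t_batch)
        if t > 0:
            # re-noise the prediction down to the previous (less noisy) level
            noise = torch.randn_like(x_t)
            x_t = a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * noise
        else:
            x_t = x0_pred
    return x_t
```

With only a handful of schedule entries (e.g. 4-8 instead of the ~1000 steps of a standard DDPM), this style of sampler illustrates how an adversarial denoiser can trade the many small steps of a conventional diffusion model for a few large ones, which is the speed advantage the abstract emphasizes.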