Face manipulation detection
Abstract
Advancements in deep learning have facilitated the creation of highly realistic counterfeit human faces, ushering in the era of deepfakes. The ability to generate such convincingly authentic fake content raises concerns about the harm it could inflict on individuals and societies alike. Current studies predominantly adopt binary approaches that differentiate between real and fake images or videos. However, this approach can be time-consuming, requiring a multitude of diverse fake examples for training. Furthermore, deepfake content generated by unseen models may elude detection, making it challenging to catch all deepfakes. We propose two potential solutions. The first is a one-class classification method, a purist approach that trains solely on real data and tests on both real and fake data. The second is a cross-manipulation technique, a non-purist approach in which the model is evaluated on manipulation types that were unseen during training. Detection efficacy is further improved by combining different models: we merge learning-based systems under an ℓp-norm constraint with an adjustable p, yielding both sparse and non-sparse solutions that enhance the discriminative information shared among base learners in the ensemble. Contrary to the conventional subject-independent learning methods employed in deepfake detection, we also propose a subject-dependent learning approach. Our preliminary findings suggest that this multifaceted approach can effectively detect deepfakes, achieving strong results on the FaceForensics++ dataset as well as on generic one-class classification benchmarks, including the UCI and KEEL datasets, in both purist and non-purist settings.
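To make the purist setting concrete, the sketch below trains a standard one-class classifier (scikit-learn's `OneClassSVM`, used here only as a stand-in, not the thesis's actual model) on synthetic "real" feature vectors alone, then scores both real and fake samples at test time; the feature distributions are hypothetical placeholders.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Hypothetical stand-in features: "real" faces cluster near the origin,
# "fake" faces drift away; a real pipeline would use learned face features.
real = rng.normal(0.0, 1.0, (200, 8))
fake = rng.normal(4.0, 1.0, (50, 8))

# Purist training: the classifier sees ONLY real data.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(real)

# Prediction: +1 = inlier (real), -1 = outlier (flagged as fake).
print("real accepted:", (clf.predict(real) == 1).mean())
print("fake rejected:", (clf.predict(fake) == -1).mean())
```

The appeal of this formulation is that no fake examples (and hence no commitment to any particular generation method) are needed at training time, which is what lets it generalize to manipulations from unseen generators.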
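The sparse-versus-non-sparse behavior of an adjustable ℓp-norm constraint can be illustrated with a toy weighting rule. The closed form below (weights proportional to a quality score raised to 1/(p-1), rescaled to unit p-norm) is an illustrative assumption, not the combination rule derived in the thesis: as p approaches 1 the weight concentrates on the strongest base learner (sparse), while large p spreads weight nearly uniformly (non-sparse).

```python
import numpy as np

def lp_weights(scores, p):
    """Toy lp-norm weighting for ensemble base learners.

    w_m is proportional to scores[m] ** (1 / (p - 1)), then rescaled so
    that the p-norm of the weight vector equals 1. Small p -> sparse
    weights, large p -> near-uniform weights. Requires p > 1.
    """
    w = np.asarray(scores, dtype=float) ** (1.0 / (p - 1.0))
    return w / np.linalg.norm(w, ord=p)

# Hypothetical validation accuracies of three base learners.
scores = [0.9, 0.8, 0.6]
print("p near 1 (sparse):     ", lp_weights(scores, 1.05))
print("p large (non-sparse):  ", lp_weights(scores, 8.0))
```

Tuning p therefore acts as a knob between selecting a few strong base learners and averaging over all of them, which is the trade-off the ensemble formulation exploits.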