Browsing by Subject "Deepfake detection"
Now showing 1 - 2 of 2
Item (Open Access): Deepfake detection through motion magnification inspired feature manipulation (2022-09)
Author: Mirzayev, Aydamir

Abstract: Synthetically generated media content poses a significant threat to information security in the online domain. Manipulated videos and images of celebrities, politicians, and ordinary citizens, if aimed at misrepresentation or defamation, can cause significant damage to one's reputation. Early detection of such content is crucial to curbing the further spread of questionable information in a timely manner. In past years, a significant number of deepfake detection frameworks have been proposed that utilize motion magnification as a preprocessing step aimed at revealing transitional inconsistencies relevant to the prediction outcome. However, this approach is sub-optimal, since the commonly used motion magnification methods are optimized for a limited set of controlled motions and display significant visual artifacts when used outside their domain. To this end, rather than applying motion magnification as a separate preprocessing step, we propose to test trainable motion magnification-inspired feature manipulation units as an addition to a convolutional-LSTM classification network. With this approach, we aim to take a first step toward understanding the use of magnification-like architectures in video classification, rather than aiming at full integration. We test our method on the Celeb-DF dataset, which is composed of more than five thousand synthetic videos generated using the DeepFakes generation method. We treat the manipulation unit as another network layer and test the performance of the network both with and without it. To ensure the consistency of our results, we perform multiple experiments with the same configurations and report the average accuracy.
In our experiments we observe an average 3% increase in accuracy when the feature manipulation unit is incorporated into the network.

Item (Open Access): Face manipulation detection (2023-09)
Author: Nourmohammadi, Sepehr

Abstract: Advancements in deep learning have facilitated the creation of highly realistic counterfeit human faces, ushering in the era of deepfakes. The ability to generate such convincingly authentic fake content raises concerns because of the potential harm it could inflict on individuals and societies alike. Current studies predominantly focus on binary approaches that differentiate between real and fake images or videos. However, this approach can be time-consuming, requiring a multitude of diverse fake examples for training. Furthermore, unique deepfake content generated using different models may elude detection, making it challenging to apprehend all deepfakes. We propose two potential solutions. First, we suggest a one-class classification method, a purist approach that trains solely on real data and tests on both real and fake data. Second, we suggest a cross-manipulation technique as a non-purist approach, in which the model is evaluated on manipulated samples whose manipulation type was unseen during training. Efficacy in this process can be achieved by combining different models, which enhances the detection of deepfakes. This is done by merging learning-based systems under an ℓp-norm constraint with an adjustable p-norm rule, thereby providing both sparse and non-sparse solutions that enhance the discriminatory information between base learners in ensemble learning. Contrary to the conventional subject-independent learning methods employed in deepfake detection, we propose a subject-dependent learning approach.
Our preliminary findings suggest that this multifaceted approach can effectively detect deepfakes, demonstrating impressive results on the FaceForensics++ dataset as well as on generic one-class classification datasets, including the UCI and KEEL datasets, under both the purist and non-purist approaches.
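The purist one-class setup described in the second abstract can be illustrated with a minimal sketch: fit a model of "real" face features only, then flag test samples that fall far from that model as fake. Everything below is a hypothetical simplification (synthetic Gaussian features, a Mahalanobis-distance score, a 95th-percentile threshold), not the thesis's actual pipeline or features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in features: "real" faces cluster tightly,
# "fake" faces drift away from that cluster.
real_train = rng.normal(0.0, 1.0, size=(500, 8))
real_test = rng.normal(0.0, 1.0, size=(100, 8))
fake_test = rng.normal(4.0, 1.0, size=(100, 8))

# Purist one-class model: fit only on real data.
mu = real_train.mean(axis=0)
cov = np.cov(real_train, rowvar=False) + 1e-6 * np.eye(8)
cov_inv = np.linalg.inv(cov)

def mahalanobis(x):
    """Distance of each row of x from the fitted 'real' distribution."""
    d = x - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))

# Threshold chosen so ~95% of real training samples pass as "real".
threshold = np.quantile(mahalanobis(real_train), 0.95)

real_flagged = mahalanobis(real_test) > threshold   # false alarms
fake_flagged = mahalanobis(fake_test) > threshold   # detections

print(f"false-alarm rate: {real_flagged.mean():.2f}")
print(f"detection rate:   {fake_flagged.mean():.2f}")
```

Note that no fake example is seen at training time, which is exactly what makes the purist approach attractive when new manipulation methods keep appearing.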
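The magnification-inspired feature manipulation unit from the first abstract can likewise be caricatured as a layer that amplifies frame-to-frame feature differences before classification. The sketch below is a fixed, hand-written simplification under that assumption; in the thesis the unit is trainable and learned jointly with a convolutional-LSTM network, and the gain `alpha` here is an invented parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

def magnify_features(feats, alpha=5.0):
    """Amplify frame-to-frame feature differences, in the spirit of
    motion magnification (hypothetical, non-trainable simplification)."""
    diffs = np.diff(feats, axis=0)   # temporal differences between frames
    out = feats.copy()
    out[1:] += alpha * diffs         # boost the subtle motion signal
    return out

# Toy sequence: 10 frames x 16 features with a tiny temporal wobble.
base = rng.normal(size=16)
wobble = 0.01 * np.sin(np.arange(10))[:, None]
feats = base[None, :] + wobble

magnified = magnify_features(feats, alpha=5.0)

# The subtle wobble now dominates the temporal variance.
orig_var = feats.var(axis=0).mean()
mag_var = magnified.var(axis=0).mean()
print(mag_var / orig_var)
```

A downstream classifier then sees an exaggerated version of whatever temporal inconsistency the fake-generation process left behind, which is the intuition both abstracts attribute to magnification-based preprocessing.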