Deepfake detection through motion magnification inspired feature manipulation

buir.advisor: Dibeklioğlu, Hamdi
dc.contributor.author: Mirzayev, Aydamir
dc.date.accessioned: 2022-09-19T13:15:38Z
dc.date.available: 2022-09-19T13:15:38Z
dc.date.copyright: 2022-09
dc.date.issued: 2022-09
dc.date.submitted: 2022-09-19
dc.description: Cataloged from PDF version of article.
dc.description: Thesis (Master's): Bilkent University, Department of Computer Engineering, İhsan Doğramacı Bilkent University, 2022.
dc.description: Includes bibliographical references (leaves 45-52).
dc.description.abstract: Synthetically generated media content poses a significant threat to information security in the online domain. Manipulated videos and images of celebrities, politicians, and ordinary citizens, if aimed at misrepresentation or defamation, can cause significant damage to one's reputation. Early detection of such content is crucial for the timely mitigation of the further spread of questionable information. In recent years, a significant number of deepfake detection frameworks have proposed utilizing motion magnification as a preprocessing step aimed at revealing transitional inconsistencies relevant to the prediction outcome. However, such an approach is sub-optimal, since the commonly utilized motion manipulation methods are optimized for a limited set of controlled motions and produce significant visual artifacts when used outside of their domain. To this end, rather than applying motion magnification as a separate processing step, we propose to test trainable, motion-magnification-inspired feature manipulation units as an addition to a convolutional-LSTM classification network. With this approach, we aim to take a first step toward understanding the use of magnification-like architectures in video classification rather than aiming at full integration. We test our approach on the Celeb-DF dataset, which is composed of more than five thousand synthetic videos generated using the DeepFake generation method. We treat the manipulation unit as another network layer and test the performance of the network both with and without it. To ensure the consistency of our results, we perform multiple experiments with the same configuration and report the average accuracy. In our experiments, we observe an average 3% increase in accuracy when the feature manipulation unit is incorporated into the network.
dc.description.statementofresponsibility: by Aydamir Mirzayev
dc.embargo.release: 2023-03-19
dc.format.extent: ix, 52 leaves : illustrations ; 30 cm.
dc.identifier.itemid: B161317
dc.identifier.uri: http://hdl.handle.net/11693/110538
dc.language.iso: English
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Deepfake detection
dc.subject: Computer vision
dc.subject: Facial expression
dc.subject: Motion magnification
dc.subject: Video classification
dc.subject: Deep learning
dc.subject: Spatio-temporal analysis
dc.title: Deepfake detection through motion magnification inspired feature manipulation
dc.title.alternative: Hareket büyütmesinden esinlenen öznitelik manipülasyonu ile derin sahtelerin tespiti
dc.type: Thesis
thesis.degree.discipline: Computer Engineering
thesis.degree.grantor: Bilkent University
thesis.degree.level: Master's
thesis.degree.name: MS (Master of Science)
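
The abstract above describes adding a trainable, motion-magnification-inspired feature manipulation unit as an extra layer in a convolutional-LSTM deepfake classifier and comparing the network with and without it. The sketch below is a minimal illustration of that idea under stated assumptions, not the thesis's actual implementation: the hypothetical FeatureManipulationUnit amplifies frame-to-frame feature differences (in the spirit of learning-based motion magnification) before the LSTM, and all module names, layer sizes, and the amplification factor alpha are chosen here for illustration only.

```python
import torch
import torch.nn as nn

class FeatureManipulationUnit(nn.Module):
    """Illustrative magnification-inspired layer (assumed design, not from the thesis):
    amplifies frame-to-frame differences of convolutional features."""
    def __init__(self, channels, alpha=2.0):
        super().__init__()
        self.alpha = alpha  # amplification factor (assumption)
        # small learnable filter applied to the difference signal
        self.mix = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: (batch, time, channels, height, width)
        diff = feats[:, 1:] - feats[:, :-1]                       # temporal differences
        diff = torch.cat([torch.zeros_like(feats[:, :1]), diff], dim=1)
        b, t, c, h, w = diff.shape
        amplified = self.mix(diff.reshape(b * t, c, h, w)).reshape(b, t, c, h, w)
        return feats + self.alpha * amplified                     # magnified features

class ConvLSTMClassifier(nn.Module):
    """Frame-wise CNN encoder -> optional manipulation unit -> LSTM -> real/fake logits."""
    def __init__(self, use_manipulation=True):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        # the unit is treated as just another layer, so it can be toggled on or off
        self.manip = FeatureManipulationUnit(64) if use_manipulation else None
        self.lstm = nn.LSTM(64 * 8 * 8, 256, batch_first=True)
        self.head = nn.Linear(256, 2)                             # real vs. fake

    def forward(self, clip):
        # clip: (batch, time, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.encoder(clip.flatten(0, 1)).reshape(b, t, 64, 8, 8)
        if self.manip is not None:
            feats = self.manip(feats)
        out, _ = self.lstm(feats.flatten(2))                      # (batch, time, 256)
        return self.head(out[:, -1])                              # logits from last step

# Toy forward pass on a random 8-frame clip
logits = ConvLSTMClassifier()(torch.randn(2, 8, 3, 128, 128))
```

Toggling use_manipulation mirrors the with/without comparison reported in the abstract; the encoder, LSTM size, and input resolution here are placeholders.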

Files

Original bundle

Name: B161317.pdf
Size: 10.87 MB
Format: Adobe Portable Document Format
Description: Full printable version

License bundle

Name: license.txt
Size: 1.69 KB
Format: Item-specific license agreed upon to submission