Comparing the performance of humans and 3D-convolutional neural networks in material perception using dynamic cues

buir.advisorBoyacı, Hüseyin
dc.contributor.authorMehrzadfar, Hossein
dc.date.accessioned2019-08-20T12:54:08Z
dc.date.available2019-08-20T12:54:08Z
dc.date.copyright2019-07
dc.date.issued2019-07
dc.date.submitted2019-08-19
dc.descriptionCataloged from PDF version of thesis.en_US
dc.descriptionThesis (M.S.): İhsan Doğramacı Bilkent University, Department of Neuroscience, 2019.en_US
dc.descriptionIncludes bibliographical references (leaves 66-70).en_US
dc.description.abstractThere are numerous studies of material perception in humans, and a variety of deep neural network models trained to perform visual tasks such as object recognition. To our knowledge, however, the intersection of human material perception and deep neural network models has not been investigated. In particular, the ability of deep neural networks to categorize materials, and the comparison of human performance with that of deep convolutional neural networks, have received little attention. Here we built, trained, and tested a 3D-convolutional neural network (3D-CNN) model that categorizes animations of simulated materials. Comparing the network's performance with that of human observers, we conclude that conventional training does not necessarily produce the network state that best matches human performance: in the material categorization task, the similarity between human and network performance rises to a maximum and then decreases as training continues. Furthermore, by training the 3D-CNN on regular, temporally consistent animations as well as on temporally inconsistent animations and comparing the results, we found that the model can rely on spatial information alone to categorize the material animations; consistent temporal motion information is not necessary for the network to perform the task.en_US
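As a concrete illustration of the method the abstract describes, the following is a minimal sketch of a 3D-CNN video classifier together with the frame-shuffling manipulation that produces temporally inconsistent animations. It is written in PyTorch; the framework, the layer configuration, and the number of material classes are all assumptions made for illustration, not details taken from the thesis.

import torch
import torch.nn as nn

class Material3DCNN(nn.Module):
    """Toy 3D-CNN: Conv3d kernels span (time, height, width), so the
    filters can pick up motion as well as spatial texture cues."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        # x: (batch, channels, frames, height, width)
        return self.classifier(self.features(x).flatten(1))

def shuffle_frames(clip):
    """Randomly permute the frame order of one clip of shape
    (channels, frames, height, width), destroying temporal consistency
    while leaving each frame's spatial content intact."""
    perm = torch.randperm(clip.shape[1])
    return clip[:, perm]

# Usage: one 16-frame RGB clip at 64x64 resolution; six material
# categories is a hypothetical count.
clip = torch.randn(1, 3, 16, 64, 64)
model = Material3DCNN(num_classes=6)
logits_consistent = model(clip)
logits_shuffled = model(shuffle_frames(clip[0]).unsqueeze(0))

Comparing a network trained on shuffled clips against one trained on consistent clips is the kind of control that isolates spatial from temporal cues: if categorization accuracy survives shuffling, spatial information alone suffices.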
dc.description.statementofresponsibilityby Hossein Mehrzadfaren_US
dc.embargo.release2020-02-15
dc.format.extentxii, 70 leaves : illustrations, charts ; 30 cm.en_US
dc.identifier.itemidB107954
dc.identifier.urihttp://hdl.handle.net/11693/52353
dc.language.isoEnglishen_US
dc.rightsinfo:eu-repo/semantics/openAccessen_US
dc.subjectDeep neural networksen_US
dc.subject3D-convolutional neural networksen_US
dc.subjectMaterial perceptionen_US
dc.subjectMaterial animationsen_US
dc.subjectMotion perceptionen_US
dc.titleComparing the performance of humans and 3D-convolutional neural networks in material perception using dynamic cuesen_US
dc.title.alternativeComparing the performance of humans and 3D-convolutional neural networks in material perception using dynamic cuesen_US
dc.typeThesisen_US
thesis.degree.disciplineNeuroscience
thesis.degree.grantorBilkent University
thesis.degree.levelMaster's
thesis.degree.nameMS (Master of Science)

Files

Original bundle
Name: Master's Thesis - Hossein Mehrzadfar (Referans No 10278084).pdf
Size: 6.18 MB
Format: Adobe Portable Document Format
Description: Full printable version

License bundle
Name: license.txt
Size: 1.71 KB
Description: Item-specific license agreed upon to submission