Show simple item record

dc.contributor.advisor: Çetin, Ahmet Enis
dc.contributor.author: Badawi, Diaa Hisham Jamil
dc.date.accessioned: 2018-01-10T13:32:26Z
dc.date.available: 2018-01-10T13:32:26Z
dc.date.copyright: 2018-01
dc.date.issued: 2018-01
dc.date.submitted: 2018-02-10
dc.identifier.uri: http://hdl.handle.net/11693/35726
dc.description: Cataloged from PDF version of article. [en_US]
dc.description: Thesis (M.S.): İhsan Doğramacı Bilkent University, Department of Electrical and Electronics Engineering, 2018. [en_US]
dc.description: Includes bibliographical references (leaves 67-75). [en_US]
dc.description.abstract: Dot product-based operations in the feedforward passes of neural networks are replaced with an ℓ₁-norm-inducing operator, which is itself multiplication-free. The resulting network, called AddNet, retains attributes of ℓ₁-norm-based feature extraction schemes, such as resilience against outliers. Furthermore, feedforward passes can be realized with fewer multiplication operations, which implies energy efficiency. The ℓ₁-norm-inducing operator is differentiable with respect to its operands almost everywhere, so it can be used in neural networks trained with the standard backpropagation algorithm. AddNet requires a scaling (multiplicative) bias so that cost gradients do not explode during training. We present different choices for the multiplicative bias: trainable, directly dependent on the associated weights, or fixed. We also present a sparse variant of the operator, with which partial or full binarization of the weights is achievable. We ran our experiments on the MNIST and CIFAR-10 datasets. On MNIST, AddNet achieved results only 0.1% less accurate than an ordinary CNN. Furthermore, a trainable multiplicative bias helps the network converge quickly. In comparison with other binary-weight neural networks, AddNet achieves better results even with full or almost full weight-magnitude pruning, while keeping the sign information after training. On CIFAR-10, AddNet achieves accuracy 5% lower than an ordinary CNN. Nevertheless, AddNet is more robust against impulsive data corruption and outperforms the corresponding ordinary CNN in the presence of impulsive noise, even at low noise levels. [en_US]
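The ℓ₁-norm-inducing, multiplication-free operator mentioned in the abstract can be sketched as below. This is a hypothetical illustration assuming the common form sign(a·b)(|a| + |b|) from the related multiplication-free-operator literature; the exact operator, the summation structure, and the scaling bias used in the thesis may differ.

```python
import numpy as np

def mf_op(a, b):
    """Assumed multiplication-free operator: sign(a*b) * (|a| + |b|).

    The sign product needs only the sign bits of a and b, and the rest
    is an addition of magnitudes, so no full multiplication is required
    in hardware.
    """
    return np.sign(a) * np.sign(b) * (np.abs(a) + np.abs(b))

def mf_dot(w, x):
    """Dot-product replacement: elementwise mf_op followed by a sum."""
    return np.sum(mf_op(w, x))

# The operator induces the l1 norm: mf_op(x, x) = 2|x|,
# so mf_dot(x, x) = 2 * ||x||_1.
x = np.array([1.0, -2.0, 3.0])
print(mf_dot(x, x))  # 2 * (1 + 2 + 3) = 12.0
```

Because the result grows with |a| + |b| rather than |a|·|b|, a learned multiplicative bias (as the abstract notes) rescales activations so gradients stay well-behaved during backpropagation.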
dc.description.statementofresponsibility: by Diaa Hisham Jamil Badawi. [en_US]
dc.format.extent: xiv, 78 leaves : charts (some color) ; 30 cm [en_US]
dc.language.iso: English [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: Deep Learning [en_US]
dc.subject: Convolutional Neural Network [en_US]
dc.subject: ℓ₁ Norm [en_US]
dc.subject: Energy Efficiency [en_US]
dc.subject: Binary Weights [en_US]
dc.subject: Impulsive Noise [en_US]
dc.title: Convolutional neural networks based on non-Euclidean operators [en_US]
dc.title.alternative: Öklide mensup olmayan operatörler bazında konvolüsyonel sinir ağları [en_US]
dc.type: Thesis [en_US]
dc.department: Department of Electrical and Electronics Engineering [en_US]
dc.publisher: Bilkent University [en_US]
dc.description.degree: M.S. [en_US]
dc.identifier.itemid: B157365
dc.embargo.release: 2020-01-08

