Show simple item record

dc.contributor.advisor  Çetin, A. Enis
dc.contributor.author  Mallah, Maen M. A.
dc.date.accessioned  2018-01-09T10:53:04Z
dc.date.available  2018-01-09T10:53:04Z
dc.date.copyright  2018-01
dc.date.issued  2018-01
dc.date.submitted  2018-01-08
dc.identifier.uri  http://hdl.handle.net/11693/35722
dc.description  Cataloged from PDF version of article.  en_US
dc.description  Thesis (M.S.): İhsan Doğramacı Bilkent University, Department of Electrical and Electronics Engineering, 2018.  en_US
dc.description  Includes bibliographical references (leaves 64-69).  en_US
dc.description.abstract  Artificial Neural Networks, commonly known as Neural Networks (NNs), have become popular in the last decade for their achievable accuracies, owing to their ability to generalize and respond to unexpected patterns. In general, NNs are computationally expensive. This thesis presents the implementation of a class of NNs that does not require multiplication operations. We describe an implementation of a Multiplication-Free Neural Network (MFNN), in which multiplication operations are replaced by additions and sign operations. This thesis focuses on the FPGA and ASIC implementation of the MFNN using VHDL. The proposed hardware designs of both NNs and MFNNs are described and analyzed in detail. We compare three different hardware designs of the neuron (serial, parallel, and hybrid) based on the latency/hardware-resource trade-off. We show that one-hidden-layer MFNNs achieve the same accuracy as their NN counterparts using the same number of neurons. The hardware implementation shows that MFNNs are more energy-efficient than ordinary NNs, because multiplication is more computationally demanding than addition and sign operations. MFNNs therefore save a significant amount of energy without degrading accuracy. Fixed-point quantization is discussed, along with the number of bits required for both NNs and MFNNs to achieve floating-point recognition performance.  en_US
dc.description.statementofresponsibility  by Maen M. A. Mallah.  en_US
dc.format.extent  xii, 76 leaves : charts (some color) ; 30 cm  en_US
dc.language.iso  en_US  en_US
dc.rights  info:eu-repo/semantics/openAccess  en_US
dc.subject  Neural Networks  en_US
dc.subject  Machine Learning  en_US
dc.subject  Classification  en_US
dc.subject  VHDL  en_US
dc.subject  Energy  en_US
dc.subject  Fixed-point  en_US
dc.subject  Floating-point  en_US
dc.title  Multiplication free neural networks  en_US
dc.title.alternative  Çarpma işlemsiz sinir ağları  en_US
dc.type  Thesis  en_US
dc.department  Department of Electrical and Electronics Engineering  en_US
dc.publisher  Bilkent University  en_US
dc.description.degree  M.S.  en_US
dc.identifier.itemid  B157344
dc.embargo.release  2020-01-05
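The abstract above describes replacing each multiplication in a neuron with additions and sign operations. As a minimal illustration, the sketch below uses the sign-based operator a ⊕ b = sign(a·b)(|a| + |b|) that appears in the multiplication-free network literature; the function name and the exact operator are assumptions for illustration, not taken from the thesis itself.

```python
import numpy as np

def mf_dot(w: np.ndarray, x: np.ndarray) -> float:
    """Multiplication-free analogue of the dot product w . x (illustrative).

    Each product term w_i * x_i is replaced by sign(w_i * x_i) * (|w_i| + |x_i|),
    so in hardware only sign handling (an XOR of sign bits) and additions
    remain; no multiplier circuit is needed.
    """
    signs = np.sign(w) * np.sign(x)  # sign(w_i * x_i); values in {-1, 0, 1}
    return float(np.sum(signs * (np.abs(w) + np.abs(x))))

w = np.array([1.0, -2.0])
x = np.array([3.0, 4.0])
print(mf_dot(w, x))   # sign-based sum: (1+3) - (2+4) = -2.0
print(np.dot(w, x))   # ordinary dot product, for comparison: -5.0
```

The operator keeps the sign structure of the ordinary product while trading multiplications for additions, which is the source of the energy savings the abstract reports.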

