Mallah, Maen M. A.
2018-01-09
2018-01-09
2018-01
2018-01
2018-01-08
http://hdl.handle.net/11693/35722
Cataloged from PDF version of article.
Thesis (M.S.): Bilkent University, Department of Electrical and Electronics Engineering, İhsan Doğramacı Bilkent University, 2018.
Includes bibliographical references (leaves 64-69).

Abstract: Artificial Neural Networks, commonly known as Neural Networks (NNs), have become popular in the last decade for the accuracies they achieve, owing to their ability to generalize and respond to unexpected patterns. In general, NNs are computationally expensive. This thesis presents the implementation of a class of NNs that does not require multiplication operations. We describe an implementation of a Multiplication Free Neural Network (MFNN), in which multiplication operations are replaced by additions and sign operations. The thesis focuses on the FPGA and ASIC implementation of the MFNN using VHDL. The proposed hardware designs of both NNs and MFNNs are described and analyzed in detail. We compare three different hardware designs of the neuron (serial, parallel, and hybrid) based on the latency/hardware-resource trade-off. We show that one-hidden-layer MFNNs achieve the same accuracy as their counterpart NNs using the same number of neurons. The hardware implementation shows that MFNNs are more energy efficient than ordinary NNs, because multiplication is more computationally demanding than addition and sign operations. MFNNs therefore save a significant amount of energy without degrading accuracy. Fixed-point quantization is discussed, along with the number of bits required for both NNs and MFNNs to achieve floating-point recognition performance.

xii, 76 leaves : charts (some color) ; 30 cm
English
info:eu-repo/semantics/openAccess
Keywords: Neural Networks; Machine Learning; Classification; VHDL; Energy; Fixed-point; Floating-point
Title: Multiplication free neural networks
Title (Turkish): Çarpma işlemsiz sinir ağları
Thesis
B157344
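The abstract states only that the MFNN replaces multiplications with additions and sign operations. As an illustration, the following Python sketch assumes the multiplication-free operator commonly used in the related literature, x ⊕ y = sign(x)·y + sign(y)·x = sign(x·y)(|x| + |y|); this operator form, the function names mf_op and mf_neuron, and the example values are assumptions for illustration and are not taken from the thesis, whose actual hardware implementation is in VHDL.

import numpy as np

def mf_op(x, y):
    # Assumed multiplication-free operator: sign(x)*y + sign(y)*x,
    # which equals sign(x*y) * (|x| + |y|). In hardware, "multiplying"
    # by a sign is just a conditional negation, so no multiplier is needed.
    return np.sign(x) * y + np.sign(y) * x

def mf_neuron(x, w, b):
    # Hypothetical multiplication-free neuron: the dot product w.x of an
    # ordinary neuron is replaced by an element-wise mf_op followed by a sum
    # and a bias addition, i.e. only additions and sign operations.
    return np.sum(mf_op(w, x)) + b

# Small usage example with made-up inputs, weights, and bias.
x = np.array([0.5, -1.0, 2.0])
w = np.array([1.0, 0.25, -0.5])
print(mf_neuron(x, w, b=0.1))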