
Convolutional neural networks based on non-Euclidean operators

Embargo Lift Date: 2020-01-08
View / Download: 915.8 KB
Author(s): Badawi, Diaa Hisham Jamil
Advisor: Çetin, Ahmet Enis
Date: 2018-01
Publisher: Bilkent University
Language: English
Type: Thesis
Item Usage Stats: 137 views, 97 downloads
      Abstract
Dot-product-based operations in neural network feedforward passes are replaced with an ℓ₁-norm-inducing operator, which is itself multiplication-free. The resulting network, called AddNet, retains attributes of ℓ₁-norm-based feature extraction schemes, such as resilience against outliers. Furthermore, its feedforward passes can be realized with fewer multiplication operations, which implies energy efficiency. The ℓ₁-norm-inducing operator is differentiable with respect to its operands almost everywhere, so it can be used in neural networks trained with the standard backpropagation algorithm. AddNet requires a scaling (multiplicative) bias so that cost gradients do not explode during training. We present different choices for this multiplicative bias: trainable, directly dependent on the associated weights, or fixed. We also present a sparse variant of the operator, with which partial or full binarization of the weights is achievable. We ran our experiments on the MNIST and CIFAR-10 datasets. On MNIST, AddNet achieves results that are 0.1% less accurate than an ordinary CNN, and the trainable multiplicative bias helps the network converge quickly. In comparison with other binary-weight neural networks, AddNet achieves better results even with full or almost-full weight-magnitude pruning, keeping only the sign information after training. On CIFAR-10, AddNet achieves accuracy 5% below that of an ordinary CNN. Nevertheless, AddNet is more robust against data corruption by impulsive noise and outperforms the corresponding ordinary CNN in the presence of impulsive noise, even at low noise levels.
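The abstract does not spell out the operator itself. In the multiplication-free literature associated with this line of work, the dot product Σᵢ xᵢwᵢ is commonly replaced by Σᵢ sign(xᵢwᵢ)(|xᵢ| + |wᵢ|), which induces the ℓ₁ norm because applying it to a vector with itself gives 2‖x‖₁. The sketch below illustrates a neuron built on that assumed form; the operator definition, the function names, and the scaling bias alpha are assumptions for illustration, not the thesis's verified implementation.

```python
import numpy as np

def mf_dot(x, w):
    """Multiplication-free surrogate for the dot product (assumed form).

    Each elementwise product x_i * w_i is replaced by
    sign(x_i) * sign(w_i) * (|x_i| + |w_i|), which needs only sign
    checks and additions. The operator induces the l1 norm:
    mf_dot(x, x) = 2 * ||x||_1. It is differentiable in its operands
    almost everywhere (not at zero), so backpropagation still applies.
    """
    return np.sum(np.sign(x) * np.sign(w) * (np.abs(x) + np.abs(w)))

def addnet_neuron(x, w, alpha, b):
    """Hypothetical AddNet-style neuron.

    alpha is the multiplicative (scaling) bias the abstract mentions;
    it may be trainable, tied to the weights, or fixed, and it keeps
    activations and cost gradients in a workable range during training.
    """
    return alpha * mf_dot(x, w) + b

# l1-inducing property: mf_dot(x, x) equals twice the l1 norm of x.
x = np.array([0.5, -1.0, 2.0])
assert np.isclose(mf_dot(x, x), 2 * np.linalg.norm(x, ord=1))

# Each term grows with |x_i| + |w_i| rather than |x_i| * |w_i|, so no
# multiplier hardware is needed and a large weight cannot amplify an
# outlying input multiplicatively.
w = np.array([0.2, 0.3, 0.1])
print(addnet_neuron(x, w, alpha=0.5, b=0.0))
```

Note that under this assumed form each term depends on a weight only through its sign plus an additive magnitude, which is consistent with the abstract's observation that near-total weight-magnitude pruning, keeping only sign information, remains workable.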
Keywords: Deep Learning; Convolutional Neural Network; ℓ₁ Norm; Energy Efficiency; Binary Weights; Impulsive Noise
Permalink: http://hdl.handle.net/11693/35726
Collections: Dept. of Electrical and Electronics Engineering - Master's degree