Browsing by Author "Afrasiyabi, A."
Item (Open Access): An energy efficient additive neural network (IEEE, 2017)
Afrasiyabi, A.; Nasir, B.; Yıldız, O.; Yarman-Vural, F. T.; Çetin, A. Enis
In this paper, we propose a new energy-efficient neural network with the universal approximation property over the space of Lebesgue integrable functions. This network, called the additive neural network, is well suited to mobile computing. The neural structure is based on a novel vector product definition, called the ef-operator, that permits a multiplier-free implementation. In the ef-operation, the 'product' of two real numbers is defined as the sum of their absolute values, with the sign determined by the sign of the product of the numbers. This 'product' is used to construct a vector product in n-dimensional Euclidean space, and that vector product induces the lasso (l1) norm. The proposed additive neural network successfully solves the XOR problem, and experiments on the MNIST dataset show that its classification performance is very similar to that of the corresponding multi-layer perceptron.

Item (Open Access): Non-Euclidean vector product for neural networks (IEEE, 2018-04)
Afrasiyabi, A.; Badawi, Diaa; Nasır, B.; Yıldız, O.; Yarman-Vural, F. T.; Çetin, A. Enis
We present a non-Euclidean vector product for artificial neural networks. The vector product operator does not require any multiplications while still providing correlation information between two vectors, whereas ordinary neurons require the inner product of two vectors. We propose a class of neural networks with the universal approximation property over the space of Lebesgue integrable functions, based on the proposed non-Euclidean vector product. In this new network, the 'product' of two real numbers is defined as the sum of their absolute values, with the sign determined by the sign of the product of the numbers. This 'product' is used to construct a vector product in R^N, and the vector product induces the l1 norm. The additive neural network successfully solves the XOR problem. Experiments on the MNIST and CIFAR datasets show that the classification performance of the proposed additive neural network is comparable to that of the corresponding multi-layer perceptron and convolutional neural networks.
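Both abstracts describe the same multiplier-free 'product'. The sketch below is only an illustration of that operation as stated in the abstracts, not the authors' reference implementation; the names ef_product and ef_dot, and the use of NumPy, are our own assumptions. It also checks the l1-norm property the abstracts mention, namely that pairing a vector with itself yields twice its l1 norm.

```python
import numpy as np

def ef_product(a, b):
    """Scalar ef 'product': sum of absolute values, signed by sign(a * b).

    In hardware the sign can be obtained from the operands' sign bits
    (an XOR), so no multiplier is needed; the arithmetic below uses
    ordinary Python operators purely for clarity.
    """
    sign = np.sign(a) * np.sign(b)
    return sign * (abs(a) + abs(b))

def ef_dot(x, y):
    """Vector ef 'product' over R^N: sum of element-wise ef products."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    signs = np.sign(x) * np.sign(y)  # sign of each element-wise product
    return float(np.sum(signs * (np.abs(x) + np.abs(y))))

# l1-norm property: ef_dot(x, x) = 2 * ||x||_1 (for nonzero entries).
x = np.array([1.0, -2.0, 3.0])
assert ef_dot(x, x) == 2 * np.abs(x).sum()

# Correlation-like behavior: aligned vectors give a large positive value,
# opposed vectors a large negative one.
print(ef_dot([1.0, -2.0], [0.5, -1.0]))   # positive
print(ef_dot([1.0, -2.0], [-0.5, 1.0]))   # negative
```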