Understanding how orthogonality of parameters improves quantization of neural networks
Date
2022-05-10
Source Title
IEEE Transactions on Neural Networks and Learning Systems
Print ISSN
2162-237X
Electronic ISSN
2162-2388
Publisher
IEEE
Pages
1 - 10
Language
English
Type
Article
Abstract
We analyze why the orthogonality penalty improves quantization in deep neural networks. Using results from perturbation theory as well as extensive experiments with Resnet50, Resnet101, and VGG19 models, we show mathematically and experimentally that the improved quantization accuracy resulting from the orthogonality constraint stems primarily from reduced condition numbers (the ratio of the largest to the smallest singular value of a weight matrix), rather than from reduced spectral norms, in contrast to the explanations in previous literature. We also show that the orthogonality penalty improves quantization even in the presence of a state-of-the-art quantized retraining method. Our results show that, when the orthogonality penalty is used with quantized retraining, the ImageNet Top-5 accuracy loss from 4- to 8-bit quantization is reduced by up to 7% for Resnet50 and up to 10% for Resnet101, compared to quantized retraining with no orthogonality penalty.
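To make the two quantities discussed in the abstract concrete, below is a minimal sketch, assuming PyTorch, of (a) the condition number of a (flattened) weight matrix and (b) a soft orthogonality penalty of the common form ||W Wᵀ − I||_F² summed over the weight matrices of a model. The exact penalty and weighting used in the paper may differ; `lambda_ortho` and the usage line are hypothetical illustrations, not the authors' code.

```python
import torch
import torch.nn as nn

def condition_number(weight: torch.Tensor) -> torch.Tensor:
    """Ratio of the largest to the smallest singular value of a weight matrix.

    Conv kernels are flattened to (out_channels, in_channels * kH * kW).
    """
    w = weight.reshape(weight.shape[0], -1)
    s = torch.linalg.svdvals(w)
    return s.max() / s.min()

def orthogonality_penalty(model: nn.Module) -> torch.Tensor:
    """Soft orthogonality regularizer: sum of ||W W^T - I||_F^2 over Linear/Conv2d weights.

    This is the standard soft-orthogonality form; it is an assumption that the
    paper's penalty matches it in detail.
    """
    terms = []
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight.reshape(module.weight.shape[0], -1)
            gram = w @ w.t()
            eye = torch.eye(gram.shape[0], device=w.device, dtype=w.dtype)
            terms.append(torch.linalg.norm(gram - eye) ** 2)  # squared Frobenius norm
    return torch.stack(terms).sum()

# Hypothetical usage inside a (quantized-retraining) training step:
#   loss = task_loss + lambda_ortho * orthogonality_penalty(model)
# Driving the penalty toward zero pushes each W toward orthogonal rows, so all
# singular values approach 1 and the condition number approaches 1 as well.
```

Note that minimizing this penalty reduces the condition number (all singular values are pulled toward 1) rather than merely shrinking the spectral norm, which is the distinction the abstract highlights.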