Browsing by Subject "LSH"
Item under embargo

Hardware acceleration for Swin Transformers at the edge (2024-05)
Esergün, Yunus

While deep learning models have greatly enhanced visual processing capabilities, deploying them in resource-constrained edge environments is challenging due to their high energy consumption and computational requirements. The Swin Transformer is a prominent computer vision architecture that departs from traditional convolutional approaches by interpreting images hierarchically. Clustering is a common strategy for improving the efficiency of deep learning models during inference. Locality-Sensitive Hashing (LSH) implements clustering by leveraging the inherent redundancy within Transformers to identify and exploit computational similarities. This thesis introduces a hardware accelerator for LSH-based Swin Transformer inference in edge computing settings. The main goal is to reduce energy consumption while improving performance through custom hardware components. Specifically, our accelerator design uses LSH clustering in Swin Transformers to decrease the amount of computation required. We evaluated the accelerator on two standard benchmark datasets, ImageNet-1K and CIFAR-100. Our results show that the accelerator speeds up Swin Transformer inference compared to GPU-based implementations: it improves performance by 1.35x while reducing power consumption from 19 W in the baseline GPU setting to 5-6 W. These gains come with a negligible accuracy loss of less than 1%, confirming the effectiveness of the hardware accelerator design in resource-constrained edge computing environments.
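The abstract describes LSH clustering as a way to exploit redundancy among Transformer activations so that similar computations are performed once and shared. As a rough illustration only (the thesis's exact scheme is not given here), the sketch below uses random-hyperplane LSH (SimHash) to bucket token vectors and evaluates an expensive per-token function once per bucket centroid; every name, parameter, and dimension in it is an illustrative assumption, not the author's implementation.

```python
import numpy as np

def lsh_bucket(tokens: np.ndarray, n_bits: int = 8, seed: int = 0) -> np.ndarray:
    """Assign each token vector to an LSH bucket via random hyperplanes (SimHash).
    tokens: (N, D) array of token embeddings. Returns (N,) integer bucket ids."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((tokens.shape[1], n_bits))  # (D, n_bits) hyperplanes
    bits = (tokens @ planes) > 0                             # sign pattern per token
    return bits.astype(np.int64) @ (1 << np.arange(n_bits))  # pack bits into an id

def clustered_compute(tokens: np.ndarray, f, n_bits: int = 8) -> np.ndarray:
    """Run an expensive per-token function f once per bucket centroid and
    broadcast the result to every token in that bucket (hypothetical helper)."""
    ids = lsh_bucket(tokens, n_bits)
    out = np.empty_like(tokens)
    for b in np.unique(ids):
        mask = ids == b
        centroid = tokens[mask].mean(axis=0)
        out[mask] = f(centroid)  # one computation shared by the whole bucket
    return out

# Example: 196 window tokens of dimension 96 (sizes typical of a small Swin stage)
x = np.random.randn(196, 96).astype(np.float32)
y = clustered_compute(x, f=np.tanh)
print(y.shape)  # (196, 96)
```

The property being exploited is that tokens whose projections share a sign pattern are close with high probability, so nearby vectors land in the same bucket and their near-duplicate work can be skipped; a hardware accelerator can implement the hashing and bucket lookup cheaply, which is presumably where the reported speedup and power savings come from.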