Browsing by Subject "Inference"
Now showing 1 - 4 of 4
Item Open Access
Effects of ignorance and information on judgments and decisions (Society for Judgment and Decision Making, 2011) Ayton, P.; Önkal, D.; McReynolds, L.
We compared Turkish and English students’ soccer forecasts for English soccer matches. Although the Turkish students knew very little about English soccer, they selected teams on the basis of familiarity with the team (or its identified city); their prediction success was surprisingly similar to that of knowledgeable English students, consistent with Goldstein and Gigerenzer’s (1999; 2002) characterization of the recognition heuristic. The Turkish students made forecasts for some of the matches with additional information: the half-time scores. In this and a further study, where British students predicting matches for foreign teams could choose whether or not to use half-time information, we found that predictions that could be made by recognition alone were influenced by the half-time information. We consider the implications of these findings in the context of Goldstein and Gigerenzer’s (2002, p. 82) suggestion that “. . . no other information can reverse the choice determined by recognition” and a more recent, qualified statement (Gigerenzer & Goldstein, 2011) indicating that two processes, recognition and evaluation, guide the adaptive selection of the recognition heuristic.

Item Open Access
General reuse-centric CNN accelerator (2021-02) Çiçek, Nihat Mert
Reuse-centric CNN acceleration speeds up CNN inference by reusing computations for similar neuron vectors in the CNN’s input layer or activation maps. This new optimization paradigm is, however, largely limited by the overhead of neuron vector similarity detection, an important step in reuse-centric CNN inference. This thesis presents the first in-depth exploration of architectural support for reuse-centric CNN inference. It proposes a hardware accelerator that improves neuron vector similarity detection and reduces the energy consumption of reuse-centric CNN inference. The accelerator is implemented with a banked memory subsystem to support a wide variety of network settings. Design exploration is performed through RTL simulation and synthesis on an FPGA platform. When integrated into Eyeriss, the accelerator can potentially provide performance improvements of up to 7.75X. Furthermore, it can make similarity detection up to 95.46% more energy-efficient, and it can accelerate the convolutional layer by up to 3.63X compared to a software-based implementation running on the CPU.

Item Embargo
Hardware acceleration for Swin Transformers at the edge (2024-05) Esergün, Yunus
While deep learning models have greatly enhanced visual processing abilities, deploying them in resource-limited edge environments is challenging due to their high energy consumption and computational requirements. The Swin Transformer is a prominent mechanism in computer vision that differs from traditional convolutional approaches by adopting a hierarchical approach to interpreting images. A common strategy for improving the efficiency of deep learning algorithms during inference is clustering. Locality-Sensitive Hashing (LSH) is a mechanism that implements clustering and leverages the inherent redundancy within Transformers to identify and exploit computational similarities. This thesis introduces a hardware accelerator for implementing the Swin Transformer with LSH in edge computing settings. The main goal is to reduce energy consumption while improving performance with custom hardware components. Specifically, our custom hardware accelerator design uses LSH clustering in Swin Transformers to decrease the amount of computation required. We tested our accelerator on two state-of-the-art datasets, ImageNet-1K and CIFAR-100. Our results demonstrate that the hardware accelerator enhances the processing speed of the Swin Transformer compared to GPU-based implementations. More specifically, our accelerator improves performance by 1.35x while reducing power consumption from 19 Watts in the baseline GPU setting to 5-6 Watts. We observe these improvements with a negligible decrease in model accuracy of less than 1%, confirming the effectiveness of our hardware accelerator design in resource-limited edge computing environments.

Item Open Access
Towards situation-oriented programming languages (ACM, 1995) Tin, E.; Akman, V.; Ersan, M.
Recently, there have been some attempts to develop programming languages based on situation theory. These languages employ situation-theoretic constructs with varying degrees of divergence from the ontology of the theory. In this paper, we review three of these programming languages.
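Two of the items above (the reuse-centric CNN accelerator and the LSH-based Swin Transformer accelerator) share one underlying idea: hash similar vectors into buckets so that a computation is performed once per bucket and reused for every member. A minimal software sketch of that idea, using random-hyperplane LSH (all names, parameters, and the seeded toy data here are illustrative, not taken from either thesis):

```python
import random

random.seed(0)

def lsh_signature(vec, planes):
    """Bit-signature of a vector: sign of its dot product with each
    random hyperplane. Vectors with identical signatures share a bucket."""
    return tuple(int(sum(v * p for v, p in zip(vec, plane)) > 0)
                 for plane in planes)

def reuse_compute(vectors, f, dim, n_planes=8):
    """Apply f once per LSH bucket and reuse the cached result for
    every other vector that hashes to the same bucket."""
    planes = [[random.gauss(0, 1) for _ in range(dim)]
              for _ in range(n_planes)]
    cache, out = {}, []
    for v in vectors:
        sig = lsh_signature(v, planes)
        if sig not in cache:
            cache[sig] = f(v)       # compute once per bucket
        out.append(cache[sig])      # reuse for similar vectors
    return out, len(cache)

# Near-duplicate vectors collapse into shared buckets, so f runs
# roughly once per distinct pattern rather than once per vector.
base = [[random.gauss(0, 1) for _ in range(16)] for _ in range(4)]
near = [[x + 1e-6 for x in v] for v in base]     # near-duplicates of base
results, n_buckets = reuse_compute(base + near, sum, dim=16)
```

The hardware versions described in the abstracts accelerate exactly the expensive part of this loop, the signature computation and bucket lookup, rather than the reused computation itself.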