Browsing by Author "Hong, S."
Now showing 1 - 2 of 2
Item Open Access
Food prosumption technologies: A symbiotic lens for a degrowth transition (SAGE Publications Ltd, 2023)
Vicdan, H.; Ulusoy, E.; Tillotson, J. S.; Hong, S.; Ekici, Ahmet; Mimoun, L.
Prosumption is gaining momentum among the critical accounts of sustainable consumption that have thus far enriched the marketing discourse. Attention to prosumption is increasing whilst the degrowth movement is emerging to tackle the contradictions inherent in growth-driven, technology-fueled, and capitalist modes of sustainable production and consumption. In response to dominant critical voices that portray technology as counter to degrowth living, we propose an alternative symbiotic lens with which to reconsider the relations between technology, prosumption, and degrowth living, and assess how a degrowth transition in the context of food can be carried out at the intersection of human–nature–technology. We contribute to the critical debates on prosumption in marketing by analyzing the potentials and limits of technology-enabled food prosumption for a degrowth transition through the degrowth principles of conviviality and appropriateness. Finally, we consider the sociopolitical challenges involved in mobilizing such technologies to achieve symbiosis and propose a future research agenda.

Item Open Access
Process variation aware thread mapping for chip multiprocessors (IEEE, 2009-04)
Hong, S.; Narayanan, S. H. K.; Kandemir, M.; Özturk, Özcan
With the increasing scaling of manufacturing technology, process variation has become more prevalent. As a result, in the context of Chip Multiprocessors (CMPs), for example, identically designed processor cores on the same chip can have different peak frequencies and power consumptions. To cope with such a design, every processor can be assumed to run at the frequency of the slowest processor, which wastes computational capability. This paper considers an alternative approach and proposes an algorithm that intelligently maps (and remaps) computations onto the available processors so that each processor runs at its peak frequency. In other words, by dynamically changing the thread-to-processor mapping at runtime, our approach allows each processor to maximize its performance, rather than forcing every core to use the chip-wide lowest frequency and highest cache latency. Experimental evidence shows that, compared to a process-variation-agnostic thread mapping strategy, our proposed scheme achieves up to a 29% improvement in overall execution latency, with an average improvement of 13% over the benchmarks tested. We also demonstrate that these savings are consistent across different processor counts, latency maps, and latency distributions. © 2009 EDAA.
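The second abstract describes, at a high level, remapping threads so that the heaviest remaining work runs on the fastest cores once per-core peak frequencies are known. The sketch below is not the paper's algorithm; it is a minimal greedy illustration under assumed inputs (the Core and Thread records and the variation_aware_map helper are hypothetical), and it ignores the migration cost, cache latency, and power considerations the paper addresses.

```python
# Hypothetical sketch of variation-aware thread (re)mapping.
# Not the paper's algorithm: a minimal greedy illustration of pairing the
# largest remaining workloads with the fastest (post-variation) cores.

from dataclasses import dataclass

@dataclass
class Core:
    core_id: int
    peak_ghz: float           # post-variation peak frequency (assumed known)

@dataclass
class Thread:
    thread_id: int
    remaining_gcycles: float  # estimated remaining work in giga-cycles

def variation_aware_map(threads, cores):
    """Greedily assign the heaviest threads to the fastest cores.

    Returns {thread_id: core_id}. Assumes one thread per core
    (len(threads) <= len(cores)); a real scheme would also weigh
    migration cost, cache latency, and power.
    """
    by_work = sorted(threads, key=lambda t: t.remaining_gcycles, reverse=True)
    by_freq = sorted(cores, key=lambda c: c.peak_ghz, reverse=True)
    return {t.thread_id: c.core_id for t, c in zip(by_work, by_freq)}

if __name__ == "__main__":
    cores = [Core(0, 2.0), Core(1, 1.6), Core(2, 1.8), Core(3, 1.4)]
    threads = [Thread(0, 12.0), Thread(1, 4.0), Thread(2, 8.0), Thread(3, 2.0)]

    mapping = variation_aware_map(threads, cores)
    freq = {c.core_id: c.peak_ghz for c in cores}
    # Makespan with variation-aware mapping vs. assuming every core runs at
    # the chip-wide slowest frequency (the variation-agnostic baseline).
    aware = max(t.remaining_gcycles / freq[mapping[t.thread_id]] for t in threads)
    agnostic = max(t.remaining_gcycles for t in threads) / min(freq.values())
    print(mapping)
    print(f"aware makespan ~{aware:.1f}s vs agnostic ~{agnostic:.1f}s")
```

On these toy inputs, placing the largest workload on the fastest core gives a makespan of about 6 s versus roughly 8.6 s when every core is treated as running at the slowest core's frequency, which is the intuition behind the latency savings the abstract reports.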