What captures our attention? How communicative and non-communicative actions by human and robot agents influence visual attention under perceptual load

buir.advisor: Urgen, Burcu Ayşen
dc.contributor.author: Karaduman, Tuvana Dilan
dc.date.accessioned: 2025-08-11T08:50:38Z
dc.date.available: 2025-08-11T08:50:38Z
dc.date.copyright: 2025-08
dc.date.issued: 2025-08
dc.date.submitted: 2025-08-08
dc.description: Cataloged from PDF version of article.
dc.description: Includes bibliographical references (leaves 52-61).
dc.description.abstract: This study investigated how agent type (human vs. robot), communicativeness (communicative vs. non-communicative), and perceptual load (low vs. high) interactively influence visual attention and task performance. A 2 (Agent) × 2 (Communicativeness) × 2 (Perceptual Load) within-subjects design was employed with 34 participants. To examine attentional allocation under perceptual load, Areas of Interest (AOIs) were defined for the central letter array and the peripheral agent videos. Behavioral performance (reaction time, accuracy) and multiple eye-tracking metrics (total dwell time, dwell time per fixation, first fixation latency, saccadic velocity, pupil dilation) were analyzed. Behavioral results confirmed the effectiveness of the perceptual load manipulation: participants were significantly slower and less accurate in high-load conditions. In the presence of a distractor, reaction times and accuracy were further modulated by significant interactions among agent, perceptual load, and communicativeness. Eye-tracking analyses revealed that initial orienting was driven by communicativeness and perceptual load, with communicative cues capturing attention faster only under high load. Dwell time per fixation remained stable across conditions, whereas total dwell time revealed a critical three-way interaction: under high load, communicative actions performed by humans increased total dwell time, whereas the same actions performed by a robot decreased it. Pupil data supported this pattern, indicating that under high load, non-communicative actions required more cognitive effort than communicative ones. Overall, initial orienting was guided by task demands, whereas sustained engagement with the nature of the action reflected a complex cognitive process sensitive to perceptual load, and robotic social cues were processed differently from human ones.
dc.description.statementofresponsibility: by Tuvana Dilan Karaduman
dc.format.extent: xiii, 62 leaves : illustrations, charts ; 30 cm.
dc.identifier.itemid: B163164
dc.identifier.uri: https://hdl.handle.net/11693/117430
dc.language.iso: English
dc.subject: Visual attention
dc.subject: Perceptual load theory
dc.subject: Biological motions
dc.subject: Human-robot interaction
dc.subject: Eye tracking
dc.subject: Distraction
dc.title: What captures our attention? How communicative and non-communicative actions by human and robot agents influence visual attention under perceptual load
dc.title.alternative: Dikkatimizi ne çekiyor? Robot ve insan tarafından yapılan iletişim içeren ve içermeyen hareketlerin farklı algısal yük koşullarında görsel dikkat incelemesi
dc.type: Thesis
thesis.degree.discipline: Neuroscience
thesis.degree.grantor: Bilkent University
thesis.degree.level: Master's
thesis.degree.name: MS (Master of Science)

Files

Original bundle: B163164.pdf (8.11 MB, Adobe Portable Document Format)

License bundle: license.txt (2.1 KB, Item-specific license agreed upon to submission)