Biased competition in semantic representation during natural visual search
Humans divide their attention among multiple visual targets in daily life, and visual search becomes more difficult as the number of targets increases. The biased competition hypothesis (BC) has been put forth as an explanation for this phenomenon. BC suggests that brain responses during divided attention are a weighted linear combination of the responses during search for each target individually. This combination is assumed to be biased by the intrinsic selectivity of cortical regions. Yet, it is unknown whether attentional modulation of semantic representations is consistent with this hypothesis when viewing cluttered, dynamic natural scenes. Here, we investigated whether BC accounts for semantic representation during natural category-based visual search. Subjects viewed natural movies, and their whole-brain BOLD responses were recorded while they attended to “humans”, “vehicles” (i.e. single-target attention tasks), or “both humans and vehicles” (i.e. divided attention) in separate runs. We computed a voxelwise linearity index to assess whether semantic representation during divided attention can be modeled as a weighted combination of representations during the two single-target attention tasks. We then examined the bias in the weights of this linear combination across cortical ROIs. We find that semantic representations of both target and nontarget categories during divided attention are linear to a substantial degree, and that they are biased toward the preferred target in category-selective areas across ventral temporal cortex. Taken together, these results suggest that the biased competition hypothesis is a compelling account for attentional modulation of semantic representations.
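The core analysis described above — modeling a voxel's divided-attention response as a weighted combination of its two single-target responses, then quantifying fit quality and weight bias — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the function names (`fit_biased_competition`), the use of ordinary least squares, and the choice of R² as the linearity index and a normalized weight ratio as the bias index are all assumptions for demonstration.

```python
import numpy as np

def fit_biased_competition(r_div, r_humans, r_vehicles):
    """Fit the divided-attention response profile of a single voxel as a
    weighted linear combination of its two single-target response profiles:

        r_div ≈ w_h * r_humans + w_v * r_vehicles

    Returns the fitted weights, a linearity index (here, R^2 of the fit;
    the paper's exact index may differ), and a bias index in [0, 1]
    (0.5 = balanced; >0.5 = biased toward "humans").
    All inputs are 1-D arrays of equal length (e.g., model responses to
    a set of semantic categories).
    """
    X = np.column_stack([r_humans, r_vehicles])
    # Ordinary least-squares solution for the combination weights.
    w, _, _, _ = np.linalg.lstsq(X, r_div, rcond=None)
    pred = X @ w
    # Linearity index: fraction of variance in the divided-attention
    # profile explained by the linear combination.
    ss_res = np.sum((r_div - pred) ** 2)
    ss_tot = np.sum((r_div - r_div.mean()) ** 2)
    linearity = 1.0 - ss_res / ss_tot
    # Bias index: relative magnitude of the "humans" weight.
    bias = abs(w[0]) / (abs(w[0]) + abs(w[1]))
    return w, linearity, bias
```

For a voxel in a face-selective region, for example, one would expect a high linearity index together with a bias index well above 0.5, reflecting a combination weighted toward the "humans" search task.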