Browsing by Subject "Surface reflectance"
Now showing 1 - 3 of 3
Item (Open Access): Effects of surface reflectance and 3D shape on perceived rotation axis (Association for Research in Vision and Ophthalmology, 2013)
Doerschner, K.; Yilmaz, O.; Kucukoglu, G.; Fleming, R. W.
Surface specularity distorts the optic flow generated by a moving object in a way that provides important cues for identifying surface material properties (Doerschner, Fleming et al., 2011). Here we show that specular flow can also affect the perceived rotation axis of objects. In three experiments, we investigate how three-dimensional shape and surface material interact to affect the perceived rotation axis of unfamiliar irregularly shaped and isotropic objects. We analyze observers' patterns of errors in a rotation axis estimation task under four surface material conditions: shiny, matte textured, matte untextured, and silhouette. In addition to the expected large perceptual errors in the silhouette condition, we find that the patterns of errors for the other three material conditions differ from each other and across shape category, yielding the largest differences in error magnitude between shiny and matte, textured isotropic objects. Rotation axis estimation is a crucial implicit computational step in perceiving structure from motion; therefore, we test whether a structure-from-motion-based model can predict the perceived rotation axis for shiny and matte, textured objects. Our model's predictions closely follow observers' data, even yielding the same reflectance-specific perceptual errors. Unlike previous work (Caudek & Domini, 1998), our model does not rely on the assumption of affine image transformations; however, a limitation of our approach is its reliance on projected correspondence, so it has difficulty accounting for the perceived rotation axis of smooth shaded objects and silhouettes. In general, our findings are in line with earlier research demonstrating that shape from motion can be extracted from several different types of optical deformation (Koenderink & Van Doorn, 1976; Norman & Todd, 1994; Norman, Todd, & Orban, 2004; Pollick, Nishida, Koike, & Kawato, 1994; Todd, 1985).

Item (Open Access): Rapid classification of surface reflectance from image velocities (Springer, Berlin, Heidelberg, 2009)
Doerschner, Katja; Kersten, D.; Schrater, P.
We propose a method for rapidly classifying surface reflectance directly from the output of spatio-temporal filters applied to an image sequence of rotating objects. Using image data from only a single frame, we compute histograms of image velocities and classify these as being generated by a specular or a diffusely reflecting object. Exploiting characteristics of material-specific image velocities, we show that our classification approach can predict the reflectance of novel 3D objects, as well as human perception. © 2009 Springer Berlin Heidelberg.
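The core idea of the 2009 abstract above can be illustrated with a short sketch: histogram the image velocities produced by a rotating object and decide whether their distribution looks specular or diffuse. The code below is an assumed, simplified stand-in rather than the authors' spatio-temporal filter pipeline; the use of OpenCV's Farnebäck optic flow, the coefficient-of-variation feature, and the threshold value are all illustrative choices, not details taken from the paper.

```python
import cv2
import numpy as np

def speed_histogram(frame_prev, frame_next, bins=64):
    """Dense optic flow between two grayscale frames, summarized as a speed histogram."""
    flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2).ravel()
    hist, edges = np.histogram(speed, bins=bins, density=True)
    return hist, edges, speed

def classify_reflectance(speed, spread_threshold=1.0):
    """Toy specular-vs-matte decision based on how spread out the image speeds are.

    Specular flow tends to produce broader, heavier-tailed velocity distributions
    than texture carried rigidly on a matte surface. The threshold here is an
    arbitrary placeholder, not a value from the paper.
    """
    spread = speed.std() / (speed.mean() + 1e-6)  # coefficient of variation
    return "specular" if spread > spread_threshold else "matte"

# Usage: frame_t and frame_t_plus_1 are consecutive uint8 grayscale frames
# of the rotating object.
# hist, edges, speed = speed_histogram(frame_t, frame_t_plus_1)
# print(classify_reflectance(speed))
```

In practice the decision rule would be learned from labeled sequences rather than hand-set, but the pipeline shape (flow, velocity histogram, classifier) follows the abstract's description.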
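For the first item in this list (Doerschner, Yilmaz, Kucukoglu, & Fleming, 2013), the central computation is estimating a rotation axis from how points move between frames. The sketch below is only a generic illustration of that idea under strong simplifying assumptions: it fits a rigid rotation to known 3D correspondences with a Kabsch-style least-squares step and reads the axis off as the eigenvector with eigenvalue 1. The paper's model instead works from projected (2D) image correspondences, which is precisely where the reflectance-specific errors arise.

```python
import numpy as np

def rotation_axis_from_correspondences(p0, p1):
    """Least-squares rotation axis relating two corresponding point sets (N x 3)."""
    # Center both point clouds so only the rotation remains.
    q0 = p0 - p0.mean(axis=0)
    q1 = p1 - p1.mean(axis=0)
    # Kabsch: best-fit rotation from the SVD of the cross-covariance matrix.
    u, _, vt = np.linalg.svd(q0.T @ q1)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    # The rotation axis is the eigenvector of R whose eigenvalue is 1.
    w, v = np.linalg.eig(r)
    axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return axis / np.linalg.norm(axis)

# Example: 50 random points rotated 20 degrees about the z axis.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = np.deg2rad(20.0)
rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
print(rotation_axis_from_correspondences(pts, pts @ rz.T))  # ~ [0, 0, +/-1]
```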
Item (Open Access): Seeing through transparent layers (Association for Research in Vision and Ophthalmology, 2018)
Dövencioğlu, Dicle N.; Van Doorn, A.; Koenderink, J.; Doerschner, Katja
The human visual system is remarkably good at decomposing local and global deformations in the flow of visual information into different perceptual layers, a critical ability for daily tasks such as driving through rain or fog, or catching that evasive trout. In these scenarios, changes in the visual information might be due to a deforming object, to deformations caused by a transparent medium such as structured glass or water, or to a combination of these.
How does the visual system use image deformations to make sense of layering due to transparent materials? We used eidolons to investigate equivalence classes for perceptually similar transparent layers. We created a stimulus space of perceptual equivalents of a fiducial scene by systematically varying the local disarray parameters reach and grain. This disarray in eidolon space leads to distinct impressions of transparency: high reach and grain values vividly resemble water, whereas smaller grain values appear diffuse, like structured glass. We asked observers to adjust image deformations so that the objects in the scene looked as if they were seen (a) under water, (b) behind haze, or (c) behind structured glass. Observers adjusted the image deformation parameters by moving the mouse horizontally (grain) and vertically (reach). For two conditions, water and glass, we observed high intraobserver consistency: responses were not random, and they yielded concentrated equivalence classes for water and for structured glass.
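The reach and grain parameters in the last abstract come from the eidolon framework, in which an image is warped by a spatially correlated random displacement field: grain sets the spatial scale of the disarray and reach its amplitude. The snippet below is a rough, assumed approximation of that kind of local disarray, not the published Eidolon Factory code; it smooths white-noise displacement fields with a Gaussian of width grain and rescales them to a peak displacement of reach pixels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def disarray(image, reach, grain, seed=0):
    """Warp a grayscale image with a smooth random displacement field.

    `grain` sets the spatial scale of the field (Gaussian sigma, in pixels)
    and `reach` its peak amplitude (in pixels). A rough stand-in for
    eidolon-style local disarray, not the published Eidolon Factory code.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    dx = gaussian_filter(rng.standard_normal((h, w)), sigma=grain)
    dy = gaussian_filter(rng.standard_normal((h, w)), sigma=grain)
    dx *= reach / (np.abs(dx).max() + 1e-9)  # scale to the requested reach
    dy *= reach / (np.abs(dy).max() + 1e-9)
    rows, cols = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(image, [rows + dy, cols + dx], order=1, mode="reflect")
```

With a parameterization like this, large reach and grain values produce slow, wavy distortions, while small grain values give fine-scale scrambling, qualitatively in line with the watery versus glassy impressions the abstract describes.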