Browsing by Subject "Visual Tracking"
Now showing 1-2 of 2
Item (Open Access): Ground-nesting insects could use visual tracking for monitoring nest position during learning flights (Springer Verlag, 2014-07)
Authors: Samet, Nermin; Zeil, J.; Mair, E.; Boeddeker, N.; Stürzl, W.

Ants, bees and wasps are central-place foragers. They leave their nests to forage and routinely return to their home base. Most are guided by memories of the visual panorama and of the visual appearance of the local nest environment when pinpointing their nest. These memories are acquired during highly structured learning walks or flights that are performed when leaving the nest for the first time, or whenever the insects have had difficulty finding the nest on a previous return. Ground-nesting bees and wasps perform such learning flights daily when they depart for the first time. During these flights, the insects turn back to face the nest entrance and subsequently back away from the nest while flying along ever-increasing arcs that are centred on the nest. Flying along these arcs, the insects counter-turn in such a way that the nest entrance is always seen in the frontal visual field, at slightly lateral positions. Here we asked how the insects may keep track of the nest entrance location, given that it is a small, inconspicuous hole in the ground, surrounded by complex natural structures that undergo unpredictable perspective transformations as the insect pivots around the area and gains distance from it. We reconstructed the natural visual scene experienced by wasps and bees during their learning flights and applied a number of template-based tracking methods to these image sequences. We find that tracking with a fixed template fails very quickly in the course of a learning flight, but that continuously updating the template allows nest direction to be estimated reliably in the reconstructed image sequences. This is true even for later sections of learning flights, when the insects are so far from the nest that they cannot resolve the nest entrance as a visual feature. We discuss why visual goal-anchoring is likely to be important during the acquisition of visual-spatial memories and describe experiments to test whether insects indeed update nest-related templates during their learning flights. © 2014 Springer International Publishing Switzerland.

Item (Open Access): Using shape information from natural tree landmarks for improving SLAM performance (2012)
Author: Turan, Bilal

Localization and mapping are crucial components of robotic autonomy, yet such robots must often function in remote, outdoor areas with no a priori knowledge of the environment. Consequently, field robots need to be able to construct their own maps from exteroceptive sensor readings. To this end, visual sensing and mapping through naturally occurring landmarks have distinct advantages. With the high-bandwidth data provided by visual sensors, meaningful and uniquely identifiable objects can be detected, yielding maps of natural landmarks that are meaningful to human readers as well. In this thesis, we focus on the use of trees in an outdoor environment as a suitable set of landmarks for Simultaneous Localization and Mapping (SLAM). Trees have a relatively simple, near-vertical structure, which makes them easily and consistently detectable. Furthermore, the thickness of a tree trunk can be accurately determined from different viewpoints.
Our primary contribution is the use of the width of a tree trunk as an additional sensory reading, allowing us to include the radius of each tree trunk in the map. To this end, we introduce a new sensor model that relates the width of a tree landmark on the image plane to the radius of its trunk. We provide a mathematical formulation of this model, derive the associated Jacobians, and incorporate the sensor model into a working EKF SLAM implementation. Through simulations we show that this new sensory reading improves the accuracy of both the map and the trajectory estimates without any sensor hardware beyond a monocular camera.
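The abstract above does not reproduce the sensor model itself. As a rough, non-authoritative illustration of how an image width can be related to a trunk radius, the Python sketch below models the trunk as a vertical cylinder seen by a pinhole camera with focal length `f_px` in pixels; the function names, the landmark-state layout `[tree_x, tree_y, radius]`, and the numerical (rather than analytic) Jacobian are all illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

# Pinhole projection of a cylindrical trunk: the two tangent rays to a
# circle of radius r at range d subtend a half-angle arcsin(r/d), so the
# projected half-width on the image plane is f * r / sqrt(d^2 - r^2).

def trunk_width_px(robot_xy, tree_xy, radius, f_px):
    """Predicted image width (pixels) of a trunk of the given radius (m)."""
    d = np.linalg.norm(np.asarray(tree_xy, float) - np.asarray(robot_xy, float))
    return 2.0 * f_px * radius / np.sqrt(d * d - radius * radius)

def width_jacobian(robot_xy, tree_xy, radius, f_px, eps=1e-6):
    """Central-difference Jacobian of the width measurement with respect
    to the landmark state [tree_x, tree_y, radius]."""
    x0 = np.array([tree_xy[0], tree_xy[1], radius], dtype=float)
    J = np.zeros(3)
    for i in range(3):
        hi, lo = x0.copy(), x0.copy()
        hi[i] += eps
        lo[i] -= eps
        J[i] = (trunk_width_px(robot_xy, hi[:2], hi[2], f_px)
                - trunk_width_px(robot_xy, lo[:2], lo[2], f_px)) / (2.0 * eps)
    return J

# Example: a 0.15 m radius trunk seen from 5 m with f_px = 600 projects
# to roughly 36 pixels.
print(trunk_width_px((0.0, 0.0), (5.0, 0.0), 0.15, 600.0))
```

In an EKF update, a Jacobian row of this kind would presumably be stacked alongside the bearing measurement's Jacobian when forming the innovation covariance, which is how the extra width reading can tighten both the landmark and pose estimates.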
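The first item above likewise contrasts fixed-template tracking with continuous template updating. As a minimal sketch only, the snippet below implements both strategies with plain normalized cross-correlation in OpenCV; the function name and interface are hypothetical, and the paper's tracker runs on reconstructed panoramic views of natural scenes rather than on raw camera frames.

```python
import cv2

def track_patch(frames, template, update=True):
    """Track a patch with normalized cross-correlation (TM_CCOEFF_NORMED).

    update=False keeps the initial template throughout (the fixed-template
    condition); update=True replaces the template with the best-matching
    patch from each frame, the continuous-update strategy the abstract
    reports as far more robust. Frames and the template are single-channel
    8-bit (or float32) images, as cv2.matchTemplate requires.
    """
    h, w = template.shape
    positions = []
    for frame in frames:
        scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, (x, y) = cv2.minMaxLoc(scores)  # best match, top-left
        positions.append((x, y, score))
        if update:
            template = frame[y:y + h, x:x + w].copy()
    return positions
```

With a fixed template, the correlation score decays as the viewpoint changes along the learning-flight arcs; updating the template each frame trades that failure mode for slow drift, which matches the abstract's finding that the updated template remains usable even when the nest entrance itself is no longer resolvable.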