Browsing by Subject "Orientation estimation"
Now showing 1 - 3 of 3
Item Open Access
Activity recognition invariant to position and orientation of wearable motion sensor units (2019-04) Yurtman, Aras

We propose techniques that achieve invariance to the placement of wearable motion sensor units in the context of human activity recognition. First, we focus on invariance to sensor unit orientation and develop three alternative transformations to remove the effect of sensor unit orientation from the raw sensor data. The first two orientation-invariant transformations rely on the geometry of the measurements, whereas the third is based on estimating the orientations of the sensor units with respect to the Earth frame by exploiting the physical properties of the sensor data. We test them with multiple state-of-the-art machine-learning classifiers using five publicly available datasets (when applicable) containing various types of activities acquired by different sensor configurations. We show that the proposed methods achieve accuracy comparable to that of the reference system in which the units are correctly oriented, whereas the standard system cannot handle incorrectly oriented sensors. We also propose a novel non-iterative technique for estimating the orientations of the sensor units based on the physical and geometrical properties of the sensor data, improving the accuracy of the third orientation-invariant transformation. All three transformations can be integrated into the pre-processing stage of existing wearable systems without much effort, since we make no assumptions about the sensor configuration, the body movements, or the classification methodology. Second, we develop techniques that achieve invariance to the positioning of the sensor units in three ways: (1) We propose transformations applied to the sensor data that allow each unit to be placed at any position within a predetermined body part.
(2) We propose a transformation technique that allows the units to be interchanged, so that the user does not need to distinguish between them before positioning. (3) We employ three different techniques to classify activities based on a single sensor unit, while the training set may contain data acquired by multiple units placed at different positions. We combine (1) with (2) and also with (3) to achieve further robustness to sensor unit positioning. We evaluate our techniques on a publicly available dataset using seven state-of-the-art classifiers and show that the reduction in accuracy is acceptable, considering the flexibility, convenience, and unobtrusiveness gained in positioning the units. Finally, we combine the position- and orientation-invariant techniques to achieve both simultaneously. The resulting accuracies are much higher than those of random decision making, although some are significantly lower than those of the reference system with correctly placed units. The trade-off between flexibility in sensor unit placement and classification accuracy indicates that different approaches may be suitable for different applications.

Item Open Access
Automatic detection and segmentation of orchards using very high resolution imagery (Institute of Electrical and Electronics Engineers, 2012-08) Aksoy, S.; Yalniz, I. Z.; Tasdemir, K.

Spectral information alone is often not sufficient to distinguish certain terrain classes, such as permanent crops like orchards, vineyards, and olive groves, from other types of vegetation. However, instances of these classes possess distinctive spatial structures that are observable in detail in very high spatial resolution images. This paper proposes a novel unsupervised algorithm for the detection and segmentation of orchards. The detection step uses a texture model based on the idea that textures are made up of primitives (trees) appearing in a near-regular repetitive arrangement (planting patterns).
The algorithm starts by enhancing potential tree locations using multi-granularity isotropic filters. Then, the regularity of the planting patterns is quantified using projection profiles of the filter responses at multiple orientations, yielding a regularity score at each pixel for each granularity and orientation. Finally, the segmentation step iteratively merges neighboring pixels and regions belonging to similar planting patterns according to the similarity of their regularity scores, obtaining the boundaries of individual orchards along with estimates of their granularities and orientations. Extensive experiments using Ikonos and QuickBird imagery, as well as images taken from Google Earth, show that the proposed algorithm provides good localization of the target objects even when no sharp boundaries exist in the image data. © 2012 IEEE.

Item Open Access
Novel noniterative orientation estimation for wearable motion sensor units acquiring accelerometer, gyroscope, and magnetometer measurements (IEEE, 2020) Yurtman, Aras; Barshan, Billur

We propose a novel noniterative method, based on the physical and geometrical properties of the acceleration, angular rate, and magnetic field vectors, for estimating the orientation of motion sensor units. The proposed algorithm seeks an orientation for which the vertical (up) axis of the Earth coordinate frame is as close as possible to the measured acceleration vector, and for which the Earth's north axis makes an angle with the detected magnetic field vector as close as possible to the estimated magnetic dip angle. We obtain the sensor unit orientation from the rotational quaternion transformation between the Earth and sensor unit frames.
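As a rough illustration of the underlying geometric idea (this is a generic TRIAD-style construction under a quasi-static assumption, not the authors' exact algorithm, which additionally constrains the magnetic dip angle), a sensor-to-Earth rotation can be sketched from a single accelerometer and magnetometer reading: gravity fixes the up axis, and the horizontal component of the magnetic field fixes north.

```python
import numpy as np

def estimate_orientation(acc, mag):
    """Estimate the rotation matrix mapping sensor-frame vectors to an
    Earth East-North-Up frame from one accelerometer and one magnetometer
    sample, assuming the unit is (quasi-)static so that the accelerometer
    measures only the gravitational reaction."""
    up = acc / np.linalg.norm(acc)      # gravity reaction defines the up axis
    east = np.cross(mag, up)            # horizontal, perpendicular to magnetic north
    east /= np.linalg.norm(east)
    north = np.cross(up, east)          # completes the right-handed frame
    # Rows are the Earth axes expressed in sensor coordinates, so this
    # matrix rotates sensor-frame vectors into the Earth frame.
    return np.vstack((east, north, up))
```

The returned matrix can be converted to a quaternion if, as in the paper, a quaternion representation of the Earth-to-sensor transformation is preferred.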
We evaluate the proposed method by incorporating it into an activity recognition scheme for daily and sports activities, which requires accurately estimated sensor unit orientations to achieve invariance to the orientations at which the units are worn on the body. Using four different classifiers on a publicly available dataset, the proposed methodology achieves an average activity recognition accuracy higher than that of state-of-the-art methods, while being computationally efficient enough to run in real time.
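The orientation invariance that such estimates enable can be sketched as follows: once a sensor-to-Earth rotation is known, every measurement is rotated into the Earth frame, so the classifier's input no longer depends on how the unit was mounted. The function name and the (T, 3) sample layout below are illustrative assumptions, not the paper's interface.

```python
import numpy as np

def to_earth_frame(samples, R):
    """Rotate a (T, 3) array of sensor-frame measurements into the Earth
    frame using a sensor-to-Earth rotation matrix R, removing the effect
    of the orientation at which the unit was worn."""
    # Each row x_s maps to x_e = R @ x_s, written for the whole array at once.
    return samples @ R.T
```

Two recordings of the same motion taken with the unit mounted at different orientations map to the same Earth-frame signal, which is what lets the classifier ignore sensor orientation.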