Browsing by Subject "Three dimensional computer graphics"
Now showing 1 - 18 of 18
Item Open Access: A 3D dynamic model of a spherical wheeled self-balancing robot (2012)
İnal, Ali Nail; Morgül, Ömer; Saranlı, Uluç
Mobility through balancing on spherical wheels has recently received some attention in the robotics literature. Unlike traditional wheeled platforms, the operation of such platforms depends heavily on understanding and working with system dynamics, which have so far been approximated with simple planar models and their decoupled extension to three dimensions. Unfortunately, such models cannot capture inherently spatial aspects of motion, such as yaw motion arising from the rolling of the wheel or coupled inertial effects during fast maneuvers. In this paper, we describe a novel, fully coupled 3D model for such spherical wheeled platforms and show that it not only captures the relevant spatial aspects of motion but also provides a basis for controllers better informed by system dynamics. We focus our evaluations on simulations with this model and use circular paths to reveal its advantages in dynamically rich situations. © 2012 IEEE.

Item Open Access: 3D thumbnails for mobile media browser interface with autostereoscopic displays (Springer, 2010-01)
Gündoğdu, R. Bertan; Yiğit, Yeliz; Çapin, Tolga
In this paper, we focus on the problem of how to visualize and browse 3D videos and 3D images in a media browser application running on a 3D-enabled mobile device with an autostereoscopic display. We propose a 3D thumbnail representation format and an algorithm for automatic 3D thumbnail generation from 3D video-plus-depth content. Then, we present different 3D user interface layout schemes for 3D thumbnails and discuss these layouts with a focus on their usability and ergonomics. © 2010 Springer-Verlag Berlin Heidelberg.

Item Open Access: Bina tahsis planlarından 3-boyutlu şehir modellerinin üretilmesi ve görüntülenmesi [Generation and visualization of 3D city models from building allotment plans] (IEEE, 2006-04)
Oğuz, Oğuzcan; Aran, Medeni Erol; Yilmaz, Türker; Güdükbay, Uğur
This paper presents a method for the automatic generation of different building models to populate virtual cities, together with a system for visualizing the generated city models. The proposed method incorporates randomness, but the derivation process can be steered with derivation rules and assigned attributes. The derivation method is inspired by shape grammars. During the derivation process, the floor plans of actual cities are used to generate 3D city models. Given the city plans, the derivation rules, and definitions of some basic objects, the system generates 3D building models, and the resulting city model can be visualized. © 2006 IEEE.

Item Open Access: Dual-finger 3D interaction techniques for mobile devices (Springer UK, 2013)
Telkenaroglu, C.; Capin, T.
Three-dimensional capabilities on mobile devices are increasing, and interactivity is becoming a key feature of these tools. Users are expected to actively engage with 3D content instead of being passive consumers. Because touch-screens allow users to touch and manipulate 3D graphical elements directly, touch-based interaction is a natural and appealing style of input for 3D applications. However, developing 3D interaction techniques for handheld devices with touch-screens is not a straightforward task. One issue is that when interacting with 3D objects, users occlude the object with their fingers. Furthermore, because the user's finger covers a large area of the screen, the smallest size of object that users can touch is limited.
In this paper, we first examine existing 3D interaction techniques in terms of their performance on handheld devices. Then, we present a set of precise Dual-Finger 3D Interaction Techniques for a small display. Finally, we present the results of an experimental study in which we evaluate the usability, performance, and error rate of the proposed and existing 3D interaction techniques. © Springer-Verlag London Limited 2012.

Item Open Access: Effect of sample locations on computation of the exact scalar diffraction field (IEEE, 2012)
Esmer, G. B.; Özaktaş, Haldun M.; Onural, Levent
Computer-generated holography is one of the common methods for obtaining three-dimensional visualization. It can be explained by the behavior of propagating waves and interference. There are a myriad of algorithms in the literature for calculating the scalar diffraction pattern on a hologram. Some of them employ several approximations, so the calculated fields may not be the exact scalar diffraction field. However, there are algorithms that compute the exact scalar diffraction field, with some limitations on the distribution of the given samples over space. These algorithms are based on the "field model" approach. The performance of an algorithm based on the field model is investigated with respect to the distribution of the given samples over space. The simulations show that the cumulative information provided by the given samples has to be sufficient to solve the inverse problem for the scalar diffraction field. The cumulative information can be increased by using more samples, but in some scenarios the differential information obtained from the given samples can be infinitesimal, so the exact diffraction field may not be computed. © 2012 IEEE.

Item Open Access: Estimation of depth fields suitable for video compression based on 3-D structure and motion of objects (Institute of Electrical and Electronics Engineers, 1998-06)
Alatan, A. A.; Onural, L.
Intensity prediction along motion trajectories removes temporal redundancy considerably in video compression algorithms. In three-dimensional (3-D) object-based video coding, both 3-D motion and depth values are required for temporal prediction. The required 3-D motion parameters for each object are found by the correspondence-based E-matrix method. The estimation of the correspondences - the two-dimensional (2-D) motion field - between the frames and the segmentation of the scene into objects are achieved simultaneously by minimizing a Gibbs energy. The depth field is estimated by jointly minimizing a defined distortion and bit-rate criterion using the 3-D motion parameters. The resulting depth field is efficient in the rate-distortion sense. Bit-rate values corresponding to the lossless encoding of the resulting depth fields are obtained using predictive coding; prediction errors are encoded by a Lempel-Ziv algorithm. The results are satisfactory for real-life video scenes.

Item Open Access: Example-based retargeting of human motion to arbitrary mesh models (Blackwell Publishing Ltd, 2015)
Celikcan, U.; Yaz, I. O.; Capin, T.
We present a novel method for retargeting human motion to arbitrary 3D mesh models with as little user interaction as possible. Traditional motion-retargeting systems try to preserve the original motion while satisfying several motion constraints. Our method uses a few pose-to-pose examples provided by the user to extract the desired semantics behind the retargeting process while not limiting the transfer to being only literal.
Thus, mesh models with structures and/or motion semantics different from humanoid skeletons become possible targets. Since most publicly available mesh models lack additional structure (e.g., a skeleton), our method dispenses with the need for such a structure by means of a built-in surface-based deformation system. As deformation for animation purposes may require non-rigid behaviour, we augment existing rigid deformation approaches to provide volume-preserving and squash-and-stretch deformations. We demonstrate our approach on well-known mesh models along with several publicly available motion-capture sequences. © 2014 The Eurographics Association and John Wiley & Sons Ltd.

Item Open Access: Extraction of 3D navigation space in virtual urban environments (IEEE, 2005-09)
Yılmaz, Türker; Güdükbay, Uğur
Urban scenes are one class of complex geometrical environments in computer graphics. In developing navigation systems for urban scenery, extraction and cellulization of the navigation space is one of the most commonly used techniques, providing a suitable structure for visibility computations. Surprisingly, little work has been done on extracting the navigable area automatically. Urban models, except for those generated from building footprints, generally lack navigation space information, which makes it hard to extract and discretize the navigable area for complex urban scenery. In this paper, we propose an algorithm for the extraction of navigation space for urban scenes in three dimensions (3D). Our navigation space extraction algorithm works for scenes in which the buildings are highly complex; the building models may have pillars or holes that can be seen through. Moreover, for urban data acquired from different sources, which may contain errors, our approach provides a simple and efficient way of discretizing both the navigable space and the model itself. The extracted space can be used directly for visibility calculations such as occlusion culling in 3D space. Furthermore, terrain height-field information can be extracted from the resulting structure, providing a way to implement urban navigation systems that include terrains.

Item Open Access: A framework for applying the principles of depth perception to information visualization (Association for Computing Machinery, 2013)
Zeynep, C. Y.; Bulbul, A.; Capin, T.
When visualizing 3D content, using depth cues selectively to support the design goals and enabling a user to perceive the spatial relationships between objects are important concerns. We automate this process by proposing a framework that determines the important depth cues for the input scene and the rendering methods that provide these cues. While determining the importance of the cues, we consider the user's tasks and the scene's spatial layout.
The importance of each depth cue is calculated using a fuzzy logic-based decision system. Suitable rendering methods that provide the important cues are then selected through a cost-profit analysis of the rendering costs of the methods and their contributions to depth perception. Possible cue conflicts are considered and handled by the system. We also provide formal experimental studies designed for several visualization tasks; a statistical analysis of the experiments verifies the success of our framework. © 2013 ACM.

Item Open Access: ILP-based communication reduction for heterogeneous 3D network-on-chips (IEEE, 2013-02-03)
Aktürk, İsmail; Öztürk, Özcan
Network-on-Chip (NoC) architectures and three-dimensional integrated circuits (3D ICs) have been introduced as attractive options for overcoming the barriers in interconnect scaling while increasing the number of cores. Combining these two approaches is expected to yield better performance and higher scalability. This paper explores the possibility of combining the two techniques in a heterogeneity-aware fashion. We explore how heterogeneous processors can be mapped onto a given 3D chip area to minimize data access costs. Our initial results indicate that the proposed approach generates promising results within tolerable solution times. © 2013 IEEE.

Item Open Access: Increasing the sense of presence in a simulation environment using image generators based on visual attention (MIT Press, 2010-12)
Ciflikli, B.; İşler, V.; Güdükbay, Uğur
Flight simulator systems generally use a separate image-generator component: the host is responsible for the positional data updates of the entities, and the image generator is responsible for the rendering process. In such systems, the sense of presence is decreased by model flickering. This study presents a method by which the host can minimize model flickering in the image-generator output. The method is based on preexisting algorithms, such as visibility culling and level-of-detail management of 3D models. Using a new perception-based approach, flickering is minimized for the visually important entities at the expense of increased flickering for entities outside the user's focus. User studies show that the proposed approach increases the participants' sense of presence. © 2011 by the Massachusetts Institute of Technology.

Item Open Access: Mars: A tool-based modeling, animation, and parallel rendering system (Springer, 1994)
Aktıhanoğlu, M.; Özgüç, B.; Aykanat, Cevdet
This paper describes a system for modeling, animating, previewing, and rendering articulated objects. The system has a modeler for objects consisting of joints and segments. The animator interactively positions the articulated object in its stick, control-vertex, or rectangular-prism representation and previews the motion in real time. The data representing the motion and the models are then sent to a multicomputer, an iPSC/2 Hypercube (Intel). The frames are rendered in parallel, exploiting the coherence between successive frames and thus cutting down the rendering time significantly. Our main aim is a detailed study of the rendering of a sequence of 3D scenes. The results show that, due to an inherent correlation between the 3D scenes, efficient rendering can be achieved. © 1994 Springer-Verlag.
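
As a rough, hypothetical illustration of the frame-level parallelism described in the Mars item above (not the system's actual scheduling code), the sketch below assigns consecutive frames of an animation to the same rendering node in contiguous chunks, so that each node can reuse work from the frame it has just rendered:

```python
# Hypothetical sketch: contiguous frame-chunk assignment for parallel rendering.
# Keeping successive frames on the same node lets that node exploit temporal
# coherence (e.g., reusing visibility or shading results from the previous frame).

def chunk_frames(num_frames: int, num_nodes: int) -> list[range]:
    """Split frame indices 0..num_frames-1 into contiguous per-node chunks."""
    chunk_size = -(-num_frames // num_nodes)  # ceiling division
    return [range(start, min(start + chunk_size, num_frames))
            for start in range(0, num_frames, chunk_size)]

if __name__ == "__main__":
    # Example: 100 frames distributed over 8 hypothetical rendering nodes.
    for node, frames in enumerate(chunk_frames(100, 8)):
        print(f"node {node}: frames {frames.start}..{frames.stop - 1}")
```

A real scheduler would also balance chunk sizes against per-frame rendering cost, since coherence is lost at every chunk boundary.
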
Item Open Access: Novel compression algorithm based on sparse sampling of 3-D laser range scans (Oxford University Press, 2013)
Dobrucali, O.; Barshan, B.
Three-dimensional models of environments can be very useful and are commonly employed in areas such as robotics, art and architecture, facility management, water management, environmental/industrial/urban planning, and documentation. A 3-D model is typically composed of a large number of measurements. When 3-D models of environments need to be transmitted or stored, they should be compressed efficiently to use the capacity of the communication channel or the storage medium effectively. We propose a novel compression technique based on compressive sampling applied to sparse representations of 3-D laser range measurements. The main issue here is finding highly sparse representations of the range measurements, since they do not have such representations in common domains, such as the frequency domain. To solve this problem, we develop a new algorithm to generate sparse innovations between consecutive range measurements acquired while the sensor moves. We compare the sparsity of our innovations with that of others generated by estimation and filtering. Furthermore, we compare the compression performance of our lossy compression method with widely used lossless and lossy compression techniques. The proposed method offers a small compression ratio and provides a reasonable compromise between the reconstruction error and processing time. © 2012 The Author. Published by Oxford University Press on behalf of The British Computer Society. All rights reserved.

Item Open Access: Procedural visualization of knitwear and woven cloth (Pergamon Press, 2007-11)
Durupınar, F.; Güdükbay, Uğur
In this paper, a procedural method for the visualization of knitted and woven fabrics is presented. The proposed method is compatible with a mass-spring model and makes use of the regular warp-weft structure of the cloth. The visualization parameters for the loops and threads are easily mapped to the animated mass-spring model. The simulation idea underlying both knitted and woven fabrics is similar, as we represent both structures in 3D. As the proposed method is simple and practical, we can achieve near real-time rendering performance with good visual quality. © 2007 Elsevier Ltd. All rights reserved.

Item Open Access: Regional model-based computerized ionospheric tomography using GPS measurements: IONOLAB-CIT (Wiley-Blackwell Publishing, Inc., 2015)
Tuna, H.; Arıkan, Orhan; Arikan, F.
Three-dimensional imaging of the electron density distribution in the ionosphere is a crucial task for investigating ionospheric effects. Dual-frequency Global Positioning System (GPS) satellite signals can be used to estimate the slant total electron content (STEC) along the propagation path between a GPS satellite and a ground-based receiver station. However, the estimated GPS-STEC is too sparse and too nonuniformly distributed to obtain reliable 3-D electron density distributions from the measurements alone. Standard tomographic reconstruction techniques are not accurate or reliable enough to represent the full complexity of the variable ionosphere. On the other hand, model-based electron density distributions are produced according to the general trends of the ionosphere, and these distributions do not agree with measurements, especially during geomagnetically active hours.
In this study, a regional 3-D electron density reconstruction method, IONOLAB-CIT, is proposed to assimilate GPS-STEC into physical ionospheric models. The proposed method is based on an iterative optimization framework that tracks deviations from the ionospheric model in terms of the F2-layer critical frequency and the maximum ionization height, obtained by comparing STEC generated with the International Reference Ionosphere extended to Plasmasphere (IRI-Plas) model against GPS-STEC. The suggested tomography algorithm is applied successfully to the reconstruction of electron density profiles over Turkey during quiet and disturbed hours of the ionosphere, using the Turkish National Permanent GPS Network.

Item Open Access: Scalar diffraction field calculation from curved surfaces via Gaussian beam decomposition (Optical Society of America, 2012-06-29)
Şahin, E.; Onural, L.
We introduce a local signal decomposition method for the analysis of three-dimensional (3D) diffraction fields involving curved surfaces. We decompose a given field on a two-dimensional curved surface into a sum of properly shifted and modulated Gaussian-shaped elementary signals. Then we write the 3D diffraction field as a sum of Gaussian beams, each of which corresponds to a modulated Gaussian window function on the curved surface. The Gaussian beams are propagated according to a derived approximate expression that is based on the Rayleigh-Sommerfeld diffraction model. We assume that the given curved surface is smooth enough that the Gaussian window functions on it can be treated as if defined on planar patches. For surfaces that satisfy this assumption, the simulation results show that the proposed method produces quite accurate 3D field solutions.

Item Open Access: Stereoscopic urban visualization based on graphics processor unit (SPIE - International Society for Optical Engineering, 2008-09)
Yilmaz, T.; Güdükbay, Uğur
We propose a framework for the stereoscopic visualization of urban environments. The framework uses occlusion and view-frustum culling (VFC) and utilizes graphics hardware to speed up the rendering process. The occlusion culling is based on a slice-wise storage scheme that represents buildings as axis-aligned slices, which provides a fast and low-cost way to access the visible parts of the buildings. View-frustum culling for stereoscopic visualization is carried out once for both eyes by applying a transformation to the culling location. Rendering with graphics hardware is based on the slice-wise building representation, which facilitates fast access to the data pushed into the graphics processing unit (GPU) buffers; we present algorithms to access these GPU data. The stereoscopic visualization uses off-axis projection, which we found more suitable for urban visualization. The framework is tested on large urban models containing 7.8 million and 23 million polygons. Performance experiments show that real-time stereoscopic visualization can be achieved for large models. © 2008 Society of Photo-Optical Instrumentation Engineers.

Item Open Access: A virtual garment design and simulation system (IEEE, 2007-07)
Durupınar, Funda; Güdükbay, Uğur
In this paper, a 3D graphics environment for virtual garment design and simulation is presented. The proposed system enables the three-dimensional construction of a garment from its cloth panels, for which the underlying structure is a mass-spring model.
The garment construction process is performed through automatic pattern generation, posterior correction, and seaming. Afterwards, it is possible to perform fitting on virtual mannequins, as in a real-life tailor's workshop. The system gives users the flexibility to design their own garment patterns and to make changes to the garment even after the model has been dressed. Furthermore, rendering alternatives for the visualization of knitted and woven fabric are presented. © 2007 IEEE.
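
Several of the items above (the knitwear visualization and the virtual garment system) build on mass-spring cloth models. As a minimal, hypothetical sketch of that core idea (explicit-Euler integration with illustrative parameters; not code from either paper), one simulation step might look like this:

```python
# Minimal mass-spring cloth step (hypothetical parameters, explicit Euler).
# Real garment simulators add collision handling, constraint enforcement,
# and more stable integrators on top of this basic force/update loop.
import numpy as np

def step(positions, velocities, springs, rest_lengths,
         k=50.0, mass=0.01, damping=0.02, dt=1e-3):
    """Advance cloth particles by one time step.

    positions, velocities: (N, 3) arrays; springs: list of (i, j) index pairs;
    rest_lengths: rest length of each spring.
    """
    gravity = np.array([0.0, -9.81, 0.0])
    forces = np.tile(gravity * mass, (len(positions), 1))
    for (i, j), rest in zip(springs, rest_lengths):
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        if length > 1e-12:
            f = k * (length - rest) * (d / length)  # Hooke's law along the spring
            forces[i] += f
            forces[j] -= f
    forces -= damping * velocities                  # simple viscous damping
    velocities = velocities + dt * forces / mass
    positions = positions + dt * velocities
    return positions, velocities
```

In such a setup, structural, shear, and bend springs over the warp-weft grid would all feed the same spring list, while the knitted or woven appearance is a rendering concern layered on top of the animated grid, in the spirit of the procedural visualization item above.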