Browsing by Subject "Three-dimensional display systems."
Now showing 1 - 20 of 20
Item Open Access: 3-dimensional median-based algorithms in image sequence processing (1990), Alp, Münire Bilge
This thesis introduces new 3-dimensional median-based algorithms to be used in two of the main research areas in image sequence processing: image sequence enhancement and image sequence coding. Two new nonlinear filters are developed in the field of image sequence enhancement. The motion performance and output statistics of these filters are evaluated. The simulations show that the filters improve image quality to a large extent compared to other examples from the literature. The second field addressed is image sequence coding. A new 3-dimensional median-based coding and decoding method is developed for stationary images with the aim of good slow-motion performance. All the algorithms developed are simulated on real image sequences using a video sequencer.

Item Open Access: 3D mesh animation system targeted for multi-touch environments (2009), Ceylan, Duygu
Fast developments in computer technology have given rise to different application areas such as multimedia, computer games, and Virtual Reality. All these application areas are based on the animation of 3D models of real-world objects. For this purpose, many tools have been developed to enable computer modeling and animation. Yet most of these tools require a certain amount of experience with geometric modeling and animation principles, which creates a handicap for inexperienced users. This thesis introduces a solution to this problem by presenting a mesh animation system targeted specifically at novice users. The main approach is based on one of the fundamental model representation concepts, the Laplacian framework, which is successfully used in model-editing applications. The solution presented perceives a model as a combination of smaller salient parts and uses the Laplacian framework to allow these parts to be manipulated simultaneously to produce a sense of movement.
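The median-based image-sequence filtering described in the first record above can be illustrated with a minimal sketch: a 3-D (spatio-temporal) median filter that replaces each pixel by the median of its space-time neighborhood. The window size and border handling below are illustrative choices, not the thesis's exact algorithms.

```python
# Minimal sketch of a 3-D (spatio-temporal) median filter, the basic
# building block behind median-based image-sequence enhancement.
# Input is assumed to be a (frames, rows, cols) array; radius and
# edge-replication padding are illustrative assumptions.
import numpy as np

def median3d(seq, radius=1):
    """Replace each pixel by the median of its (2r+1)^3 spatio-temporal
    neighborhood; borders are handled by edge replication."""
    p = np.pad(seq, radius, mode="edge")
    out = np.empty_like(seq)
    T, R, C = seq.shape
    w = 2 * radius + 1
    for t in range(T):
        for r in range(R):
            for c in range(C):
                out[t, r, c] = np.median(p[t:t + w, r:r + w, c:c + w])
    return out

# An isolated impulse (e.g. transmission noise) is removed, because it
# never forms a majority inside any 3x3x3 window.
noisy = np.zeros((3, 5, 5))
noisy[1, 2, 2] = 255.0
clean = median3d(noisy)
```

Such a filter removes impulsive noise while following motion better than purely spatial filtering, which is the motivation behind 3-D median-based enhancement.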
The interaction techniques developed enable users to carry out manipulation and global transformation actions at the same time to create more pleasing results. Furthermore, the approach utilizes multi-touch screen technology and direct-manipulation principles to increase the usability of the system. The methods described are evaluated by creating simple animations with several 3D models, which demonstrates the advantages of the proposed solution.

Item Open Access: Animated mesh simplification based on saliency metrics (2008), Tolgay, Ahmet
Mesh saliency identifies the visually important parts of a mesh. Mesh simplification algorithms that use mesh saliency as the simplification criterion preserve the salient features of a static 3D model. In this thesis, we propose a saliency measure to be used in simplifying animated 3D models. This saliency measure uses acceleration and deceleration information about a dynamic 3D mesh in addition to the saliency information for static meshes. This preserves sharp features and visually important cues during animation. Since oscillating motions are also important in determining saliency, we propose a technique to detect oscillating motions and incorporate it into the saliency-based animated model simplification algorithm. The proposed technique is tested on animated models making oscillating motions, and promising visual results are obtained.

Item Open Access: Calculation of scalar optical diffraction field from its distributed samples over the space (2010), Esmer, Gökhan Bora
As a three-dimensional viewing technique, holography provides successful three-dimensional perceptions. The technique is based on duplication of the information-carrying optical waves that come from an object. Therefore, calculation of the diffraction field due to the object is an important process in digital holography. To have an exact reconstruction of the object, the exact diffraction field created by the object has to be calculated.
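The dynamic saliency idea in the animated-mesh-simplification record above combines static mesh saliency with acceleration information. A toy version, with a simple additive weighting and a central-difference acceleration estimate (both illustrative assumptions, not the thesis's exact formulation), might look like:

```python
# Sketch of a dynamic saliency term: static per-vertex saliency
# augmented with the magnitude of the vertex acceleration estimated
# from three consecutive frames. The weight `alpha` and the
# finite-difference scheme are illustrative assumptions.
import numpy as np

def dynamic_saliency(static_saliency, prev_pos, cur_pos, next_pos, alpha=0.5):
    """Combine static mesh saliency with per-vertex acceleration magnitude.
    Positions are (n_vertices, 3) arrays for frames t-1, t, t+1."""
    accel = next_pos - 2.0 * cur_pos + prev_pos    # central difference
    accel_mag = np.linalg.norm(accel, axis=1)      # per-vertex |a|
    return static_saliency + alpha * accel_mag

# A vertex that stops abruptly gets a higher saliency score and would
# therefore be preserved longer during simplification.
static = np.zeros(2)
p0 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
p1 = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])  # vertex 0 moving
p2 = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])  # vertex 0 stops
s = dynamic_saliency(static, p0, p1, p2)
```

A simplification pass would then collapse the lowest-scoring vertices first, so sharply accelerating or decelerating regions survive.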
In the literature, one of the commonly used approaches to calculating the diffraction field due to an object is to superpose the fields created by the elementary building blocks of the object; such procedures may be called the "source model" approach, and a field computed this way can differ from the exact field over the entire space. In this work, we propose four algorithms to calculate the exact diffraction field due to an object. These proposed algorithms may be called the "field model" approach. In the first algorithm, the diffraction field given over the manifold that defines the surface of the object is decomposed onto a function set derived from propagating plane waves. The second algorithm is based on pseudo-inversion of the system matrix that relates the given field samples to the field over a transversal plane. The third and fourth algorithms are iterative methods. In the third algorithm, the diffraction field is calculated by projection onto convex sets. In the fourth algorithm, the pseudo-inverse of the system matrix is computed by the conjugate gradient method. Depending on the number and locations of the given samples, the proposed algorithms provide the exact field solution over the entire space. To compute the exact field, the number of given samples has to be larger than the number of plane waves that form the diffraction field over the entire space. The solution is affected by dependencies between the given samples; to decrease these dependencies, the samples over the manifold may be taken randomly. The iterative algorithms outperform the others in terms of computational complexity when the number of given samples is larger than 1.4 times the number of plane waves forming the diffraction field over the entire space.

Item Open Access: Camera-based 3D interaction for handheld devices (2010), Pekin, Tacettin Sercan
Using handheld devices is a very important part of our daily life.
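The "field model" formulation in the diffraction-field record above reduces to a linear system: field samples are related to unknown plane-wave coefficients through a system matrix, which can be pseudo-inverted. A 1-D toy version (sample positions, wave numbers, and sizes are all illustrative assumptions):

```python
# Toy "field model": samples of a 1-D field that is a superposition of
# a few plane waves are related to the unknown plane-wave coefficients
# through a linear system, then solved by pseudo-inversion.
import numpy as np

rng = np.random.default_rng(0)
kx = np.array([1.0, 2.0, 3.0])             # plane-wave frequencies
x = rng.uniform(0.0, 2 * np.pi, size=8)    # random sample locations
A = np.exp(1j * np.outer(x, kx))           # system matrix: samples vs waves

c_true = np.array([1.0 + 0.5j, -0.3j, 2.0])
b = A @ c_true                             # the "given" field samples

# More samples (8) than plane waves (3): pseudo-inversion recovers the
# coefficients, and with them the field everywhere.
c_est = np.linalg.pinv(A) @ b
```

Random sample locations reduce dependencies between rows of the system matrix, mirroring the observation in the abstract that samples taken randomly over the manifold improve the solution.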
Interacting with them is an unavoidable part of using them. Today's user interface designs are mostly adapted from desktop computers, which makes handheld devices difficult to use. However, processing power, new sensing technologies, and cameras are already available on mobile devices. This gives us the possibility to develop systems that communicate through different modalities. This thesis proposes several novel approaches, including finger detection, finger tracking, and object motion analysis, to allow efficient interaction with mobile devices. As a result of this work, a new interface between users and mobile devices is created. This is a new way of interacting with a mobile device: it enables direct manipulation of objects and requires no extra hardware. The interaction method maps the motion of an object moving in front of the camera (such as a finger or a predefined marker) to a virtual space to achieve manipulation. For finger detection, a new method is created based on how mobile devices are held and on the structure of the thumb; a fast two-dimensional color-based scene analysis method is applied to solve the problem. For finger tracking, a new method is created based on the movement ergonomics of the thumb when the mobile device is held in the hand. Extracting three-dimensional movement from two-dimensional RGB data is an important part of this section of the study. New 3D pointer data and a pointer image are created for use with 3D input and 3D interaction in 3D scenes. Direct manipulation is thus achieved at low cost.

Item Open Access: Dual-finger 3D interaction techniques for mobile devices (2012), Telkenaroğlu, Can
Three-dimensional capabilities on mobile devices are increasing, and interactivity is becoming a key feature of these tools. It is expected that users will actively engage with 3D content instead of being passive consumers.
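The fast color-based scene analysis mentioned in the camera-based interaction record above can be sketched with a simple per-pixel RGB skin rule. The thresholds below are a commonly used illustrative rule, not the thesis's calibrated values:

```python
# Minimal sketch of fast color-based finger/skin detection: classify
# pixels as skin by a simple RGB rule. Threshold values are a common
# illustrative heuristic, not the thesis's tuned parameters.
import numpy as np

def skin_mask(rgb):
    """rgb: (H, W, 3) uint8 image; returns a boolean mask of skin-colored pixels."""
    r = rgb[..., 0].astype(int)   # widen to int to avoid uint8 overflow
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r > g) & (r > b) &
            (r - np.minimum(g, b) > 15))

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = (200, 120, 90)    # a skin-like patch
mask = skin_mask(img)
```

A tracker would then follow the centroid or bounding box of the mask from frame to frame, which is cheap enough for the mobile hardware of that era.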
Because touchscreens provide a direct means of interaction with 3D content through directly touching and manipulating 3D graphical elements, touch-based interaction is a natural and appealing style of input for 3D applications. However, developing 3D interaction techniques for handheld devices with touchscreens is not a straightforward task. One issue is that when interacting with 3D objects, users occlude the object with their fingers. Furthermore, because the user's finger covers a large area of the screen, the smallest object users can touch is limited in size. In this thesis, we first inspect existing 3D interaction techniques based on their performance on handheld devices. We then present a set of precise dual-finger 3D interaction techniques for a small display, followed by the results of an experimental study in which we evaluate the usability, performance, and error rate of the proposed and existing 3D interaction techniques. Finally, we integrate the proposed methods into different user modes.

Item Open Access: Example based retargeting human motion to arbitrary mesh models (2013), Yaz, İlker O.
Animation of mesh models can be accomplished in many ways, including character animation with skinned skeletons, deformable models, or physics-based simulation. Generating animations with all of these techniques is time-consuming and laborious for novice users; however, adapting the already widely available human motion capture data can simplify the process significantly. This thesis presents a method for retargeting human motion to arbitrary 3D mesh models with as little user interaction as possible. Traditional motion retargeting systems try to preserve the original motion as is while satisfying several motion constraints. In our approach, we use a few pose-to-pose examples provided by the user to extract the desired semantics behind the retargeting process, so that the transfer is not limited to being literal.
Hence, mesh models that have different structures and/or motion semantics from the humanoid skeleton become possible targets. Moreover, considering that mesh models are widely available and often come without any additional structure (e.g., a skeleton), our method does not require such a structure; it provides a built-in surface-based deformation system. Since deformation for animation purposes can require more than rigid behaviour, we augment existing rigid deformation approaches to provide volume-preserving and cartoon-like deformation. To demonstrate the results of our approach, we retarget several motion capture sequences to three well-known models, and we also investigate how automatic retargeting methods developed for humanoid models work on our models.

Item Open Access: Improving the resolution of diffraction patterns from many low resolution recordings (2010), Yücesoy, Veysel
Holography attempts to record and reconstruct wave fields. The resolution limitation of the recording equipment causes problems in the reconstruction process. An automatic method is proposed for the registration and stitching of low-resolution diffraction patterns to form a higher-resolution one. There is no prior knowledge of the 3D position of the object in the recordings, and it is assumed that there is only one particle in the object field. The method uses the Wigner transform, Canny edge detection, and the Hough transform to register the patterns, and additional iterative methods that depend on the local variance of the reconstructed patterns to stitch them.
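The local-variance criterion mentioned in the stitching record above is a standard sharpness cue: a reconstruction is typically most detailed where its local intensity variance is highest. A minimal sketch (window size and averaging are illustrative assumptions):

```python
# Sketch of a local-variance sharpness metric of the kind used to guide
# stitching decisions on reconstructed diffraction patterns. The window
# size and the mean-over-windows aggregation are illustrative choices.
import numpy as np

def local_variance(img, w=3):
    """Mean local variance of `img` over w-by-w windows (valid region only)."""
    H, W = img.shape
    vals = [np.var(img[i:i + w, j:j + w])
            for i in range(H - w + 1)
            for j in range(W - w + 1)]
    return float(np.mean(vals))

flat = np.full((8, 8), 5.0)       # defocused-like: nearly uniform
sharp = np.zeros((8, 8))
sharp[::2, ::2] = 10.0            # focused-like: strong local detail
```

Comparing the metric across candidate alignments (or reconstruction depths) then selects the sharper, better-stitched result.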
The performance of the overall system is evaluated by computer simulations with respect to object radius, noise in the original pattern, recording noise, and the presence of multiple particles in the object field.

Item Open Access: Local signal decomposition based methods for the calculation of three-dimensional scalar optical diffraction field due to a field given on a curved surface (2013), Şahin, Erdem
A three-dimensional scene or object can be optically replicated via holography, a three-dimensional imaging and display method. In computer-generated holography, the scalar diffraction field due to a field given on an object (curved surface) is calculated numerically. Source model approaches treat the building elements of the object (such as points or planar polygons) independently to simplify the calculation of the diffraction field; as a tradeoff, the accuracy of the fields calculated by such methods is degraded. Field models, on the other hand, provide exact field solutions, but their computational complexity makes them impractical for meaningful surface sizes. Using the practical setup of integral imaging, we establish a space-frequency signal decomposition based relation between ray optics (more specifically, the light field representation) and scalar wave optics. Then, by employing the uncertainty principle inherent in this space-frequency decomposition, we derive an upper bound for the joint spatial and angular (spectral) resolution of a physically realizable light field representation. We mainly propose two methods for the problem of three-dimensional diffraction field calculation from fields given on curved surfaces. In the first approach, we apply linear space-frequency signal decomposition methods to the two-dimensional field given on the curved surface and decompose it into a sum of local elementary functions.
Then, we write the diffraction field as a sum of local beams, each corresponding to such an elementary function on the curved surface. In this way, we improve on the accuracy provided by the source models while keeping the computational complexity at comparable levels. In the second approach, we first decompose the three-dimensional field into a sum of local beams, and then construct a linear system of equations in which the system matrix is formed by calculating the field patterns that the three-dimensional beams produce on the curved surface. We find the coefficients of the beams by solving this linear system and thus specify the three-dimensional field. Since we use local beams in the three-dimensional field decomposition, we end up with sparse system matrices. By taking advantage of this sparsity, we achieve a considerable reduction in computational complexity and memory requirements compared to other field model approaches that use global signal decompositions. The local Gaussian beams used in both approaches actually correspond to physically realizable light rays; indeed, the upper joint resolution bound that we derive is attained by such Gaussian beams.

Item Open Access: Modeling and animation of brittle fracture in three dimensions (2007), Küçükyılmaz, Ayşe
This thesis describes a system for simulating fracture in brittle objects. The system combines rigid body simulation methods with a constraint-based model to animate the fracturing of arbitrary polyhedral objects under impact. The objects are represented as sets of masses, where pairs of adjacent masses are connected by a distance-preserving linear constraint. The movement of the objects is normally realized by unconstrained rigid body dynamics, and fracture calculations are done only at discrete collision events. In case of an impact, the forces acting on the constraints are calculated. These forces determine how and where the object will break.
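The constraint-based fracture criterion just described can be sketched very simply: at a collision event, every distance constraint whose force exceeds a material threshold breaks. The data layout and threshold value below are illustrative assumptions:

```python
# Sketch of a constraint-based fracture test: each distance constraint
# between adjacent masses breaks when the force it must exert at a
# collision event exceeds a material threshold. Data layout and the
# threshold value are illustrative assumptions.

def broken_constraints(constraints, forces, threshold):
    """constraints: list of (i, j) mass-index pairs; forces: matching list
    of constraint-force magnitudes computed at the impact.
    Returns the pairs that fail, i.e. where the fracture surface passes."""
    return [pair for pair, f in zip(constraints, forces) if f > threshold]

edges = [(0, 1), (1, 2), (2, 3)]
impact_forces = [4.0, 12.5, 3.0]    # magnitudes computed at the collision
cracks = broken_constraints(edges, impact_forces, threshold=10.0)
```

The connected components of the surviving constraint graph then become the rigid fragments that continue under unconstrained rigid body dynamics.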
The problem with most existing fracture systems is that they only allow simulations to be done offline, either because the techniques used are computationally expensive or because they require many small steps for accuracy. This work presents a near-real-time solution to the problem of brittle fracture, together with a graphical user interface for creating realistic animations.

Item Open Access: Perceptually driven stereoscopic camera control in 3D virtual environments (2013), Kevinç, Elif Bengü
The notion of depth and how it is perceived have long been studied in the fields of psychology, physiology, and even art. Human visual perception makes it possible to perceive the spatial layout of the outside world by using visual depth cues. Binocular disparity, among these depth cues, is based on the separation between the two different views observed by the two eyes. The disparity concept forms the basis of stereoscopic vision. Emerging technologies try to replicate binocular disparity principles in order to provide a 3D illusion and stereoscopic vision. However, the complexity of applying the underlying principles of 3D perception has confronted researchers with the problem of wrongly produced stereoscopic content. It is still a great challenge to provide a realistic yet comfortable 3D experience. In this work, we present a camera control mechanism: a novel approach for disparity control and a model for path generation. We try to address the challenges of stereoscopic 3D production by presenting a comfortable viewing experience to users. Our disparity system therefore approaches the accommodation/convergence conflict problem, the best-known cause of visual fatigue in stereo systems, by taking objects' importance into consideration. Stereo camera parameters are calculated automatically through an optimization process. In the second part of our control mechanism, the camera path is constructed for a given 3D environment and its scene elements.
Moving around the important regions of objects is a desired scene exploration task. In this respect, object saliencies are used for viewpoint selection around scene elements. The path structure is generated using linked Bézier curves, which are guaranteed to pass through pre-determined viewpoints. Though there is a considerable amount of research in the field of stereo content creation, we believe that approaching this problem from the scene content aspect provides a uniquely promising experience. We validate our assumption with user studies in which our method is compared against two existing disparity control models. The study results show that our method gives superior results in quality, depth, and comfort.

Item Open Access: Real time physics-based augmented fitting room using time-of-flight cameras (2013), Gültepe, Umut
This thesis proposes a framework for a real-time, physically-based augmented cloth fitting environment. The required 3D meshes for the human avatar and apparel are modeled with specific constraints. The models are then animated in real time using input from a user tracked by a depth sensor. A set of motion filters is introduced to improve the quality of the simulation. Physical effects such as inertia, external forces, and collisions are imposed on the apparel meshes. The avatar and the apparel can be customized according to the user. The system runs in real time on a high-end consumer PC with realistic rendering results.

Item Open Access: Signal processing based solutions for holographic displays that use binary spatial light modulators (2012), Ulusoy, Erdem
Holography is a promising method for realizing satisfactory-quality three-dimensional (3D) video displays. Spatial light modulators (SLMs) are used in holographic video displays, and SLMs with higher dynamic ranges are usually preferred. However, currently available multilevel SLMs have important drawbacks.
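The linked-Bézier camera path mentioned in the stereoscopic camera control record can be sketched by chaining cubic segments so that each segment starts and ends at a pre-determined viewpoint. The naive inner control points below (which make each segment a straight line) and the sampling density are illustrative assumptions, not the thesis's construction:

```python
# Sketch of a linked Bezier camera path that interpolates viewpoints.
# Inner control points are placed naively on the chord for brevity
# (degenerating each segment to a line); a real implementation would
# choose them for smooth tangent continuity at the joins.
import numpy as np

def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

def linked_path(viewpoints, samples=10):
    """Chain cubic segments through consecutive viewpoints."""
    pts = []
    for a, b in zip(viewpoints[:-1], viewpoints[1:]):
        c1 = a + (b - a) / 3.0            # naive inner control points
        c2 = a + 2.0 * (b - a) / 3.0
        for t in np.linspace(0.0, 1.0, samples):
            pts.append(bezier(a, c1, c2, b, t))
    return np.array(pts)

views = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 0.0], [3.0, 1.0, 1.0]])
path = linked_path(views)
```

Because each segment's endpoints are viewpoints, the chained path is guaranteed to pass through every pre-determined viewpoint, which is the property the abstract emphasizes.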
Some of the associated problems can be avoided by using binary SLMs, provided their low dynamic range is compensated for with appropriate signal processing techniques. In the first solution, the complex-valued gray-level SLM patterns that synthesize light fields specified in the non-far-field range are halftoned into binary SLM patterns by solving two decoupled real-valued constrained halftoning problems. As the synthesis region, a sufficiently small sub-region of the central diffraction order region of the SLM is chosen such that the halftoning error is acceptable. The light fields are synthesized merely after free-space propagation from the SLM plane, and no other complicated optical setups are needed. In this respect, the theory of halftoning for ordinary real-valued gray-scale images is extended to complex-valued holograms. Simulation results indicate that light fields given either on a plane or within a volume can be successfully synthesized by our approach. In the second solution, a new full complex-valued combined SLM is effectively created by forming a properly weighted superposition of a number of binary SLMs, where the superposition weights can be complex-valued. The method is a generalization of the well-known concepts of bit-plane decomposition and representation for ordinary images, and it actually involves a trade-off between dynamic range and pixel count. The coverage of the complex plane by the generated complex values is much more satisfactory than that achieved by the methods available in the literature. The design is also easy to customize for any operating wavelength.
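The halftoning step above converts gray-level patterns into binary ones. A classic way to do this for a single real-valued channel is error diffusion; the thesis halftones complex SLM patterns under diffraction constraints, so the plain Floyd-Steinberg sketch below is only an illustration of the underlying idea:

```python
# Sketch of halftoning by Floyd-Steinberg error diffusion: a gray
# pattern in [0, 1] becomes binary while the quantization error is
# diffused to unvisited neighbors. This is the classic image-domain
# algorithm, not the thesis's constrained complex-valued formulation.
import numpy as np

def halftone(gray):
    """Binarize `gray` (floats in [0, 1]) by error diffusion."""
    img = gray.astype(float).copy()
    H, W = img.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            if x + 1 < W:                img[y, x + 1] += err * 7 / 16
            if y + 1 < H and x > 0:      img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < H:                img[y + 1, x] += err * 5 / 16
            if y + 1 < H and x + 1 < W:  img[y + 1, x + 1] += err * 1 / 16
    return out

flat = np.full((16, 16), 0.25)
binary = halftone(flat)    # binary pattern whose local average tracks 0.25
```

The binary output preserves the local average of the input, which is the property that lets a binary SLM approximate a gray-level diffraction pattern within a restricted synthesis region.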
As a result, we show that binary SLMs, with their robust nature, can be used in holographic video display designs.

Item Open Access: Surface reflectance estimation from spatio-temporal subband statistics of moving object videos (2012), Külçe, Onur
Image motion can convey a broad range of object properties, including 3D structure (structure from motion, SfM), animacy (biological motion), and material. Our understanding of how the visual system may estimate complex properties such as surface reflectance or object rigidity from image motion is still limited. To reveal the neural mechanisms underlying the understanding of surface material, a natural starting point is to study the output of filters that mimic the response properties of low-level visual neurons to different classes of moving textures, such as patches of shiny and matte surfaces. To this end, we designed spatio-temporal bandpass filters whose frequency response is the second-order derivative of the Gaussian function. These filters are generated for eight orientations at three scales in the frequency domain. We computed the responses of these filters to dynamic specular and matte textures. Specifically, we assessed the statistics of the resulting filter output histograms and calculated their mean, standard deviation, skewness, and kurtosis. We found substantial differences in the standard deviation and skewness of specular and matte texture subband histograms. To formally test whether these simple measurements can in fact predict surface material from image motion, we developed a computer-assisted classifier based on these statistics. The classification results showed that 75% of all movies are classified correctly, with a correct classification rate of around 77% for shiny-object movies and around 71% for matte-object movies.
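The subband-statistics measurement above filters a moving texture with second-derivative-of-Gaussian kernels and summarizes the response histogram. A 1-D sketch (the thesis uses oriented spatio-temporal filters at several scales; the signals and parameters below are illustrative assumptions):

```python
# 1-D sketch of subband statistics: convolve a signal with a
# second-derivative-of-Gaussian kernel and summarize the response
# histogram by its standard deviation and skewness. Signals, sigma,
# and kernel support are illustrative choices.
import numpy as np

def d2gauss(sigma, half=8):
    """Second derivative of a Gaussian, sampled on [-half, half]."""
    x = np.arange(-half, half + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return (x**2 / sigma**4 - 1 / sigma**2) * g

def subband_stats(signal, sigma=1.5):
    resp = np.convolve(signal, d2gauss(sigma), mode="valid")
    mu, sd = resp.mean(), resp.std()
    skew = ((resp - mu) ** 3).mean() / sd**3 if sd > 0 else 0.0
    return sd, skew

smooth = np.sin(np.linspace(0, 4 * np.pi, 256))   # matte-like: smooth shading
spiky = smooth.copy()
spiky[::50] += 5.0                                # specular-like: sparse highlights
sd_smooth, _ = subband_stats(smooth)
sd_spiky, _ = subband_stats(spiky)
```

Sparse bright highlights, characteristic of specular motion, produce heavier-tailed subband responses, which is why standard deviation and skewness separate the two material classes.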
Next, we synthesized dynamic textures that matched the subband statistics of videos of moving shiny and matte objects. Interestingly, the appearance of these synthesized textures was neither shiny nor matte. Taken together, our results indicate that there are differences in the spatio-temporal subband statistics of image motion generated by rotating matte and specular objects. While these differences may be utilized by the human brain during the perceptual process, our results on the synthesized textures suggest that the statistics may not be sufficient to judge the material qualities of an object.

Item Open Access: Task-based automatic camera placement (2010), Kabak, Mustafa
Placing cameras to view an animation that takes place in a virtual 3D environment is a difficult task. Correctly placing an object in space, orienting it, and furthermore animating it to follow the action in the scene is an activity that requires considerable expertise. Approaches to automating this activity to various degrees have been proposed in the literature. Some of these approaches make restrictive assumptions about the nature of the animation and the scene they visualize, and can therefore be used only under limited conditions. While some approaches require a lot of attention from the user, others fail to give the user sufficient means to affect the camera placement. We propose a novel abstraction called a Task for implementing camera placement functionality. Tasks strike a balance between ease of use and control over the output by enabling users to easily guide camera placement without dealing with low-level geometric constructs. Users can utilize tasks to control camera placement in terms of high-level, understandable notions such as objects, their relations, and impressions on viewers while designing video presentations of 3D animations.
Our camera placement automation framework reconciles the demands of different tasks and provides tasks with common low-level geometric foundations. The flexibility and extensibility of the framework facilitate its use with diverse 3D scenes and visual variety in its output.

Item Open Access: Three-dimensional holographic video display systems using multiple spatial light modulators (2011), Yaraş, Fahri
Spatial light modulators (SLMs) are commonly used in electro-holographic display systems. Liquid crystal on silicon, liquid crystal, mirror-based, acousto-optic, and optically addressed devices are some of the SLM types. Most SLMs are digitally driven and pixelated and are therefore easy to use. We use phase-only SLMs in our experiments. The resolution and size of currently available SLMs are inadequate for satisfactory holographic reconstructions. The space-bandwidth product (SBP) is a good metric for quality assessment; a high SBP is needed when lateral or rotational motion is allowed for the observer. In our experiments, 2D images whose sizes are even larger than the SLM itself are reconstructed using single-SLM holographic displays. Volume reconstructions are also obtained using such displays, with either LED or laser illumination. After the experiments with single-SLM holographic displays, laboratory prototypes of multiple-SLM holographic systems were designed and implemented. In a real-time color holographic display system, three SLMs are used for the red, green, and blue channels, and GPU acceleration is used to achieve video rates. Beam-splitters and micro-stages are used for alignment in all multiple-SLM designs. In another multiple-SLM configuration, SLMs are tiled side by side to form a three-by-two matrix to increase both the vertical and horizontal field of view. A larger field of view gives the observer the flexibility to move and rotate around the reconstructed images of objects.
To further increase the field of view, SLMs are tiled in a circular configuration. A single large beam-splitter is used to tile the SLMs side by side without any gap, and a cone mirror directs the incoming light toward all SLMs. Compared to the planar configuration, circularly configured multiple SLMs increase the field of view significantly. With the help of such configurations, holographic videos of ghost-like 3D objects can be observed binocularly. The experimental results are satisfactory.

Item Open Access: Three-dimensional integral imaging based capture and display system using digital programmable Fresnel lenslet arrays (2012), Yöntem, Ali Özgür
A Fresnel lenslet array pattern is written on a phase-only LCoS spatial light modulator (SLM) to replace the regular analog lenslet array in a conventional integral imaging system. We theoretically analyze the capture part of the proposed system based on the Fresnel wave propagation formulation. Due to pixelation and quantization of the lenslet array pattern, higher diffraction orders and multiple focal points emerge. Because of the multiple focal planes introduced by the discrete lenslets, multiple image planes are observed. The use of discrete lenslet arrays also causes some other artefacts in the recorded elemental images. The results reduce to those available in the literature when the effects introduced by the discrete nature of the lenslets are omitted. We performed simulations of the capture part; it is possible to obtain elemental images with acceptable visual quality. We also constructed an optical integral imaging system with both capture and display parts using the proposed discrete Fresnel lenslet array written on an SLM. Optical results with self-luminous objects, such as an LED array, indicate that the proposed system yields satisfactory results. The resulting system, built around digital lenslet arrays, offers a flexible integral imaging system.
Thus, to increase the visual performance of the system, previously available analog solutions can now be implemented digitally using electro-optical devices. We also propose a method, and present applications of it, that converts a diffraction pattern into an elemental image set so that it can be shown on a display-only integral imaging setup. We generate elemental images based on diffraction calculations as an alternative to the commonly used ray tracing methods, which do not accommodate the interference and diffraction phenomena. Our proposed method enables us to obtain elemental images from a holographic recording of a 3D object or scene. The diffraction pattern can be either numerically generated or digitally acquired from optical input. The method shows the connection between a hologram (diffraction pattern) of a 3D object and an elemental image set of the same object. We obtained optical reconstructions with a display-only integral imaging setup that uses a digital lenslet array, as well as numerical reconstructions, again using diffraction calculations, for comparison. The digital and optical reconstruction results are in good agreement. Finally, we presented a method to obtain an orthoscopic image of a 3D object: we converted an elemental image set that gives a real pseudoscopic reconstruction into another elemental image set that gives a real orthoscopic reconstruction, again using wave propagation simulations. We also demonstrated numerical and optical reconstructions from the obtained elemental image sets for comparison.
The results are satisfactory given the physical limitations of the display system.

Item Open Access: A three-dimensional nonlinear finite element method implementation toward surgery simulation (2011), Gülümser, Emir
The Finite Element Method (FEM) is a widely used numerical technique for finding approximate solutions to complex problems of engineering and mathematical physics that cannot be solved with analytical methods. In most applications that require the simulation to be fast, linear FEM is widely used. Linear FEM works with a high degree of accuracy for small deformations, but its accuracy fails when deformations are large. Therefore, nonlinear FEM is the suitable method for critical applications such as surgical simulators. In this thesis, we propose a new formulation and finite element solution for nonlinear 3D elasticity theory. Nonlinear stiffness matrices are constructed using the Green-Lagrange strains (large deformation), which are derived directly from the infinitesimal strains (small deformation) by adding the nonlinear terms that are discarded in infinitesimal strain theory. The proposed solution is a more comprehensible nonlinear FEM for those familiar with linear FEM, since it is derived directly from the infinitesimal strains. We implemented both linear and nonlinear FEM using the same material properties and the same tetrahedral elements to examine the advantages of nonlinear FEM over linear FEM. Our experiments show that nonlinear FEM gives more accurate results than linear FEM when rotations and high external forces are involved.
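The distinction drawn in the nonlinear FEM record above can be made concrete with the two strain measures themselves: the Green-Lagrange strain E = (FᵀF − I)/2 keeps the quadratic terms that the infinitesimal strain discards, which is exactly why it stays correct under large rotations. A minimal sketch:

```python
# Sketch comparing the two strain measures from the abstract: the
# infinitesimal (linear) strain and the Green-Lagrange strain computed
# from a deformation gradient F. A pure rotation deforms nothing, so
# the true strain is zero; the linear measure reports spurious strain.
import numpy as np

def infinitesimal_strain(F):
    """Linear (small-deformation) strain: (F + F^T)/2 - I."""
    return 0.5 * (F + F.T) - np.eye(3)

def green_lagrange_strain(F):
    """Nonlinear (large-deformation) strain: (F^T F - I)/2."""
    return 0.5 * (F.T @ F - np.eye(3))

# Deformation gradient of a pure 90-degree rotation about the z-axis.
c, s = 0.0, 1.0
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])
E_nl = green_lagrange_strain(R)     # exactly zero
E_lin = infinitesimal_strain(R)     # nonzero: spurious strain
```

This spurious strain under rotation is the source of the accuracy failure of linear FEM mentioned in the abstract, and the quadratic terms in E are what the thesis adds back when assembling the nonlinear stiffness matrices.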
Moreover, the proposed nonlinear solution achieved significant speed-ups in the calculation of the stiffness matrices and in the solution of the system as a whole.

Item Open Access: Three-dimensional video coding on mobile platforms (2009), Bal, Can
With the evolution of wireless communication technologies and the multimedia capabilities of mobile phones, it is expected that three-dimensional (3D) video technologies will soon be adapted to mobile phones. This raises the problem of choosing the best 3D video representation for mobile platforms and the most efficient coding method for the selected representation. Since the latest 2D video coding standard, H.264/MPEG-4 AVC, provides better coding efficiency than its predecessors, the coding methods for the most common 3D video representations are based on this standard. The most common 3D video representations include multi-view video, video plus depth, multi-view video plus depth, and layered depth video. For mobile platforms, we selected conventional stereo video (CSV), a special case of multi-view video, since it is the simplest of the available representations. To determine the best coding method for CSV, we compared simulcast coding, multi-view coding (MVC), and mixed-resolution stereoscopic coding (MRSC) without inter-view prediction, using subjective tests with simple coding schemes. In these tests MVC provided the best visual quality for the testbed we used, but MRSC without inter-view prediction still proved promising for some of the test sequences, especially at low bit rates. We then adapted the Joint Video Team's reference multi-view decoder to run on a ZOOM OMAP34x Mobile Development Kit (MDK). The first decoding performance tests on the MDK yielded around four stereo frames per second at a frame resolution of 640×352.
To further improve the performance, the decoder software was profiled and the most demanding algorithms were ported to run on the embedded DSP core. Tests showed performance gains ranging from 25% to 60% on the DSP core. However, due to the design of the hardware platform and the structure of the reference decoder, the time spent on the communication link between the main processing unit and the DSP core was found to be high, rendering the performance gains insignificant. We therefore conclude that the reference decoder should be restructured to use this communication link as infrequently as possible in order to achieve overall performance gains from the DSP core.
Item Open Access Volumetric rendering techniques for scientific visualization (2014) Okuyan, Erhan
Direct volume rendering is widely used in many applications where the inside of a transparent or partially transparent material must be visualized. We explored several aspects of the problem. First, we proposed a view-dependent selective refinement scheme to reduce the high computational requirements without significantly affecting image quality. Then, we explored parallel implementations of direct volume rendering, both on GPUs and on multi-core systems. Finally, we used direct volume rendering approaches to create a tool, MaterialVis, to visualize amorphous and/or crystalline materials. Visualization of large volumetric datasets has always been an important problem. Due to the high computational requirements of volume-rendering techniques, achieving interactive rates is a real challenge. We present a selective refinement scheme that dynamically refines the mesh according to the camera parameters. This scheme automatically determines the impact of different parts of the mesh on the output image and refines the mesh accordingly, without requiring any user input.
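The thesis determines the impact of each mesh region on the output image automatically from the camera parameters. One common view-dependent criterion, which this hypothetical sketch assumes (it is an illustration of the general idea, not the thesis' actual metric), is the projected screen-space size of a mesh cell:

```python
import math

def screen_space_extent(world_size, distance, fov_y, screen_height):
    # Approximate projected size, in pixels, of a cell of diameter
    # world_size located at the given distance from the camera.
    pixels_per_radian = screen_height / fov_y
    return (world_size / distance) * pixels_per_radian

def should_refine(world_size, distance, fov_y=math.radians(60),
                  screen_height=1080, threshold_px=4.0):
    # Refine only cells whose projection exceeds the pixel threshold,
    # so regions with negligible screen contribution stay coarse.
    extent = screen_space_extent(world_size, distance, fov_y, screen_height)
    return extent > threshold_px
```

With a criterion of this kind, refinement decisions follow the camera automatically: nearby or large cells are split, while distant or tiny cells keep their simplified representation.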
The view-dependent refinement scheme uses a progressive mesh representation based on an edge-collapse tetrahedral mesh simplification algorithm. We tested our view-dependent refinement framework on an existing state-of-the-art volume renderer. Thanks to the low overhead of dynamic view-dependent refinement, we achieve interactive frame rates for rendering common datasets at decent image resolutions. Achieving interactive rates for direct volume rendering of large unstructured volumetric grids is a challenging problem, but parallelizing direct volume rendering algorithms can help achieve this goal. Using the Compute Unified Device Architecture (CUDA), we propose a GPU-based volume rendering algorithm based on a cell-projection ray-casting algorithm designed for CPU implementations. We also propose a multi-core parallelized version of the cell-projection algorithm using OpenMP. In both algorithms, we favor image quality over rendering speed. Our algorithms have a low memory footprint, allowing us to render large datasets, and support progressive rendering. We compared the GPU implementation with the serial and multi-core implementations and observed significant speed-ups that, together with progressive rendering, enable interactive rates for large datasets. Visualization of materials is an indispensable part of their structural analysis. We developed MaterialVis, a visualization tool for amorphous as well as crystalline structures. Unlike existing tools, MaterialVis represents material structures as a volume and a surface manifold, in addition to plain atomic coordinates. Both amorphous and crystalline structures exhibit topological features as well as various defects. MaterialVis provides a wide range of functionality for visualizing such topological structures and crystal defects interactively.
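At the core of ray-casting direct volume renderers such as the ones described above is front-to-back compositing of samples along each ray. The following is a minimal single-ray sketch (the transfer function and uniform sampling are toy assumptions for illustration, not the thesis' cell-projection algorithm):

```python
import numpy as np

def composite_ray(samples, transfer):
    # Front-to-back alpha compositing along one ray.
    # samples: scalar field values at ray sample points, front first.
    # transfer: maps a scalar value to an (emission, opacity) pair.
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)
        color += (1.0 - alpha) * a * c  # emission attenuated by what is in front
        alpha += (1.0 - alpha) * a      # accumulate opacity
        if alpha > 0.99:                # early ray termination
            break
    return color, alpha

# Toy transfer function: emission and opacity proportional to density.
tf = lambda s: (s, 0.1 * s)
color, alpha = composite_ray(np.linspace(0.0, 1.0, 64), tf)
```

Front-to-back order makes early ray termination possible: once accumulated opacity saturates, the remaining samples cannot affect the pixel, which is one of the standard optimizations in interactive volume renderers.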
Direct volume rendering techniques are used to visualize the volumetric features of materials, such as crystal defects, which are responsible for the distinct fingerprints of a specific sample. In addition, the tool provides surface visualization to extract hidden topological features within the material. Together with a rich set of parameters and options to control the visualization, MaterialVis allows users to efficiently visualize various aspects of materials as generated by modern analytical techniques such as Atom Probe Tomography.