Browsing by Subject "Real data sets"
Now showing 1 - 4 of 4
Item Open Access
An approximation algorithm for computing the visibility region of a point on a terrain and visibility testing (IEEE, 2014-01)
Alipour, S.; Ghodsi, M.; Güdükbay, Uğur; Golkari, M.
Given a terrain and a query point p on or above it, we want to count the number of triangles of the terrain that are visible from p. We present an approximation algorithm for this problem. We implement the algorithm and run it on real data sets. The experimental results show that our approximate solution is very close to the exact solution, and that the running time of our algorithm is better than that of similar existing work. An analysis of the algorithm's time complexity is also presented. We also consider the visibility testing problem, where the goal is to test whether p and a given triangle of the terrain are mutually visible. We propose an algorithm for this problem and show that its average running time is the same as that of testing visibility between two query points p and q.

Item Open Access
Dipole source reconstruction of brain signals by using particle swarm optimization (IEEE, 2009)
Alp, Yaşar Kemal; Arıkan, Orhan; Karakaş, S.
Resolving the sources of neural activity is of prime importance in the analysis of Event Related Potentials (ERP). These sources can be modeled as effective dipoles. Identifying the dipole parameters from the measured multichannel data is called the EEG inverse problem. In this work, we propose a new method for the solution of the EEG inverse problem. Our method uses the Particle Swarm Optimization (PSO) technique to optimally choose the dipole parameters. Simulations on synthetic data sets show that our method accurately localizes the dipoles at their actual locations. For real data sets, since the actual dipole parameters are not known, the fit error between the measured data and the reconstructed data is minimized.
It has been observed that our method reduces this error to the noise level by localizing only a few dipoles in the brain.

Item Open Access
Preventing unauthorized data flows (Springer, Cham, 2017)
Uzun, Emre; Parlato, G.; Atluri, V.; Ferrara, A. L.; Vaidya, J.; Sural, S.; Lorenzi, D.
Trojan Horse attacks can lead to unauthorized data flows and can cause either a confidentiality violation or an integrity violation. Existing solutions to this problem employ analysis techniques that keep track of all subject accesses to objects, and hence can be expensive. In this paper we show that for an unauthorized flow to exist in an access control matrix, a flow of length one must exist. Thus, to eliminate unauthorized flows, it is sufficient to remove all one-step flows, thereby avoiding the need for expensive transitive closure computations. This new insight allows us to develop an efficient methodology to identify and prevent all unauthorized flows leading to confidentiality and integrity violations. We develop separate solutions for two different environments that occur in real life, and experimentally validate the efficiency and restrictiveness of the proposed approaches using real data sets. © IFIP International Federation for Information Processing 2017.

Item Open Access
Selective replicated declustering for arbitrary queries (Springer, 2009-08)
Oktay, K. Yasin; Türk, Ata; Aykanat, Cevdet
Data declustering is used to minimize query response times in data-intensive applications. In this technique, the query retrieval process is parallelized by distributing the data among several disks; this is useful in applications such as geographic information systems that access huge amounts of data. Declustering with replication is an extension of declustering that allows data replicas in the system. Many replicated declustering schemes have been proposed. Most of these schemes generate two or more copies of all data items.
However, some applications have very large data sizes, and keeping even two copies of all data items may not be feasible. In such systems, selective replication is a necessity. Furthermore, existing replication schemes are not designed to utilize query distribution information when such information is available. In this study, we propose a replicated declustering scheme that decides both which data items to replicate and how to assign all data items to disks when replication capacity is limited. We make use of available query information to decide on the replication and partitioning of the data, and we try to optimize the aggregate parallel response time. We propose and implement a Fiduccia-Mattheyses-like iterative improvement algorithm to obtain a two-way replicated declustering, and we use this algorithm in a recursive framework to generate a multi-way replicated declustering. Experiments conducted with arbitrary queries on real data sets show that, especially under low replication constraints, the proposed scheme yields better performance than existing replicated declustering schemes. © 2009 Springer.
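The two-way iterative improvement idea described in the last abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the cost model (a query's response time taken as the maximum number of its items on one disk), the single-move improvement rule, and the toy data are illustrative assumptions, and replication is omitted for brevity.

```python
# Minimal sketch of two-way declustering with Fiduccia-Mattheyses-style
# iterative improvement (illustrative only; not the paper's algorithm).

def query_cost(assignment, queries):
    # Response time of one query = max number of its items on one disk;
    # aggregate cost = sum over all queries.
    total = 0
    for q in queries:
        on_disk = [0, 0]
        for item in q:
            on_disk[assignment[item]] += 1
        total += max(on_disk)
    return total

def improve(assignment, queries, max_passes=10):
    # Each pass tentatively moves every item to the other disk and commits
    # the single move with the best cost reduction; stop when no move helps.
    for _ in range(max_passes):
        base = query_cost(assignment, queries)
        best_gain, best_item = 0, None
        for item in assignment:
            assignment[item] ^= 1            # tentative move to other disk
            gain = base - query_cost(assignment, queries)
            assignment[item] ^= 1            # undo
            if gain > best_gain:
                best_gain, best_item = gain, item
        if best_item is None:
            break
        assignment[best_item] ^= 1           # commit the best move
    return assignment

# Toy instance: 4 items, 3 queries of two items each, all items on disk 0.
queries = [{0, 1}, {2, 3}, {1, 2}]
assignment = improve({i: 0 for i in range(4)}, queries)
```

On this toy instance, the improvement passes end up splitting every query across the two disks, reducing the aggregate cost from 6 to its optimum of 3.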