Parallel sequence mining on distributed-memory systems

Date
2001
Advisor
Gürsoy, Atilla
Publisher
Bilkent University
Language
English
Abstract

Discovering all the frequent sequences in very large databases is a time-consuming task. Moreover, the size of such databases forces the original database to be partitioned into chunks that can be processed in main memory. Most current algorithms require as many database scans as the length of the longest frequent sequence. Spade is a fast algorithm that reduces the number of database scans to three by using a lattice-theoretic approach to decompose the original problem into small pieces (equivalence classes) that can be processed independently in main memory. In this thesis work, we present dSpade, a parallel algorithm based on Spade for discovering the set of all frequent sequences, targeting distributed-memory systems. dSpade uses horizontal database partitioning, where each processor stores an equal number of customer transactions. dSpade is a synchronous algorithm while discovering frequent 1-sequences (F1) and frequent 2-sequences (F2): each processor performs the same computation on its local data to obtain local support counts and broadcasts the results to the other processors to find the globally frequent sequences. After all of F1 and F2 are discovered, the frequent sequences are inserted into a lattice to decompose the original problem into equivalence classes. The equivalence classes are mapped with a greedy heuristic to the least loaded processors in a round-robin manner. Finally, each processor asynchronously computes Fk on its mapped equivalence classes to find all frequent sequences. We present results of performance experiments conducted on a 32-node Beowulf cluster. The experiments show that dSpade delivers good speedup and scales linearly in the database size.
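The greedy mapping of equivalence classes to the least loaded processors can be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes each class has a precomputed weight (e.g. an estimated work cost), sorts the classes by decreasing weight, and repeatedly assigns the next class to whichever processor currently carries the least load, tracked with a min-heap. The function name `map_classes` and the weight inputs are hypothetical.

```python
import heapq


def map_classes(class_weights, num_procs):
    """Greedily assign each equivalence class (heaviest first) to the
    currently least-loaded processor.

    class_weights: dict mapping class label -> estimated work cost
    num_procs: number of processors available
    Returns a dict mapping class label -> processor id.
    """
    # Min-heap of (current load, processor id); all processors start empty.
    heap = [(0, p) for p in range(num_procs)]
    heapq.heapify(heap)

    assignment = {}
    # Heaviest classes first, so large pieces of work are spread early.
    for cls, weight in sorted(class_weights.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)      # least-loaded processor
        assignment[cls] = p
        heapq.heappush(heap, (load + weight, p))
    return assignment
```

For example, mapping classes with weights {A: 5, B: 4, C: 3, D: 2} onto two processors yields balanced loads of 7 and 7, since A and D go to one processor while B and C go to the other.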
