Show simple item record

dc.contributor.author: Cambazoglu, B. B. (en_US)
dc.contributor.author: Turk, A. (en_US)
dc.contributor.author: Aykanat, Cevdet (en_US)
dc.date.accessioned: 2016-02-08T10:25:09Z
dc.date.available: 2016-02-08T10:25:09Z (en_US)
dc.date.issued: 2004 (en_US)
dc.identifier.issn: 0302-9743
dc.identifier.issn: 1611-3349
dc.identifier.uri: http://hdl.handle.net/11693/24172 (en_US)
dc.description.abstract: The need to quickly locate, gather, and store the vast amount of material on the Web necessitates parallel computing. In this paper, we propose two models, based on multi-constraint graph partitioning, for efficient data-parallel Web crawling. The models aim to balance both the amount of data downloaded and stored by each processor and the number of page requests made by the processors, while minimizing the total volume of communication during the link exchange between the processors. To evaluate the performance of the models, experimental results are presented on a sample Web repository containing around 915,000 pages. © Springer-Verlag 2004. (en_US)
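The abstract's balancing objective can be illustrated with a toy sketch. This is not the paper's method (the paper uses multi-constraint graph partitioning of the Web graph, and also minimizes inter-processor link-exchange volume); it only shows the simpler two-constraint idea of assigning pages so that each processor's download/storage load (total page size) and request load (page count) both stay balanced. The page names and sizes are hypothetical.

```python
def assign_pages(pages, k):
    """Greedily assign pages to k processors, balancing two constraints:
    total page size per processor and number of pages per processor.

    pages: list of (name, size_bytes) tuples; returns k buckets."""
    buckets = [{"pages": [], "size": 0} for _ in range(k)]
    total_size = sum(s for _, s in pages) or 1
    total_count = len(pages) or 1
    # Largest pages first, so big items are placed while loads are low.
    for name, size in sorted(pages, key=lambda p: -p[1]):
        # Pick the bucket whose combined normalized load is smallest.
        b = min(buckets, key=lambda b: b["size"] / total_size
                                       + len(b["pages"]) / total_count)
        b["pages"].append(name)
        b["size"] += size
    return buckets

# Hypothetical crawl data: (page, size in KB).
crawl = [("a.html", 40), ("b.html", 10), ("c.html", 30),
         ("d.html", 20), ("e.html", 25), ("f.html", 15)]
parts = assign_pages(crawl, 2)
for i, p in enumerate(parts):
    print(i, sorted(p["pages"]), p["size"])
# → 0 ['a.html', 'b.html', 'd.html'] 70
# → 1 ['c.html', 'e.html', 'f.html'] 70
```

A real crawler partitioner would additionally weigh the hyperlink structure, since pages linking across the partition boundary force communication between processors; that is precisely what the graph-partitioning formulation captures.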
dc.language.iso: English (en_US)
dc.source.title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (en_US)
dc.relation.isversionof: https://doi.org/10.1007/978-3-540-30182-0_80 (en_US)
dc.subject: Artificial intelligence (en_US)
dc.subject: Computers (en_US)
dc.subject: Data parallel (en_US)
dc.subject: Multi-constraints (en_US)
dc.subject: Web Crawling (en_US)
dc.subject: Web repositories (en_US)
dc.subject: Parallel processing systems (en_US)
dc.title: Data-parallel web crawling models (en_US)
dc.type: Article (en_US)
dc.department: Department of Computer Engineering (en_US)
dc.citation.spage: 801 (en_US)
dc.citation.epage: 809 (en_US)
dc.citation.volumeNumber: 3280 (en_US)
dc.identifier.doi: 10.1007/978-3-540-30182-0_80 (en_US)
dc.publisher: Springer (en_US)
dc.contributor.bilkentauthor: Aykanat, Cevdet

