dc.contributor.advisor | Can, Fazlı | |
dc.contributor.author | Ercan, Gönenç | |
dc.date.accessioned | 2016-07-01T11:10:12Z | |
dc.date.available | 2016-07-01T11:10:12Z | |
dc.date.issued | 2012 | |
dc.identifier.uri | http://hdl.handle.net/11693/29994 | |
dc.description | Cataloged from PDF version of article. | en_US |
dc.description.abstract | When we express an idea or story, we inevitably use words that are semantically
related to each other. When this phenomenon is examined from the aspect
of words in the language, it is possible to infer the level of semantic relationship
between words by observing their distribution and use in discourse. From the
aspect of discourse, it is possible to model the structure of a document by observing
changes in lexical cohesion in order to address high-level natural
language processing tasks. In this research, lexical cohesion is investigated from
both of these aspects by first building methods for measuring semantic relatedness
of word pairs and then using these methods in the tasks of topic segmentation,
summarization and keyphrase extraction.
Measuring semantic relatedness of words requires prior knowledge about the
words. Two different knowledge bases are investigated in this research. The
first knowledge base is a manually built network of semantic relationships, while
the second relies on the distributional patterns in raw text corpora. In order to
discover which method is effective in lexical cohesion analysis, a comprehensive
comparison of state-of-the-art methods in semantic relatedness is made.
For topic segmentation, different methods using some form of lexical cohesion
are present in the literature. While some of these confine the relationships only
to word repetition or strong semantic relationships like synonymy, no other work
uses semantic relatedness measures that can be calculated for any word
pair in the vocabulary. Our experiments suggest that topic segmentation performance
improves over methods using only classical relationships and word repetition.
Furthermore, the experiments compare the performance of different semantic relatedness
methods in a high-level task. The detected topic segments are used in summarization, achieving better results than a lexical-chains-based
method that uses WordNet.
Finally, the use of lexical cohesion analysis in keyphrase extraction is investigated.
Previous research shows that keyphrases are useful tools in document
retrieval and navigation. While these findings point to a relation between keyphrases and
document retrieval performance, no other work uses this relationship to identify
the keyphrases of a given document. We aim to establish a link between the problems
of query performance prediction (QPP) and keyphrase extraction. To this end,
features used in QPP are evaluated in keyphrase extraction using a Naive Bayes
classifier. Our experiments indicate that these features improve the effectiveness
of keyphrase extraction in documents of different lengths. More importantly,
the commonly used features of frequency and first position in the text perform poorly
on shorter documents, whereas QPP features are more robust and achieve better
results. | en_US |
dc.description.statementofresponsibility | Ercan, Gönenç | en_US |
dc.format.extent | xviii, 151 leaves | en_US |
dc.language.iso | English | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | Lexical Cohesion | en_US |
dc.subject | Semantic Relatedness | en_US |
dc.subject | Topic Segmentation | en_US |
dc.subject | Summarization | en_US |
dc.subject | Keyphrase Extraction | en_US |
dc.subject.lcc | QA76.9.T48 E73 2012 | en_US |
dc.subject.lcsh | Text processing (Computer science) | en_US |
dc.title | Lexical cohesion analysis for topic segmentation, summarization and keyphrase extraction | en_US |
dc.type | Thesis | en_US |
dc.department | Department of Computer Engineering | en_US |
dc.publisher | Bilkent University | en_US |
dc.description.degree | Ph.D. | en_US |
dc.identifier.itemid | B134797 | |