A statistical information extraction system for Turkish

Date

2000

Advisor

Oflazer, Kemal

Publisher

Bilkent University

Language

English

Abstract

This thesis presents the results of a study on information extraction from unrestricted Turkish text using statistical language processing methods. We have successfully applied statistical methods, using both lexical and morphological information, to the following tasks:

- The Turkish Text Deasciifier task aims to convert the ASCII characters in a Turkish text into the corresponding non-ASCII Turkish characters (i.e., "ü", "ö", "ç", "ş", "ğ", "ı", and their upper cases).
- The Word Segmentation task aims to detect word boundaries in a sequence of characters written without spaces or punctuation.
- The Vowel Restoration task aims to restore the vowels of an input stream whose vowels have been deleted.
- The Sentence Segmentation task aims to divide a stream of written or spoken words into grammatical sentences, i.e., to find the sentence boundaries.
- The Topic Segmentation task aims to divide a stream of written or spoken words into topically homogeneous blocks, i.e., to find the boundaries where the topic changes.
- The Name Tagging task aims to mark the names (persons, locations, and organizations) in a text.

For relatively simple tasks, such as Turkish Text Deasciification, Word Segmentation, and Vowel Restoration, lexical information alone is sufficient; to obtain better performance on more complex tasks, such as Sentence Segmentation, Topic Segmentation, and Name Tagging, we exploit morphological and contextual information in addition to lexical information. For sentence segmentation, we have modeled the final inflectional groups of the words and combined them with the lexical model, decreasing the error rate to 4.34%. For name tagging, in addition to the lexical and morphological models, we have also employed contextual and tag models, reaching an F-measure of 91.56%. For topic segmentation, stems of the words (nouns) have been found to be more effective than their surface forms, and we have achieved a 10.90% segmentation error rate on our test set.
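To illustrate the flavor of the lexical approach on the simplest of the tasks above, the sketch below treats deasciification as choosing, for each word, the candidate restoration that scores highest under a character-bigram model. The candidate map, toy corpus, and add-one scoring are illustrative assumptions, not the thesis's actual model.

```python
# Toy sketch of statistical deasciification: pick the candidate restoration
# of an ASCII word that maximizes a character-bigram score. The mapping,
# corpus, and smoothing here are illustrative only.
from itertools import product
from collections import Counter
import math

# ASCII letters that may stand in for a non-ASCII Turkish letter.
CANDIDATES = {"u": "uü", "o": "oö", "c": "cç", "s": "sş", "g": "gğ", "i": "iı"}

def train_bigrams(corpus):
    """Count character bigrams (with word-boundary markers) in a word list."""
    counts = Counter()
    for word in corpus:
        padded = "^" + word + "$"
        for a, b in zip(padded, padded[1:]):
            counts[(a, b)] += 1
    return counts

def score(word, counts):
    """Log score of a word under the bigram counts, with add-one smoothing."""
    padded = "^" + word + "$"
    return sum(math.log(counts[(a, b)] + 1) for a, b in zip(padded, padded[1:]))

def deasciify(word, counts):
    """Return the highest-scoring candidate restoration of an ASCII word."""
    options = [CANDIDATES.get(ch, ch) for ch in word]
    return max(("".join(c) for c in product(*options)),
               key=lambda w: score(w, counts))

corpus = ["gün", "güzel", "gün", "gurur"]   # tiny illustrative corpus
counts = train_bigrams(corpus)
print(deasciify("gun", counts))  # picks "gün" given this toy corpus
```

In the thesis's setting the same decode-the-hidden-sequence view applies, with far richer lexical statistics; the toy bigram counts here merely stand in for a model estimated from a large Turkish corpus.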
