
      Can computerized adaptive testing work in students’ admission to higher education programs in Turkey?

Author: Kalender, I.; Berberoglu, G.
Date: 2017-04
Source Title: Kuram ve Uygulamada Eğitim Bilimleri / Educational Sciences: Theory & Practice
Print ISSN: 1303-0485
Publisher: EDAM
Volume: 17
Issue: 2
Pages: 573 - 596
Language: English
Type: Article
      Abstract
Admission into university in Turkey is very competitive and involves a number of practical problems concerning not only the test administration process itself but also the psychometric properties of the test scores. Computerized adaptive testing (CAT) is seen as a possible alternative approach to solving these problems. In the first phase of the study, a series of CAT simulations based on real students' responses to science items was conducted in order to determine which test termination rule produced results most comparable to scores on the paper and pencil version of the test. An average of 17 items was needed to terminate the CAT administration at a reasonable reliability level, as opposed to the usual 45 items. Moreover, CAT-based science scores not only produced similar correlations when mathematics subtest scores were used as an external criterion, but also ranked the students similarly to the paper and pencil version of the test. In the second phase, a live CAT administration was implemented using an item bank of 242 items with a group of students who had previously taken the paper and pencil version of the test. A correlation of .76 was found between the CAT and paper and pencil scores for this group. The results seem to support the CAT version of the subtests as a feasible alternative in Turkey's university admission system.
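The adaptive loop the abstract refers to — administer the most informative item, update the ability estimate, and stop once the score is precise enough — can be sketched under a 2PL IRT model. This is a minimal illustrative simulation, not the procedure used in the study; the stopping threshold, item-parameter ranges, and function names are all assumptions.

```python
import math
import random

def p_correct(theta, a, b):
    """2PL probability of a correct response given ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def simulate_cat(bank, true_theta, se_stop=0.3, max_items=45, rng=None):
    """Administer items adaptively until the standard error of the
    ability estimate drops below se_stop (or max_items is reached).
    se_stop and max_items are illustrative values, not the study's."""
    rng = rng or random.Random(0)
    grid = [g / 10.0 for g in range(-40, 41)]            # theta grid
    post = [math.exp(-g * g / 2.0) for g in grid]        # N(0, 1) prior
    remaining = list(bank)
    theta_hat, se, n_given = 0.0, float("inf"), 0
    while remaining and n_given < max_items and se > se_stop:
        # Select the item with maximum information at the current estimate.
        item = max(remaining, key=lambda ab: item_information(theta_hat, *ab))
        remaining.remove(item)
        correct = rng.random() < p_correct(true_theta, *item)
        # Bayesian update of the posterior over the grid.
        post = [w * (p_correct(g, *item) if correct
                     else 1.0 - p_correct(g, *item))
                for w, g in zip(post, grid)]
        total = sum(post)
        post = [w / total for w in post]
        theta_hat = sum(w * g for w, g in zip(post, grid))       # EAP estimate
        se = math.sqrt(sum(w * (g - theta_hat) ** 2
                           for w, g in zip(post, grid)))
        n_given += 1
    return theta_hat, se, n_given
```

With a reasonably informative bank, the loop typically stops well short of the fixed-length test, which is the kind of saving the abstract reports (about 17 adaptive items versus 45 fixed items).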
      Keywords
      Computerized adaptive testing
      Item response theory
      University admission examinations
      Validity
      Permalink
      http://hdl.handle.net/11693/48265
      Published Version (Please cite this version)
http://dx.doi.org/10.12738/estp.2017.2.0280
      Collections
• Graduate School of Education
