Browsing by Subject "Computerized adaptive testing"
Now showing 1 - 3 of 3
Item Open Access  Can computerized adaptive testing work in students’ admission to higher education programs in Turkey? (EDAM, 2017-04) Kalender, I.; Berberoglu, G.
Admission into university in Turkey is very competitive and involves a number of practical problems concerning not only the test administration process itself but also the psychometric properties of test scores. Computerized adaptive testing (CAT) is seen as a possible alternative approach to solving these problems. In the first phase of the study, a series of CAT simulations based on real students’ responses to science items was conducted to determine which test termination rule produced results most comparable with scores obtained on the paper and pencil version of the test. An average of 17 items, as opposed to the usual 45, was sufficient to terminate the CAT administration at a reasonable reliability level. Moreover, CAT-based science scores not only produced similar correlations when mathematics subtest scores were used as an external criterion, but also ranked the students similarly to the paper and pencil version. In the second phase, a live CAT administration was implemented with an item bank of 242 items and a group of students who had previously taken the paper and pencil version of the test. A correlation of .76 was found between the CAT and paper and pencil scores for this group. The results support the CAT version of the subtests as a feasible alternative for Turkey’s university admission system.

Item Open Access  A comparability and classification analysis of computerized adaptive and conventional paper-based versions of an English language proficiency reading subtest (2022-01) Kaya, Elif
The current study compares the computerized adaptive test (CAT) and paper-based test (PBT) versions of an English language proficiency reading subtest in terms of psychometric qualities. The study also investigates the classification performance of CATs not designed for classification purposes with reference to the PBT version. Real data-based simulations were conducted under varying test conditions. The results demonstrate that ability levels estimated by the CATs and the PBT are similar. A relatively large item reduction can be obtained with standard error thresholds of 0.50 and 0.40, and CATs terminated at 20, 25, and 30 items performed well with acceptable SE values. The reliability of the CAT ability estimates was comparable to, and highly correlated with, that of the PBT estimates. For the classification analysis, classification accuracy (CA) and classification consistency (CC) were estimated using the Rudner method. Classification analyses were conducted on single and multiple cut-off points. The results showed that the use of a single cut-off score produced better classification performance, particularly for high- and low-ability groups, whereas the simultaneous use of multiple cut-off scores yielded markedly lower classification performance. Overall, the results highlight the potential of CATs not designed specifically for classification to serve classification purposes and indicate avenues for further research.

Item Open Access  Computerized adaptive testing for student selection to higher education (Deomed, 2012) Kalender, İlker
The purpose of the present study is to discuss the applicability of the computerized adaptive testing format as an alternative to the current student selection examinations for higher education in Turkey.
The study first outlines the problems associated with the current student selection system: these problems exert pressure on students that results in test anxiety, produce measurement practices open to criticism, and lessen the credibility of the selection system. Next, computerized adaptive tests are introduced and the advantages they provide are presented. Then the results of a study that used two research designs (simulation and live testing) are presented. The results revealed that (i) the computerized adaptive format provided a reduction of up to 80% in the number of items given to students compared with the paper and pencil format of the student selection examination, and (ii) ability estimates had high reliabilities, with correlations between the ability estimates obtained from the simulation and the traditional format higher than 0.80. At the end of the study, the solutions that a computerized adaptive testing implementation offers for the current problems are discussed, along with some issues concerning the application of the CAT format to student selection examinations in Turkey.
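All three abstracts describe the same basic machinery: an IRT-calibrated item bank, maximum-information item selection, and a termination rule based on a standard-error threshold or a maximum test length. The sketch below illustrates how such a real data-style simulation can be set up; it assumes a hypothetical 2PL item bank with randomly generated parameters and illustrative settings (a 242-item bank, a 0.40 SE threshold, a 30-item cap), and is not the code used in any of the studies listed above.

```python
# Minimal CAT simulation sketch: 2PL item bank, maximum-information item
# selection, EAP ability estimation, termination on SE threshold or test length.
# All parameter values are illustrative assumptions, not the studies' settings.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 2PL item bank: discrimination (a) and difficulty (b) parameters.
N_ITEMS = 242
a = rng.uniform(0.8, 2.0, N_ITEMS)
b = rng.normal(0.0, 1.0, N_ITEMS)

QUAD = np.linspace(-4, 4, 81)       # quadrature points for EAP estimation
PRIOR = np.exp(-0.5 * QUAD**2)      # standard normal prior (unnormalized)

def p_correct(theta, a_i, b_i):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a_i * (theta - b_i)))

def item_information(theta, a_i, b_i):
    """Fisher information of an item at ability theta."""
    p = p_correct(theta, a_i, b_i)
    return a_i**2 * p * (1.0 - p)

def eap_estimate(responses, items):
    """EAP ability estimate and posterior SD given the administered items."""
    like = np.ones_like(QUAD)
    for u, i in zip(responses, items):
        p = p_correct(QUAD, a[i], b[i])
        like *= p if u == 1 else (1.0 - p)
    post = like * PRIOR
    post /= post.sum()
    theta_hat = np.sum(QUAD * post)
    se = np.sqrt(np.sum((QUAD - theta_hat) ** 2 * post))
    return theta_hat, se

def simulate_cat(true_theta, se_threshold=0.40, max_items=30):
    """Administer items by maximum information until a stopping rule is met."""
    administered, responses = [], []
    theta_hat, se = 0.0, np.inf
    while len(administered) < max_items and se > se_threshold:
        info = item_information(theta_hat, a, b)
        info[administered] = -np.inf          # exclude already administered items
        nxt = int(np.argmax(info))
        u = int(rng.random() < p_correct(true_theta, a[nxt], b[nxt]))
        administered.append(nxt)
        responses.append(u)
        theta_hat, se = eap_estimate(responses, administered)
    return theta_hat, se, len(administered)

# Example: simulate examinees and compare estimated to true abilities.
true_thetas = rng.normal(0, 1, 200)
results = [simulate_cat(t) for t in true_thetas]
est = np.array([r[0] for r in results])
lengths = np.array([r[2] for r in results])
print("mean test length:", lengths.mean())
print("correlation(true, estimated):", np.corrcoef(true_thetas, est)[0, 1])
```

In a study like those above, the simulated responses would instead be taken from examinees' recorded answers to a paper and pencil administration, and the resulting CAT ability estimates would be correlated with the full-length test scores to judge comparability and item savings.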