Browsing by Subject "Rudner approach"
Now showing 1 - 2 of 2
Item (Open Access)
A comparability and classification analysis of computerized adaptive and conventional paper-based versions of an English language proficiency reading subtest (2022-01)
Kaya, Elif

The current study compares the computerized adaptive test (CAT) and paper-based test (PBT) versions of an English language proficiency reading subtest in terms of psychometric qualities. The study also investigates the classification performance of CATs not designed for classification purposes, with reference to the PBT version. Real-data-based simulations were conducted under varying test conditions. The results demonstrate that ability levels estimated by the CATs and the PBT are similar. A relatively large reduction in test length can be obtained with standard error thresholds of 0.50 and 0.40, and CATs terminated at 20, 25, and 30 items performed well with acceptable SE values. The reliability of CAT ability estimates was comparable to that of the PBT, and the two sets of estimates were highly correlated. For the classification analysis, classification accuracy (CA) and classification consistency (CC) were estimated using the Rudner method. Classification analyses were conducted with single and multiple cut-off points. The results showed that the use of a single cut-off score produced better classification performance, particularly for high- and low-ability groups. By contrast, the simultaneous use of multiple cut-off scores yielded significantly lower classification performance. Overall, the results highlight the potential for CATs not designed specifically for classification to serve classification purposes and indicate avenues for further research.

Item (Open Access)
IRT-based classification analysis of an English language reading proficiency subtest (SAGE, 2022)
Kaya, Elif; O’Grady, Stefan; Kalender, İlker

Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive testing (CAT). Using real-data simulations, the current study investigated the classification performance of CAT on the reading section of an English language proficiency test and made comparisons with the paper-based version of the same test. Classification analysis was carried out to estimate classification accuracy (CA) and classification consistency (CC) by applying different numbers and locations of cutoff points. The results showed that classification was suitable when a single cutoff score was used, particularly for high- and low-ability test takers. Classification performance declined significantly when multiple cutoff points were employed simultaneously. Content analysis also raised important questions about construct coverage in CAT. The results highlight the potential for CAT to serve classification purposes and outline avenues for further research.
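Both abstracts estimate classification accuracy (CA) and classification consistency (CC) with the Rudner method, which treats each examinee's true ability as normally distributed around the IRT ability estimate with that estimate's standard error; CC is then the expected agreement of two parallel classifications, i.e., the sum of squared category probabilities per examinee. The sketch below is a minimal illustration of that idea and is not the papers' code: the function name, the simulated data, the fixed 0.40 SE, and the cut score are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def rudner_classification(theta_hat, se, cutoffs):
    """Expected classification accuracy (CA) and consistency (CC)
    via the Rudner approach: examinee i's true ability is
    approximated as N(theta_hat[i], se[i]**2).

    theta_hat : estimated abilities, one per examinee
    se        : conditional standard errors of those estimates
    cutoffs   : cut scores on the theta scale
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    se = np.asarray(se, dtype=float)
    cuts = np.sort(np.asarray(cutoffs, dtype=float))
    # Category bounds: (-inf, c1], (c1, c2], ..., (cK, +inf)
    bounds = np.concatenate(([-np.inf], cuts, [np.inf]))
    # p[i, k] = P(examinee i's true ability lies in category k)
    cdf = norm.cdf((bounds[None, :] - theta_hat[:, None]) / se[:, None])
    p = np.diff(cdf, axis=1)
    # Observed category implied by the point estimate
    observed = np.searchsorted(cuts, theta_hat, side="right")
    # CA: mean probability that the observed category is the true one
    ca = p[np.arange(len(theta_hat)), observed].mean()
    # CC: mean probability that two parallel classifications agree
    cc = (p ** 2).sum(axis=1).mean()
    return ca, cc

# Hypothetical usage: 500 simulated examinees, a CAT stopped at a
# fixed SE of 0.40, and a single cut score at theta = 0.0.
rng = np.random.default_rng(0)
theta = rng.normal(size=500)
ca, cc = rudner_classification(theta, np.full(500, 0.40), cutoffs=[0.0])
print(f"CA = {ca:.3f}, CC = {cc:.3f}")
```

Under these assumptions the pattern the abstracts report falls out directly: adding more cut scores, or placing one near the mode of the ability distribution, lowers the per-examinee probability mass in the observed category, so CA and CC both drop.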