Browsing by Subject "computerized adaptive testing"
Now showing 1 - 2 of 2
Item
Comparability of scores from CAT and paper and pencil implementations of student selection examination to higher education (2015-05)
Ayhan, Ayşe Sayman
The purpose of this study was to investigate the viability of a computerized adaptive testing (CAT) format as an alternative to the paper and pencil (P&P) format of the student selection examination (SSE) in Turkey. Scores obtained from the P&P format of the SSE and from CAT through post-hoc simulations were compared using the science subtest items. Different test termination rules (fixed length and fixed standard error) and ability estimation methods (EAP and MLE) were used to operate the CAT version of the SSE P&P test. Fixed test lengths of 10, 15, and 25 items and fixed standard error thresholds of 0.30, 0.20, and 0.10 were used as termination rules. Results indicated significant correlations between SSE and CAT scores. Comparisons between the CAT and P&P results also revealed similar ability distributions and a significant reduction in the number of items administered in CAT. The findings showed that CAT could achieve comparable reliability with fewer items than the P&P test. This study suggests that CAT can be an alternative to the SSE, with scores comparable to the P&P format.

Item
IRT-based classification analysis of an English language reading proficiency subtest (SAGE, 2022)
Kaya, Elif; O’Grady, Stefan; Kalender, İlker
Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive testing (CAT). Using real-data simulations, the current study investigated the classification performance of CAT on the reading section of an English language proficiency test and made comparisons with the paper-based version of the same test. Classification analysis was carried out to estimate classification accuracy (CA) and classification consistency (CC) by applying different locations and numbers of cutoff points. The results showed that classification was suitable when a single cutoff score was used, particularly for high- and low-ability test takers. Classification performance declined significantly when multiple cutoff points were employed simultaneously. Content analysis also raised important questions about construct coverage in CAT. The results highlight the potential for CAT to serve classification purposes and outline avenues for further research.
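
Both abstracts describe post-hoc CAT simulations in which items are selected adaptively, ability is re-estimated after each response, and testing stops when a fixed length or a fixed standard error threshold is reached. The Python sketch below is a minimal illustration of that loop under a 2PL IRT model with maximum-information item selection and EAP estimation; the item parameters, cutoff score, and true ability value are hypothetical placeholders, not data from either study.

```python
# Minimal post-hoc style CAT loop under a 2PL IRT model (hypothetical item bank).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item bank: discrimination (a) and difficulty (b) parameters.
n_items = 60
a = rng.uniform(0.8, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

# Quadrature grid and standard normal prior for EAP estimation.
grid = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * grid ** 2)
prior /= prior.sum()

def eap(administered, responses):
    """EAP ability estimate and posterior standard deviation (used as the SE)."""
    like = np.ones_like(grid)
    for j, u in zip(administered, responses):
        p = p_correct(grid, a[j], b[j])
        like *= p if u == 1 else (1.0 - p)
    post = like * prior
    post /= post.sum()
    theta_hat = np.sum(grid * post)
    se = np.sqrt(np.sum((grid - theta_hat) ** 2 * post))
    return theta_hat, se

def run_cat(true_theta, max_len=25, se_cut=0.30):
    """Administer items until the fixed-SE rule or the fixed-length rule fires."""
    administered, responses = [], []
    theta_hat, se = 0.0, np.inf
    while len(administered) < max_len and se > se_cut:
        # Select the unused item with maximum information at the current estimate.
        info = item_information(theta_hat, a, b)
        info[administered] = -np.inf
        j = int(np.argmax(info))
        # Simulate the examinee's response from the true ability.
        u = int(rng.random() < p_correct(true_theta, a[j], b[j]))
        administered.append(j)
        responses.append(u)
        theta_hat, se = eap(administered, responses)
    return theta_hat, se, len(administered)

theta_hat, se, used = run_cat(true_theta=0.5)
print(f"estimate={theta_hat:.2f}, SE={se:.2f}, items used={used}")

# Single-cutoff classification in the spirit of the second study (hypothetical cutoff).
cutoff = 0.0
print("pass/fail decision agrees with true ability:", (theta_hat >= cutoff) == (0.5 >= cutoff))
```

In practice, both studies would repeat such a loop over many examinees and compare the resulting CAT scores, test lengths, and classification decisions against those from the full paper and pencil test.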