English Language Preparatory Program
Permanent URI for this collection: https://hdl.handle.net/11693/115552
Browsing English Language Preparatory Program by Author "O’Grady, Stefan"
Now showing 1 - 3 of 3
Item (Open Access): Adapting multiple-choice comprehension question formats in a test of second language listening comprehension (Sage Publications, 2021). O’Grady, Stefan.

The current study explores the impact of varying multiple-choice question preview and presentation formats in a test of second language listening proficiency targeting different levels of text comprehension. In a between-participant design, participants completed a 30-item test of listening comprehension featuring implicit and explicit information comprehension questions under one of four multiple-choice question preview and presentation conditions. Interactions between preview, presentation and comprehension in the participants’ test scores were analysed using many-facet Rasch analysis. The results suggest that the measurement of participants’ listening ability was directly influenced by the presentation of multiple-choice questions. Test scores were highest when participants were able to preview the multiple-choice question stems before the sound file and listened to the options after the text had ended. However, interactions between preview and presentation conditions and comprehension level were only statistically significant in an analysis of the low-scoring students’ item responses, which were more frequently correct when preview of item stems was available for questions targeting comprehension of implicit information. The research underscores the importance of accounting for test design when making inferences about language learners’ listening ability and will be of interest to teachers, practitioners and researchers developing listening assessment tasks.

Item (Open Access): The impact of pre-task planning on speaking test performance for English-medium university admission (Sage Publications, 2019-03). O’Grady, Stefan.

This study investigated the impact of different lengths of pre-task planning time on performance in a test of second language speaking ability for university admission. In the study, 47 Turkish-speaking learners of English took a test of English language speaking ability. The participants were divided into two groups according to their language proficiency, which was estimated through a paper-based English placement test. They each completed four monologue tasks: two picture-based narrative tasks and two description tasks. In a balanced design, each test taker was allowed a different length of planning time before responding to each of the four tasks. The four planning conditions were 30 seconds, 1 minute, 5 minutes, and 10 minutes. Trained raters awarded scores to the test takers using an analytic rating scale and a context-specific, binary-choice rating scale designed specifically for the study. The rater scores were analysed using multifaceted Rasch measurement. The impact of pre-task planning on test scores was found to be influenced by four variables: the rating scale; the task type that test takers completed; the length of planning time provided; and the test takers’ level of proficiency in the second language. Increases in scores were larger on the picture-based narrative tasks than on the two description tasks. The results also revealed a relationship between proficiency and pre-task planning, whereby statistical significance was only reached for the increases in the scores of the lowest-level test takers. Regarding the amount of planning time, the 5-minute planning condition led to the largest overall increases in scores. The research findings offer contributions to the study of pre-task planning and will be of particular interest to institutions seeking to assess the speaking ability of prospective students in English-medium educational environments.

Item (Open Access): IRT-based classification analysis of an English language reading proficiency subtest (SAGE, 2022). Kaya, Elif; O’Grady, Stefan; Kalender, İlker.

Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive testing (CAT). Using real data simulations, the current study investigated the classification performance of CAT on the reading section of an English language proficiency test and made comparisons with the paper-based version of the same test. Classification analysis was carried out to estimate classification accuracy (CA) and classification consistency (CC) by applying different locations and numbers of cutoff points. The results showed that classification was suitable when a single cutoff score was used, particularly for high- and low-ability test takers. Classification performance declined significantly when multiple cutoff points were simultaneously employed. Content analysis also raised important questions about construct coverage in CAT. The results highlight the potential for CAT to serve classification purposes and outline avenues for further research.