The effect of raters' prior knowledge of students' proficiency levels on their assessment during oral interviews
Abstract
This quasi-experimental study, focusing on scorer reliability in oral interview assessments, investigates the possible existence of rater bias and the effect(s), if any, of raters' prior knowledge of students' proficiency levels on their scoring. To this end, the study was carried out in two sessions, a pre-test and a post-test, with 15 English as a foreign language (EFL) instructors who also serve as raters in the oral assessments at the Turkish state university where the study was conducted. The researcher selected six videos, recorded during the 2011-2012 academic year proficiency exam at the same university, as rating materials. Each of these videos included the oral interview performances of two students. Data collection began with a norming session in which the scores the raters assigned to the performances of four students, recorded in two additional videos, were discussed for standardization. After the norming session, the participants rated the performances individually, using an analytic rubric, in a pre-test and a post-test separated by an interval of at least five weeks. In both the pre- and post-test, the raters were asked to provide verbal reports on what they were thinking while assigning scores to these 12 students from three different proficiency levels. Whereas no information about the students' proficiency levels was provided to the raters in the pre-test, the raters were informed of the students' levels, both orally and in writing, in the post-test. The scores the raters assigned were filed, and the think-alouds were video-recorded for data analysis. Quantitative analysis of the pre- and post-test scores indicated a statistically significant difference between the pre- and post-test scorings of eight raters on different components of the rubric, such as Vocabulary, Comprehension, or the Total Score, which represented the final score each student received. Further analysis of all the Total Scores assigned to individual students by these 15 raters revealed that 75% of the Total Scores were lower or higher in the post-test than in the pre-test, with differences ranging from one point to more than 10 points, while 25% did not change. When all the raters' verbal reports were thematically analyzed in relation to the scores they assigned and the references they made to the students' proficiency levels, it was observed that 11 raters referred to the students' proficiency levels while assigning scores in the post-test. Furthermore, analysis of the Total Scores assigned to each group of students, each group drawn from a different proficiency level, indicated that the raters differed in their degree of severity/leniency when scoring lower- and higher-level students.