Browsing by Subject "Inter-rater reliability"
Now showing 1 - 2 of 2
Item Open Access
Reliability-related issues in the context of student evaluations of teaching in higher education (Sciedu Press, 2015)
Kalender, İ.

Student evaluations of teaching (SET) have been the principal instrument to elicit students' opinions in higher education institutions. Many decisions, including high-stakes ones, are made based on SET scores reported by students. In this respect, the reliability of SET scores is of considerable importance. This paper argues that there are problems in the choice and use of reliability indices in the SET context. Three hypotheses were tested: (i) using internal consistency measures is misleading in the SET context, since the variability is mainly due to disagreement between students' ratings, which calls for inter-rater reliability coefficients; (ii) the minimum number of feedbacks is not reached in most classes, resulting in unreliable decisions; and (iii) calculating a reliability coefficient under the assumption of a common factor structure across all classes is misleading, because a common model may not be tenable for all of them. Results showed that relying on internal consistency alone to assess the reliability of SET scores may lead to wrong decisions. A considerable number of classes lacked the feedbacks needed to achieve acceptable reliability levels. Findings also indicated that the factorial model differed across several groups.

Item Open Access
Writing portfolio assessment and inter-rater reliability at Yıldız Technical University School of Foreign Languages Basic English Department (2005)
Türkkorur, Asuman

This research study investigated the use of writing portfolios and their assessment by raters. In particular, it compared the inter-rater reliability of the portfolio assessment criteria currently in use and the new portfolio assessment criteria proposed for Yıldız Technical University, School of Foreign Languages, Basic English Department. The perspectives of the participants on the portfolio assessment scheme and the criteria were also analyzed. The study was conducted at Yıldız Technical University, School of Foreign Languages, Basic English Department in the spring semester of 2005. Data were collected through portfolio grading sessions, focus group discussions, and individual interviews. The participants were seven English writing instructors working at the department at the time. The instructors scored twelve student portfolios in two separate sessions, using both the criteria customarily used in the institution and the new analytic criteria. Focus group discussions were held before and after the grading sessions, and at the end of the grading sessions the instructors were interviewed individually. Grading sessions, focus group discussions, and interviews were audiotaped and transcribed. The inter-rater reliability for both criteria types was calculated and found to be marginal. The statistical analysis revealed no difference in inter-rater reliability between the groups in either grading session. However, analysis of the focus group discussions and interviews indicated that instructors would appreciate some form of more standardized, analytic, and reliable criteria for portfolio grading.
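The first abstract's hypothesis (i) turns on the difference between internal consistency and inter-rater agreement. The sketch below illustrates that distinction with simulated data; it is not the paper's code or data. Cronbach's alpha is computed across SET items within one class, and a one-way single-rater intraclass correlation, ICC(1,1), is computed across classes treating students as raters. The class sizes, item counts, and variance components are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data, not from the paper: 30 students in one class
# answer a 10-item SET form. Students disagree with each other (large
# per-student effect), but each student's items move together, so
# internal consistency is high despite the disagreement.
student_effect = rng.normal(0, 1.5, size=(30, 1))
items = student_effect + rng.normal(0, 0.4, size=(30, 10))

def cronbach_alpha(X):
    """Internal consistency across items; rows = respondents."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def icc_1(Y):
    """One-way single-rater ICC(1,1); rows = targets, cols = raters."""
    n, k = Y.shape
    grand = Y.mean()
    msb = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    msw = ((Y - Y.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Alpha within the class is high even though students disagree...
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")

# ...while the single-rater ICC across 20 classes x 30 students is low
# when between-class differences are small relative to rater noise.
class_effect = rng.normal(0, 0.3, size=(20, 1))
class_ratings = class_effect + rng.normal(0, 1.5, size=(20, 30))
print(f"ICC(1,1): {icc_1(class_ratings):.2f}")
```

With these variance components, alpha comes out high while the ICC is near zero: the two coefficients answer different questions, which is the abstract's point about why internal consistency alone can mislead.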
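The second study reports inter-rater reliability for seven raters scoring twelve portfolios. One common coefficient for that design is the two-way random, absolute-agreement, single-rater ICC(2,1) of Shrout and Fleiss (1979); the abstract does not state which coefficient was used, so this is a plausible sketch, not the study's method, and the scores below are invented.

```python
import numpy as np

def icc_2_1(Y):
    """Two-way random, absolute agreement, single rater: ICC(2,1).
    Rows = targets (portfolios), cols = raters (Shrout & Fleiss, 1979)."""
    n, k = Y.shape
    grand = Y.mean()
    row_m = Y.mean(axis=1, keepdims=True)
    col_m = Y.mean(axis=0, keepdims=True)
    msr = k * ((row_m - grand) ** 2).sum() / (n - 1)   # portfolio effect
    msc = n * ((col_m - grand) ** 2).sum() / (k - 1)   # rater severity
    mse = ((Y - row_m - col_m + grand) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical scores mirroring the study's design: 12 portfolios
# scored by 7 raters; true quality, rater severity, and noise are
# simulated, not the study's data.
rng = np.random.default_rng(1)
true_quality = rng.normal(75, 8, size=(12, 1))
rater_severity = rng.normal(0, 3, size=(1, 7))
scores = true_quality + rater_severity + rng.normal(0, 6, size=(12, 7))

print(f"Single-rater ICC(2,1): {icc_2_1(scores):.2f}")
```

Under these assumed variance components the single-rater ICC lands in the middling range, the kind of "marginal" reliability the abstract describes; comparing two criteria sets would amount to running this on the score matrix from each grading session.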