Investigating the validity of ground truth in code reviewer recommendation studies

Authors: Emre Doğan, Eray Tüzün, Kazım Ayberk Tecimer, Halil Altay Güvenir
Conference location: Porto de Galinhas, Recife, Brazil
Date accessioned: 2020-01-27
Date available: 2020-01-27
Date issued: 2019
Department: Department of Computer Engineering
Date of conference: 19-20 September 2019
Conference name: 13th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM 2019)
Abstract:
Background: Selecting the ideal code reviewer in modern code review is a crucial first step toward effective code reviews. Several algorithms have been proposed in the literature for recommending the ideal code reviewer for a given pull request. The success of these code reviewer recommendation algorithms is measured by comparing the recommended reviewers with the ground truth, that is, the reviewers actually assigned in real life. In practice, however, the assigned reviewer may not be the ideal reviewer for a given pull request.
Aims: In this study, we investigate the validity of ground truth data in code reviewer recommendation studies.
Method: Through an informal literature review, we compared the reviewer selection heuristics used in real life with the algorithms used in recommendation models. We further support our claims with empirical data from code reviewer recommendation studies.
Results: The literature review and the accompanying empirical data show that the ground truth data used in code reviewer recommendation studies is potentially problematic, which reduces the validity of code reviewer datasets and reviewer recommendation studies.
Conclusion: We demonstrate cases where the ground truth in code reviewer recommendation studies is invalid and discuss potential solutions to address this issue.
DOI: 10.1109/ESEM.2019.8870190
eISBN: 9781728129686
eISSN: 1949-3789
ISBN: 9781728129693
ISSN: 1949-3770
URI: http://hdl.handle.net/11693/52823
Language: English
Publisher: IEEE Computer Society
Is version of: https://dx.doi.org/10.1109/ESEM.2019.8870190
Source title: International Symposium on Empirical Software Engineering and Measurement
Keywords: Reviewer recommendation; Ground truth; Cognitive bias; Attribute substitution; Systematic noise; Threats to validity
Type: Conference Paper

Files

Original bundle
Name: Investigating_the_validity_of_ground_truth_in_code_reviewer_recommendation_studies.pdf
Size: 119.83 KB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission