Browsing by Author "Erdogmus, H."
Now showing 1 - 2 of 2
Item Open Access
Are computer science and engineering graduates ready for the software industry? Experiences from an industrial student training program (ACM, 2018) Tuzun, Eray; Erdogmus, H.; Ozbilgin, I. G.

It has been 50 years since the term "software engineering" was coined in 1968 at a NATO conference. The field should be relatively mature by now, with most established universities covering core software engineering topics in their Computer Science programs and others offering specialized degrees. However, many practitioners still lament a lack of skills in new software engineering hires. With the industry's growing demand for software engineers, this apparent gap becomes more and more pronounced. One corporate strategy to address this gap is for companies to develop supplementary training programs before the hiring process, which could also help them screen viable candidates. In this paper, we report on our experiences and lessons learned in conducting a summer school program aimed at screening new graduates, introducing them to core skills relevant to the organization and industry, and assessing their attitudes toward mastering those skills before the hiring process begins. Our experience suggests that such initiatives can be mutually beneficial for new hires and companies alike. We support this insight with pre- and post-training data collected from the participants during the first edition of the summer school, and with a follow-up questionnaire conducted with the participants a year later, 50% of whom were hired by the company shortly after the summer school.

Item Open Access
Cleaning ground truth data in software task assignment (Elsevier BV, 2022-05-25) Tecimer, K. A.; Tüzün, Eray; Moran, Cansu; Erdogmus, H.

Context: In collaborative software development, task assignment has many application areas, such as assigning a developer to fix a bug or assigning a code reviewer to a pull request. Most task assignment techniques in the literature build and evaluate their models on datasets collected from real projects. The techniques invariably presume that these datasets reliably represent the "ground truth". In a project dataset used to build an automated task assignment system, the recorded assignee for a task is usually assumed to be the best assignee for that task. However, in practice, the recorded assignee may not be the best possible assignee, or even a sufficiently qualified one.

Objective: We aim to clean up the ground truth by removing samples that are potentially problematic or suspect, on the assumption that removing such samples would reduce any systematic labeling bias in the dataset and lead to performance improvements.

Method: We devised a debiasing method to detect potentially problematic samples in task assignment datasets. We then evaluated the method's impact on the performance of seven task assignment techniques by comparing their Mean Reciprocal Rank (MRR) scores before and after debiasing. We used two different task assignment applications for this purpose: Code Reviewer Recommendation (CRR) and Bug Assignment (BA).

Results: In the CRR application, we achieved an average MRR improvement of 18.17% for the three learning-based techniques tested on two datasets. No significant improvements were observed for the two optimization-based techniques tested on the same datasets. In the BA application, we achieved a similar average MRR improvement of 18.40% for the two learning-based techniques tested on four different datasets.
Conclusion: Debiasing the ground truth data by removing suspect samples can help improve the performance of learning-based techniques in software task assignment applications.
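For readers unfamiliar with the evaluation metric in the second item: MRR averages the reciprocal rank of the true assignee over all tasks, i.e. MRR = (1/|S|) * sum over samples of 1/rank. The sketch below is a minimal Python illustration of the before/after comparison described in the abstract. The recommend and is_suspect functions, and the toy data, are hypothetical placeholders; the paper's actual debiasing method and recommendation models are not reproduced here.

```python
def mean_reciprocal_rank(samples, recommend):
    """MRR = mean of 1/rank of the true assignee across tasks (0 if absent)."""
    reciprocal_ranks = []
    for task, true_assignee in samples:
        ranking = recommend(task)  # ranked candidate list, best first
        if true_assignee in ranking:
            reciprocal_ranks.append(1.0 / (ranking.index(true_assignee) + 1))
        else:
            reciprocal_ranks.append(0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)

def debias(samples, is_suspect):
    """Drop samples whose ground-truth label is flagged as suspect."""
    return [s for s in samples if not is_suspect(s)]

# Hypothetical toy data: (task, ground-truth assignee) pairs.
dataset = [("fix-crash", "alice"), ("review-pr", "bob"), ("fix-typo", "carol")]

# Hypothetical recommender: a fixed ranking, purely for illustration.
recommend = lambda task: ["alice", "bob", "carol"]

# Hypothetical suspect check, e.g. flagging labels known to be unreliable.
is_suspect = lambda sample: sample[1] == "carol"

print(mean_reciprocal_rank(dataset, recommend))                      # before
print(mean_reciprocal_rank(debias(dataset, is_suspect), recommend))  # after
```

On this toy data, removing the suspect sample raises MRR from about 0.61 to 0.75, mirroring the kind of improvement the abstract reports for learning-based techniques; the real study, of course, derives its suspect-sample detection from the datasets themselves rather than from a hand-written rule.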