Evaluation of move method refactorings recommendation algorithms: are we doing it right?

Evgenii Novozhilov, Ivan Veselov, Michail Pravilov, Timofey Bryksin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citations

Abstract

Previous studies have introduced various techniques for detecting Move Method refactoring opportunities. However, their authors evaluate these techniques in different ways, so the results reported in different papers cannot be compared with each other, and it is nearly impossible to determine which algorithm works better in practice. In this paper, we provide an overview of existing evaluation approaches for Move Method refactoring recommendation algorithms and discuss their advantages and disadvantages. We also propose a tool for generating large synthetic datasets suitable both for evaluating such algorithms and for training complex machine learning models for Move Method refactoring recommendation.
Original language: English
Title of host publication: IWOR '19: Proceedings of the 3rd International Workshop on Refactoring
Pages: 23-26
ISBN (Electronic): 9781728122700
DOIs
State: Published - May 2019
Event: 3rd International Workshop on Refactoring - Montreal, Quebec, Canada
Duration: 28 May 2019 - 28 May 2019

Conference

Conference: 3rd International Workshop on Refactoring
Abbreviated title: IWOR '19
Country: Canada
City: Montreal, Quebec
Period: 28/05/19 - 28/05/19

Scopus subject areas

  • Software
  • Safety, Risk, Reliability and Quality

Keywords

  • Algorithms evaluation
  • Automatic refactoring recommendation
  • Code smells
  • Dataset generation
  • Feature envy
  • Move method refactoring
