Benchmarking of hyperparameter optimization techniques for machine learning applications in production
Machine learning (ML) has become a key technology for leveraging the potential of the large amounts of data generated in digitized and connected production processes. In projects developing ML solutions for production applications, the selection of hyperparameter optimization (HPO) techniques is a key task that significantly impacts the performance of the resulting ML solution. However, selecting the most suitable HPO technique for an ML use case is challenging, since HPO techniques have individual strengths and weaknesses, while ML use cases in production differ widely in their application areas, objectives, and resources. This makes the selection of HPO techniques in production a highly complex task that requires decision support. We therefore present a structured approach for benchmarking HPO techniques and for integrating the empirical data generated in benchmarking experiments into decision support systems. Based on the data generated in a large-scale benchmarking study, the validation results demonstrate that the use of benchmarking data improves decision-making in HPO technique selection and thus helps to exploit the full potential of ML solutions in production applications.
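To make the notion of benchmarking HPO techniques concrete, the following is a minimal illustrative sketch (not the study's actual protocol): two common HPO techniques, grid search and random search, are compared on a synthetic classification task by recording the best cross-validation score and the wall-clock time each technique consumes. The dataset, model, and search spaces are placeholder assumptions for illustration only.

```python
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

# Synthetic stand-in for a production classification task (assumption).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Illustrative search space; real spaces depend on the use case.
param_grid = {"n_estimators": [25, 50], "max_depth": [3, None]}


def benchmark(searcher, name):
    """Fit one HPO searcher and record its best CV score and runtime."""
    start = time.perf_counter()
    searcher.fit(X, y)
    elapsed = time.perf_counter() - start
    return {"technique": name, "best_score": searcher.best_score_, "seconds": elapsed}


results = [
    benchmark(
        GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=3),
        "grid search",
    ),
    benchmark(
        RandomizedSearchCV(
            RandomForestClassifier(random_state=0),
            param_grid,
            n_iter=3,
            cv=3,
            random_state=0,
        ),
        "random search",
    ),
]

for r in results:
    print(f"{r['technique']}: score={r['best_score']:.3f}, time={r['seconds']:.1f}s")
```

Repeating such runs over many use cases and techniques yields the kind of empirical data that can feed a decision support system for HPO technique selection.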