Abstract

Automated machine learning (AutoML) models and deep neural networks involve many hyperparameters. The recent surge of interest in complex yet cost-effective machine learning models has revived research on hyperparameter optimization (HPO). HPO dates back many years, and its popularity has grown with the rise of deep learning. This article reviews the key issues in HPO. First, the basic hyperparameters related to model training and structure are introduced, and their importance and methods for defining their value ranges are discussed. The article then focuses on optimization algorithms and their applicability, particularly to deep neural networks, covering their efficiency and accuracy. It also examines the HPO toolkits preferred by researchers, comparing the state-of-the-art search algorithms of the analyzed toolkits, their compatibility with major deep learning frameworks, and their extensibility to new modules designed by users. Finally, the problems that arise when HPO is applied to deep learning algorithms are discussed, leading to prominent approaches for model evaluation and a comparison of optimization algorithms under limited computational resources.

DOI

10.24012/dumf.767700
