Ideal step size estimation for the multinomial logistic regression
Date
2025-01-22
Publisher
Pontificia Universidad Católica del Perú
Abstract
At the core of deep learning optimization lie algorithms such as stochastic gradient descent (SGD), which uses a subset of the data at each iteration to estimate the gradient and minimize a cost function. Adaptive algorithms built on SGD are well known for exploiting gradient information from past iterations, generating momentum or memory that allows a more accurate prediction of the true gradient slope in future iterations and thus accelerates convergence. Nevertheless, these algorithms still require an initial (scalar) learning rate (LR) as well as an LR scheduler.
In this work we propose a new SGD algorithm that estimates the initial (scalar) LR via an adaptation of the ideal Cauchy step size for multinomial logistic regression; furthermore, the LR is updated recursively up to a given number of epochs, after which a decaying LR scheduler is used. The proposed method is assessed on several well-known multiclass classification architectures and compares favorably against other well-tuned adaptive alternatives, both scalar and spatially adaptive, including the Adam algorithm.
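The abstract only names the ideal Cauchy step size and does not reproduce the derivation. As general background, for a steepest-descent step on a locally quadratic objective the Cauchy (exact line-search) step is alpha = (g^T g) / (g^T H g), where g is the gradient and H the Hessian. The sketch below is a minimal illustration of that idea for plain mini-batch SGD on multinomial logistic regression, with the Hessian-vector product H g approximated by finite differences; it is a toy example under those assumptions, not the algorithm proposed in the thesis, and every name, constant, and schedule in it is hypothetical.

```python
# Minimal sketch (assumptions only, not the thesis's method): estimate a
# Cauchy-like scalar step size  alpha = g.g / g.(Hg)  for multinomial
# logistic regression, with H g approximated by a finite difference of the
# gradient, then run mini-batch SGD with that LR and a simple decay later.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def grad(W, X, Y):
    """Gradient of the mean softmax cross-entropy w.r.t. the weights W."""
    P = softmax(X @ W)
    return X.T @ (P - Y) / X.shape[0]

def cauchy_step(W, X, Y, eps=1e-4):
    """alpha = g.g / g.(Hg), with Hg approximated by finite differences."""
    g = grad(W, X, Y)
    Hg = (grad(W + eps * g, X, Y) - g) / eps   # Hessian-vector product estimate
    return float((g * g).sum() / max((g * Hg).sum(), 1e-12))

# Toy 3-class problem: 600 samples, 5 features, one-hot targets.
n, d, k = 600, 5, 3
X = rng.normal(size=(n, d))
true_W = rng.normal(size=(d, k))
labels = (X @ true_W).argmax(axis=1)
Y = np.eye(k)[labels]

W = np.zeros((d, k))
lr = cauchy_step(W, X[:64], Y[:64])            # initial scalar LR from one mini-batch
print(f"estimated initial LR: {lr:.3f}")

for epoch in range(20):
    if epoch >= 10:                            # switch to a simple decaying schedule
        lr *= 0.9
    for i in range(0, n, 64):
        xb, yb = X[i:i + 64], Y[i:i + 64]
        W -= lr * grad(W, xb, yb)

acc = (softmax(X @ W).argmax(axis=1) == labels).mean()
print(f"training accuracy: {acc:.3f}")
```

In this sketch the finite-difference curvature estimate stands in for any closed-form expression, and the multiplicative decay after a fixed number of epochs merely mirrors the two-phase scheduling described in the abstract.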
Keywords
Machine learning (Artificial intelligence), Deep learning (Machine learning), Mathematical optimization, Regression analysis
Creative Commons license
Except where otherwise noted, this item's license is described as info:eu-repo/semantics/openAccess