Adaptive Gradient Regularization: A Faster and Generalizable Optimization Technique for Deep Neural Networks

Original paper: https://arxiv.org/abs/2407.16944

Title: Adaptive Gradient Regularization: A Faster and Generalizable Optimization Technique for Deep Neural Networks

Authors: Huixiu Jiang, Ling Yang, Yu Bao, Rutong Si, Sikun Yang

Abstract:
Stochastic optimization plays a crucial role in the advancement of deep learning technologies. Over the decades, significant effort has been dedicated to improving the training efficiency and robustness of deep neural networks via various strategies, including gradient normalization (GN) and gradient centralization (GC). Nevertheless, to the best of our knowledge, no one has considered capturing the optimal gradient descent trajectory by adaptively controlling the gradient descent direction. To address this concern, this paper is the first attempt to study a new optimization technique for deep neural networks that uses the sum normalization of a gradient vector as coefficients to dynamically regularize gradients and thus effectively control the optimization direction. The proposed technique is hence named adaptive gradient regularization (AGR). It can be viewed as an adaptive gradient clipping method. The theoretical analysis reveals that AGR can effectively smooth the loss landscape, and hence can significantly improve training efficiency and model generalization performance. We note that AGR can greatly improve the training efficiency of vanilla optimizers, including Adan and AdamW, by adding only three lines of code. The final experiments, conducted on image generation, image classification, and language representation, demonstrate that the AGR method not only improves training efficiency but also enhances model generalization performance.
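
As a rough illustration of the idea described in the abstract, the sketch below shows how sum-normalized gradient coefficients could be applied to a parameter's gradient before the optimizer step. The specific update rule g <- (1 - alpha) * g with alpha = |g| / sum|g|, the function name agr_, the epsilon term, and the placement outside the optimizer are assumptions made for illustration only; the exact AGR rule and its integration into Adan and AdamW are given in the paper.

import torch

def agr_(grad, eps=1e-8):
    # Sum normalization: each component's share of the total absolute gradient mass.
    abs_grad = grad.abs()
    alpha = abs_grad / (abs_grad.sum() + eps)
    # Damp each component in proportion to its share (assumed AGR-style rule).
    return grad.mul_(1.0 - alpha)

# Hypothetical usage with a standard optimizer such as AdamW:
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
for p in model.parameters():
    if p.grad is not None:
        agr_(p.grad)  # in the paper, the equivalent few lines would sit inside the optimizer's step
optimizer.step()
optimizer.zero_grad()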
