Facebook AI’s MADGRAD optimizer improves neural network training

Adaptivity without Compromise: A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization
arXiv paper abstract https://arxiv.org/abs/2101.11075
arXiv PDF paper https://arxiv.org/pdf/2101.11075.pdf

We’re introducing an optimizer for deep learning, MADGRAD. This method matches or exceeds the performance of the Adam optimizer across a varied set of realistic large-scale deep learning training problems.

GitHub https://github.com/facebookresearch/madgrad
Read the Docs documentation https://madgrad.readthedocs.io/en/latest
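
For readers who want to try it, below is a minimal sketch of swapping MADGRAD in for Adam in a PyTorch training loop. It assumes the package installs via pip install madgrad and exposes a MADGRAD optimizer class as described in the facebookresearch/madgrad repository; the toy model, data, and hyperparameter values are illustrative placeholders, not settings from the paper.

```python
# Minimal sketch: MADGRAD as a drop-in replacement for Adam in PyTorch.
# Assumes "pip install madgrad" and that the package exposes a MADGRAD class,
# as described in the facebookresearch/madgrad repository.
import torch
from madgrad import MADGRAD

# Small synthetic regression problem, for demonstration purposes only.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)
inputs = torch.randn(256, 10)
targets = torch.randn(256, 1)
loss_fn = torch.nn.MSELoss()

# Construct the optimizer exactly where Adam would normally be constructed.
optimizer = MADGRAD(
    model.parameters(),
    lr=1e-2,          # illustrative value; tune per task
    momentum=0.9,
    weight_decay=0.0,
    eps=1e-6,
)

# Standard PyTorch training loop; nothing else changes when switching optimizers.
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```

Because MADGRAD follows the standard torch.optim.Optimizer interface, switching an existing training script typically only requires changing the optimizer constructor. The repository documentation suggests that good learning-rate values for MADGRAD can differ from those used with SGD or Adam, so re-tuning the learning rate is advisable.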

Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website

LinkedIn https://www.linkedin.com/in/morris-lee-47877b7b

Photo by Ty Alvarez on Unsplash

Written by AI News Clips by Morris Lee: News to help your R&D

A computer vision consultant with 37+ years in artificial intelligence and related high-tech fields. An innovator with 66+ patents, ready to help a firm's R&D.
