Training with modified data can work like using 10 times more data

How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
arXiv paper abstract https://arxiv.org/abs/2106.10270
arXiv paper PDF https://arxiv.org/pdf/2106.10270.pdf
Google code https://github.com/google-research/vision_transformer
PyTorch code https://github.com/rwightman/pytorch-image-models

To help with augmentation, AugLy https://github.com/facebookresearch/AugLy offers over 100 augmentations for images, video, text, and audio. It also includes emoji, fonts, and screenshot templates to assist with augmentation.
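
As a quick illustration, below is a minimal sketch of AugLy's image API, loosely following the examples in the AugLy README; the file paths are placeholders.

import augly.image as imaugs

# Placeholder paths; point these at a real image and output location.
input_path = "input.png"
output_path = "augmented.png"

# Functional API: apply one augmentation and save the resulting PIL image.
aug_image = imaugs.overlay_emoji(
    input_path, output_path=output_path, opacity=1.0, emoji_size=0.15
)

# Class-based API: compose several augmentations, choosing one overlay at random.
transforms = imaugs.Compose(
    [
        imaugs.Blur(),
        imaugs.ColorJitter(saturation_factor=1.5),
        imaugs.OneOf(
            [imaugs.OverlayOntoScreenshot(), imaugs.OverlayEmoji(), imaugs.OverlayText()]
        ),
    ]
)
aug_image = transforms(aug_image)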

Vision Transformers (ViT) have been shown to attain highly competitive performance for a wide range of vision applications, such as image classification, object detection and semantic image segmentation.

… empirical study in order to better understand the interplay between the amount of training data, AugReg, model size and compute budget.

… find that the combination of increased compute and AugReg can yield models with the same performance as models trained on an order of magnitude more training data: we train ViT models of various sizes on the public ImageNet-21k dataset which either match or outperform their counterparts trained on the larger, but not publicly available JFT-300M dataset.
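
In the paper, "AugReg" is shorthand for combining data augmentation (such as RandAugment and Mixup) with model regularization (dropout and stochastic depth). As a minimal sketch, the snippet below loads a ViT through the timm library linked above and builds an AugReg-style training transform; the model tag and augmentation settings are illustrative assumptions that may vary by timm version.

import timm
import torch
from timm.data import create_transform

# Load a pretrained ViT via timm (pytorch-image-models, linked above).
# Weight naming varies across timm versions; this tag is an assumption.
model = timm.create_model("vit_base_patch16_224", pretrained=True)
model.eval()

# An AugReg-style training pipeline: RandAugment via timm's transform factory
# (the policy string and magnitude are illustrative, not the paper's exact sweep).
train_transform = create_transform(
    input_size=224,
    is_training=True,
    auto_augment="rand-m9-mstd0.5",  # RandAugment, magnitude 9
)

# Sanity-check the model with a dummy batch shaped like a preprocessed image.
dummy = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # e.g. torch.Size([1, 1000])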

Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website

LinkedIn https://www.linkedin.com/in/morris-lee-47877b7b

Photo by Rostyslav Savchyn on Unsplash
