Segment scene using vision-language models for diverse semantic knowledge with SemiVL


SemiVL: Semi-Supervised Semantic Segmentation with Vision-Language Guidance
arXiv paper abstract https://arxiv.org/abs/2311.16241
arXiv PDF paper https://arxiv.org/pdf/2311.16241.pdf
GitHub https://github.com/google-research/semivl

In semi-supervised semantic segmentation, a model is trained with a limited number of labeled images along with a large corpus of unlabeled images to reduce the high annotation effort.
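To make this setup concrete, here is a minimal sketch of the kind of teacher-student pseudo-labeling step commonly used in semi-supervised segmentation. The function name, the EMA teacher, and the 0.95 confidence threshold are illustrative assumptions, not SemiVL's exact training recipe.

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(student, ema_teacher, x_labeled, y_labeled,
                         x_unlabeled, threshold=0.95):
    """One step: supervised loss on labeled images plus a
    pseudo-label consistency loss on unlabeled images."""
    # Supervised cross-entropy on the small labeled set.
    logits_l = student(x_labeled)                # [B, C, H, W]
    loss_sup = F.cross_entropy(logits_l, y_labeled, ignore_index=255)

    # A slowly updated (EMA) teacher predicts pseudo-labels.
    with torch.no_grad():
        probs_u = torch.softmax(ema_teacher(x_unlabeled), dim=1)
        conf, pseudo = probs_u.max(dim=1)        # [B, H, W]
        pseudo[conf < threshold] = 255           # drop low-confidence pixels

    # The student is trained to match the confident pseudo-labels.
    logits_u = student(x_unlabeled)
    loss_unsup = F.cross_entropy(logits_u, pseudo, ignore_index=255)
    return loss_sup + loss_unsup
```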

While previous methods can learn good segmentation boundaries, they are prone to confusing classes with a similar visual appearance because of the limited supervision.

Vision-language models (VLMs), on the other hand, learn diverse semantic knowledge from image-caption datasets, but they produce noisy segmentation masks because they are trained only with image-level supervision.

In SemiVL, the authors propose integrating rich priors from VLM pre-training into semi-supervised semantic segmentation to learn better semantic decision boundaries.

To adapt the VLM from global to local reasoning, the authors introduce a spatial fine-tuning strategy for label-efficient learning and design a language-guided decoder to jointly reason over vision and language.
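To illustrate what joint vision-language reasoning can look like, the sketch below scores dense vision features against text embeddings of the class names to obtain per-pixel class logits, in the style of CLIP. The function, tensor shapes, and temperature are assumptions for illustration, not SemiVL's actual decoder.

```python
import torch
import torch.nn.functional as F

def language_guided_logits(pixel_feats, text_embeds, temperature=0.07):
    """Score every pixel against every class-name text embedding.

    pixel_feats: [B, D, H, W] dense features from the vision encoder.
    text_embeds: [C, D] text embeddings of the C class names.
    Returns per-pixel class logits of shape [B, C, H, W].
    """
    # L2-normalize both modalities so dot products are cosine similarities.
    pixel_feats = F.normalize(pixel_feats, dim=1)
    text_embeds = F.normalize(text_embeds, dim=1)

    # Cosine similarity between each pixel feature and each class embedding.
    logits = torch.einsum("bdhw,cd->bchw", pixel_feats, text_embeds)
    return logits / temperature
```

Upsampling these logits to the input resolution and taking a per-pixel argmax yields a segmentation map; coarse dense features from image-level training are one reason such maps come out noisy and benefit from spatial fine-tuning.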

In evaluations, SemiVL significantly outperforms previous semi-supervised methods.

Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website

LinkedIn https://www.linkedin.com/in/morris-lee-47877b7b

Photo by Katrin Hauf on Unsplash

--

Written by AI News Clips by Morris Lee: News to help your R&D

A computer vision consultant in artificial intelligence and related high-tech fields with 37+ years of experience. An innovator with 66+ patents, ready to help a firm's R&D.
