Segment an object from one example without training, using target-guided attention with PerSAM

Personalize Segment Anything Model with One Shot
arXiv paper abstract https://arxiv.org/abs/2305.03048
arXiv PDF paper https://arxiv.org/pdf/2305.03048.pdf
GitHub https://github.com/ZrrSkywalker/Personalize-SAM

Driven by large-data pre-training, the Segment Anything Model (SAM) has been demonstrated to be a powerful and promptable framework, revolutionizing segmentation models.

Despite its generality, customizing SAM for specific visual concepts without manual prompting remains underexplored, e.g., automatically segmenting your pet dog in different images.

… propose a training-free personalization approach for SAM, termed PerSAM.

Given only a single image with a reference mask, PerSAM first localizes the target concept by a location prior, and segments it within other images or videos via three techniques: target-guided attention, target-semantic prompting, and cascaded post-refinement.
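To make the first step concrete, below is a minimal sketch (not the authors' released code) of how a one-shot location prior could be obtained: the reference mask pools the reference image's features into a target embedding, and its cosine similarity with the new image's feature map suggests positive and negative point prompts for SAM. The function name `location_prior` and all tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def location_prior(ref_feats, ref_mask, test_feats):
    """Sketch of a one-shot location prior.

    ref_feats:  (C, H, W) encoder features of the reference image
    ref_mask:   (H, W) binary mask of the target in the reference image
    test_feats: (C, Ht, Wt) encoder features of the new image
    Returns an (Ht, Wt) similarity map; its maximum suggests a positive
    point prompt for SAM, its minimum a negative one.
    """
    C, H, W = ref_feats.shape
    Ht, Wt = test_feats.shape[1:]

    # Average the reference features inside the mask -> target embedding.
    mask = ref_mask.flatten().bool()                    # (H*W,)
    feats = ref_feats.flatten(1)                        # (C, H*W)
    target_embed = feats[:, mask].mean(dim=1)           # (C,)

    # Cosine similarity between the target embedding and every
    # spatial location of the new image's feature map.
    target_embed = F.normalize(target_embed, dim=0)
    test = F.normalize(test_feats.flatten(1), dim=0)    # (C, Ht*Wt)
    sim = (target_embed @ test).view(Ht, Wt)
    return sim

# Usage with placeholder tensors standing in for real encoder outputs.
ref_feats = torch.randn(256, 64, 64)
ref_mask = torch.zeros(64, 64); ref_mask[20:40, 20:40] = 1
test_feats = torch.randn(256, 64, 64)

sim = location_prior(ref_feats, ref_mask, test_feats)
pos_point = torch.nonzero(sim == sim.max())[0]   # most target-like location
neg_point = torch.nonzero(sim == sim.min())[0]   # least target-like location
```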

… further … present an efficient one-shot fine-tuning variant, PerSAM-F.

Freezing the entire SAM, … introduce two learnable weights for multi-scale masks, only training 2 parameters within 10 seconds for improved performance …
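A minimal sketch of the PerSAM-F idea described above: SAM stays frozen, and only two scalar weights that blend its multi-scale mask outputs are trained on the single reference pair. The placeholder tensors, the choice of which scales to blend, and the training loop are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class MaskScaleWeights(nn.Module):
    """Two learnable scalars that blend a frozen SAM's multi-scale mask logits."""
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.ones(2) / 2)   # the only 2 trainable parameters

    def forward(self, mask_logits):
        # mask_logits: (3, H, W) multi-scale outputs from the frozen SAM decoder;
        # blend two of the scales with learnable weights (illustrative choice).
        weights = torch.softmax(self.w, dim=0)
        return weights[0] * mask_logits[1] + weights[1] * mask_logits[2]

# One-shot fine-tuning: only the 2 scalars receive gradients.
scale_head = MaskScaleWeights()
optimizer = torch.optim.AdamW(scale_head.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

# Placeholders standing in for frozen-SAM decoder outputs and the reference mask.
mask_logits = torch.randn(3, 256, 256)
ref_mask = (torch.rand(256, 256) > 0.5).float()

for _ in range(100):                        # a few seconds of training
    fused = scale_head(mask_logits)
    loss = criterion(fused, ref_mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```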

Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website

LinkedIn https://www.linkedin.com/in/morris-lee-47877b7b

Photo by Lucas van Oort on Unsplash

--

AI News Clips by Morris Lee: News to help your R&D

A computer vision consultant in artificial intelligence and related high-tech technologies for 37+ years. An innovator with 66+ patents, ready to help a firm's R&D.