Get 3D object geometry and new views from 2 images by recovering a consistent scene with SparseFusion


SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction
arXiv paper abstract https://arxiv.org/abs/2212.00792
arXiv PDF paper https://arxiv.org/pdf/2212.00792.pdf
Project page https://sparsefusion.github.io

… propose SparseFusion, a sparse view 3D reconstruction approach that unifies recent advances in neural rendering and probabilistic image generation.

Existing approaches typically build on neural rendering with re-projected features but fail to generate unseen regions or handle uncertainty under large viewpoint changes.

Alternate methods treat this as a (probabilistic) 2D synthesis task, and while they can generate plausible 2D images, they do not infer a consistent underlying 3D.

… show that geometric consistency and generative inference can be complementary in a mode-seeking behavior.

By distilling a 3D consistent scene representation from a view-conditioned latent diffusion model, … are able to recover a plausible 3D representation whose renderings are both accurate and realistic.
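
To make the distillation idea concrete, below is a minimal, hypothetical sketch of a score-distillation-style loop: a differentiable scene representation is rendered from sampled viewpoints, and its renderings are nudged toward the modes of a frozen, view-conditioned diffusion model. `SceneMLP` and `ToyDenoiser` are toy stand-ins invented for illustration, not SparseFusion's actual networks, and the noise schedule and loss are heavily simplified.

```python
# Sketch only: assumes a toy scene network and a toy "view-conditioned
# denoiser"; neither reflects SparseFusion's real architecture.
import torch
import torch.nn as nn

class SceneMLP(nn.Module):
    """Toy scene representation: maps a camera pose embedding to an RGB image."""
    def __init__(self, pose_dim=6, image_size=32):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * image_size * image_size),
        )

    def forward(self, pose):
        out = self.net(pose)                      # (B, 3*H*W)
        return out.view(-1, 3, self.image_size, self.image_size)

class ToyDenoiser(nn.Module):
    """Stand-in for a frozen view-conditioned diffusion denoiser."""
    def __init__(self, image_size=32, pose_dim=6):
        super().__init__()
        self.pose_proj = nn.Linear(pose_dim, 3 * image_size * image_size)
        self.conv = nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, noisy_image, t, pose):
        # t is unused in this toy denoiser; a real one conditions on the timestep.
        cond = self.pose_proj(pose).view_as(noisy_image)
        return self.conv(noisy_image + cond)      # predicted noise

scene = SceneMLP()
denoiser = ToyDenoiser()
for p in denoiser.parameters():                   # the diffusion model stays frozen
    p.requires_grad_(False)

opt = torch.optim.Adam(scene.parameters(), lr=1e-3)
alphas = torch.linspace(0.999, 0.01, 1000)        # toy noise schedule

for step in range(100):
    pose = torch.randn(1, 6)                      # sample a target viewpoint
    rendered = scene(pose)                        # render from the scene representation

    t = torch.randint(0, 1000, (1,))
    a = alphas[t].view(1, 1, 1, 1)
    noise = torch.randn_like(rendered)
    noisy = a.sqrt() * rendered + (1 - a).sqrt() * noise

    with torch.no_grad():                         # query the frozen denoiser
        pred_noise = denoiser(noisy, t, pose)

    # Score-distillation-style update: push renderings toward what the
    # view-conditioned diffusion model considers likely for this viewpoint.
    grad = pred_noise - noise
    loss = (rendered * grad.detach()).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper's actual approach, the scene is a neural field rendered volumetrically and the denoiser is a view-conditioned latent diffusion model; the toy loop above only illustrates how distillation can pull a 3D-consistent representation toward the diffusion model's modes.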

… show that it outperforms existing methods, in both distortion and perception metrics, for sparse-view novel view synthesis.

Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website

LinkedIn https://www.linkedin.com/in/morris-lee-47877b7b

Photo by Nathan Dumlao on Unsplash

--

AI News Clips by Morris Lee: News to help your R&D

Written by AI News Clips by Morris Lee: News to help your R&D

A computer vision consultant in artificial intelligence and related high-tech technologies for 37+ years. An innovator with 66+ patents, ready to help a firm's R&D.
