Get the background even with a moving camera, using transformers with DeepMCBM
A Deep Moving-camera Background Model
arXiv paper abstract https://arxiv.org/abs/2209.07923v1
arXiv PDF paper https://arxiv.org/pdf/2209.07923v1.pdf
GitHub https://github.com/BGU-CS-VIL/DeepMCBM
In video analysis, background models have many applications such as background/foreground separation, change detection, anomaly detection, tracking, and more.
… in the case of a Moving-camera Background Model (MCBM), the success has been far more modest, due to algorithmic and scalability challenges that arise from the camera motion.
… proposes a new method, called DeepMCBM, that eliminates all the aforementioned issues and achieves state-of-the-art results.
… propose a new strategy for joint alignment that lets us use a spatial transformer net without regularization or any form of specialized (and non-differentiable) initialization.
Coupled with an autoencoder conditioned on unwarped robust central moments (obtained from the joint alignment), this yields an end-to-end regularization-free MCBM that supports a broad range of camera motions and scales gracefully.
… demonstrate DeepMCBM’s utility on a variety of videos, including ones beyond the scope of other methods …
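The abstract describes two components: a spatial transformer net that jointly aligns all frames into a shared coordinate system, and an autoencoder conditioned on robust central moments that are computed after alignment and then unwarped back into each frame's view. Below is a minimal PyTorch sketch of that pipeline, assuming an affine spatial transformer and plain mean/variance as stand-ins for the paper's robust moments; every class name, layer size, and the training loss here is an illustrative assumption, not the DeepMCBM code (see the GitHub link above for the authors' implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AlignmentSTN(nn.Module):
    """Regress a 2x3 affine warp that maps each frame into a shared 'panoramic' frame."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 6)
        # Start at the identity transform, i.e. "no camera motion".
        nn.init.zeros_(self.head.weight)
        self.head.bias.data = torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])

    def forward(self, frames):                       # frames: (T, 3, H, W)
        return self.head(self.features(frames).flatten(1)).view(-1, 2, 3)


def warp(images, theta, size):
    """Differentiably resample images with the given affine parameters."""
    grid = F.affine_grid(theta, size, align_corners=False)
    return F.grid_sample(images, grid, align_corners=False)


def invert_affine(theta):
    """Invert a batch of 2x3 affine transforms (used to 'unwarp' the moments)."""
    A, t = theta[:, :, :2], theta[:, :, 2:]
    A_inv = torch.inverse(A)
    return torch.cat([A_inv, -A_inv @ t], dim=2)


class MomentConditionedAE(nn.Module):
    """Toy autoencoder that maps per-frame moment maps to a background image."""

    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(6, 32, 3, 2, 1), nn.ReLU(),
                                 nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid())

    def forward(self, moments):
        return self.dec(self.enc(moments))


if __name__ == "__main__":
    T, H, W = 8, 64, 64
    frames = torch.rand(T, 3, H, W)                  # a short synthetic clip

    stn, autoenc = AlignmentSTN(), MomentConditionedAE()

    # 1) Joint alignment: the STN warps every frame into one shared coordinate system.
    theta = stn(frames)
    aligned = warp(frames, theta, (T, 3, H, W))

    # 2) Central moments of the aligned stack (plain mean/variance here stand in for
    #    the paper's robust moments), then "unwarped" back into each frame's view.
    mean = aligned.mean(dim=0, keepdim=True)
    var = aligned.var(dim=0, unbiased=False, keepdim=True)
    pano_moments = torch.cat([mean, var], dim=1)     # (1, 6, H, W)
    frame_moments = warp(pano_moments.expand(T, -1, -1, -1).contiguous(),
                         invert_affine(theta), (T, 6, H, W))

    # 3) The autoencoder, conditioned on the unwarped moments, predicts a per-frame
    #    background; a simple reconstruction loss trains STN + autoencoder end to end.
    background = autoenc(frame_moments)
    loss = F.l1_loss(background, frames)
    loss.backward()
    print(background.shape, float(loss))
```

The identity initialization of the STN head above is just the usual default for spatial transformer regressors; the paper's claim is that its joint-alignment strategy removes the need for the specialized, non-differentiable initializations and alignment regularizers that earlier moving-camera pipelines relied on.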
Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website
LinkedIn https://www.linkedin.com/in/morris-lee-47877b7b