Super-resolving images by using semantics to reconstruct details with IG-CFAT
IG-CFAT: An Improved GAN-Based Framework for Effectively Exploiting Transformers in Real-World Image Super-Resolution
arXiv paper abstract https://arxiv.org/abs/2406.13815
arXiv PDF paper https://arxiv.org/pdf/2406.13815
In the field of single image super-resolution (SISR), transformer-based models have demonstrated significant advancements.
Recently, the composite fusion attention transformer (CFAT) outperformed previous state-of-the-art (SOTA) models in classic image super-resolution.
This paper extends CFAT into an improved GAN-based model called IG-CFAT to effectively exploit the performance of transformers in real-world image super-resolution.
IG-CFAT incorporates a semantic-aware discriminator to reconstruct fine details more accurately, improving perceptual quality, and utilizes an adaptive degradation model to better simulate real-world degradations.
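A rough idea of what "semantic-aware" means in practice: the discriminator sees semantic features from a frozen pretrained network alongside the image, so its real/fake judgment can depend on what each region depicts. Below is a minimal PyTorch sketch under that assumption; the ResNet-18 backbone, the layer it taps, and the PatchGAN-style head are illustrative choices, not the paper's actual architecture.

```python
# Minimal sketch of a semantic-aware discriminator (illustrative, not the paper's design).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class SemanticAwareDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # Frozen semantic feature extractor (a stand-in for whatever the paper uses).
        backbone = resnet18(weights="IMAGENET1K_V1")
        self.features = nn.Sequential(*list(backbone.children())[:6])  # through layer2, stride 8
        for p in self.features.parameters():
            p.requires_grad = False

        # Small PatchGAN-style head over the image concatenated with semantic features.
        self.head = nn.Sequential(
            nn.Conv2d(3 + 128, 64, 3, stride=1, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 3, stride=1, padding=1),  # per-patch real/fake logits
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            sem = self.features(img)  # (N, 128, H/8, W/8) semantic features
        sem = F.interpolate(sem, size=img.shape[-2:], mode="bilinear", align_corners=False)
        return self.head(torch.cat([img, sem], dim=1))


# Example: score a batch of 128x128 RGB crops.
d = SemanticAwareDiscriminator()
logits = d(torch.rand(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 1, 64, 64])
```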
The methodology adds wavelet losses to the conventional loss functions of GAN-based super-resolution models to reconstruct high-frequency details more efficiently.
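As a concrete illustration of a wavelet loss on top of the usual GAN-SR objective, the sketch below compares the super-resolved output and the high-resolution target in Haar wavelet sub-bands and penalizes their L1 difference; the single-level Haar transform, band weights, and loss weight are assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a wavelet loss term (assumed single-level Haar decomposition).
import torch
import torch.nn.functional as F


def haar_decompose(x: torch.Tensor):
    """Single-level Haar transform of an (N, C, H, W) tensor with even H and W.

    Returns the low-frequency band and three high-frequency bands
    (horizontal, vertical, diagonal detail).
    """
    a = x[:, :, 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[:, :, 0::2, 1::2]  # top-right
    c = x[:, :, 1::2, 0::2]  # bottom-left
    d = x[:, :, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2
    lh = (-a - b + c + d) / 2
    hl = (-a + b - c + d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh


def wavelet_loss(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    """L1 distance between wavelet sub-bands of the SR output and the HR target.

    Weighting the high-frequency bands more heavily is one plausible way to push
    the generator toward sharper textures; the weights here are hypothetical.
    """
    bands_sr = haar_decompose(sr)
    bands_hr = haar_decompose(hr)
    weights = (0.5, 1.0, 1.0, 1.0)
    return sum(w * F.l1_loss(s, h) for w, s, h in zip(weights, bands_sr, bands_hr))


# Usage: add to the usual pixel/perceptual/adversarial terms, e.g.
# total = l_pix + l_percep + l_gan + 0.1 * wavelet_loss(sr, hr)
sr = torch.rand(2, 3, 128, 128)
hr = torch.rand(2, 3, 128, 128)
print(wavelet_loss(sr, hr))
```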
Empirical results show that IG-CFAT sets new benchmarks in real-world image super-resolution, outperforming SOTA models in both quantitative and qualitative metrics.
Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website