Context-Aware Residual Network with Promotion Gates for Single Image Super-Resolution

Abstract

Deep learning models have achieved significant success in various vision-based applications. However, directly applying deep architectures to single image super-resolution (SISR) yields poor visual results, such as blurry patches and loss of detail, primarily because low-frequency information is treated ambiguously across different patches and channels. To address this issue, we propose a novel context-aware deep residual network with promotion gates, named G-CASR, for SISR. The G-CASR network consists of a sequence of G-CASR modules designed to transform low-resolution features into highly informative features. Each module incorporates a dual-attention residual block (DRB) that captures rich and varying context information through spatial and channel attention. A promotion gate (PG) is applied in each module to analyze the inherent characteristics of the input data, enhancing contributive information while suppressing irrelevant data. Experiments on five public datasets (Set5, Set14, B100, Urban100, and Manga109) show that G-CASR outperforms recent methods such as SRCNN, VDSR, LapSRN, and EDSR, with an average improvement of 1.112 dB in PSNR and 0.0255 in SSIM. Additionally, the G-CASR model requires only about 25% of the memory cost of EDSR.
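To make the module structure concrete, the following is a minimal NumPy sketch of the gating pattern the abstract describes: channel attention, spatial attention, and a promotion gate that scales the residual branch. The real blocks use learned convolutions and trained gate parameters; here the statistics-based gates (`sigmoid` of pooled activations) are purely illustrative stand-ins, and all function names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    # x: (C, H, W); squeeze each channel to a scalar statistic,
    # then rescale that channel by a sigmoid gate
    w = sigmoid(x.mean(axis=(1, 2)))          # (C,)
    return x * w[:, None, None]

def spatial_attention(x):
    # gate each spatial position by its cross-channel mean response
    m = sigmoid(x.mean(axis=0))               # (H, W)
    return x * m[None, :, :]

def promotion_gate(x, residual):
    # per-channel gate (a fixed stand-in for a learned gate) that
    # scales the residual branch before adding it back to the input
    g = sigmoid(residual.mean(axis=(1, 2)))   # (C,)
    return x + residual * g[:, None, None]

def g_casr_module(x):
    # dual-attention residual block followed by the promotion gate
    r = spatial_attention(channel_attention(x))
    return promotion_gate(x, r)
```

The output keeps the input's shape, so modules of this form can be stacked in sequence, as in the full G-CASR network.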

Publication
MultiMedia Modeling: 26th International Conference, MMM 2020
Yirui Wu