Parallel CNN model with feature fusion for enhanced single image super-resolution
Abstract
Single Image Super-Resolution (SISR) has seen significant advancements with the advent of deep learning techniques. However, many existing approaches face challenges such as high computational costs, poor generalization to unseen data, and dependence on large paired datasets. This paper proposes a novel, lightweight Parallel Super-Resolution Convolutional Neural Network (PSRCNN) designed to address these limitations. PSRCNN leverages parallel feature extraction, a transposed convolutional upsampling layer, and an efficient feature fusion strategy to balance performance and efficiency. Rigorous evaluations on established benchmark datasets demonstrate that PSRCNN achieves competitive performance, particularly in terms of the Structural Similarity Index (SSIM), a metric closely aligned with human visual perception. Moreover, the model offers a significant advantage in computational efficiency, requiring fewer parameters than many recent Super-Resolution (SR) methods. Ablation studies confirm that the parallel design improves reconstruction quality, and PSRCNN thus demonstrates the potential of parallel CNN architectures for SISR while remaining open to further enhancement.