paper_authors: Alexandra Malyugina, Nantheera Anantrasirichai, David Bull
for: Improving image denoising performance while enhancing contrast and preserving texture information
methods: Proposes a novel denoising loss function that combines the structural (topological) information of an image with the spatial information conventionally used in deep learning tasks
results: Models trained on the BVI-Lowlight dataset show an improvement of up to 25% in the LPIPS metric, indicating that the proposed loss function helps neural networks learn real noise characteristics better and improves denoising quality.
Abstract
Despite extensive research conducted in the field of image denoising, many algorithms still heavily depend on supervised learning and their effectiveness primarily relies on the quality and diversity of training data. It is widely assumed that digital image distortions are caused by spatially invariant Additive White Gaussian Noise (AWGN). However, the analysis of real-world data suggests that this assumption is invalid. Therefore, this paper tackles image corruption by real noise, providing a framework to capture and utilise the underlying structural information of an image along with the spatial information conventionally used for deep learning tasks. We propose a novel denoising loss function that incorporates topological invariants and is informed by textural information extracted from the image wavelet domain. The effectiveness of this proposed method was evaluated by training state-of-the-art denoising models on the BVI-Lowlight dataset, which features a wide range of real noise distortions. Adding a topological term to common loss functions leads to a significant increase in the LPIPS (Learned Perceptual Image Patch Similarity) metric, with the improvement reaching up to 25\%. The results indicate that the proposed loss function enables neural networks to learn noise characteristics better. We demonstrate that they can consequently extract the topological features of noise-free images, resulting in enhanced contrast and preserved textural information.
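The paper's exact formulation is not reproduced on this page, but the minimal sketch below illustrates what a topology-informed loss term of this kind could look like: persistence diagrams are computed on the wavelet detail (texture) bands of the denoised output and the reference, compared via the bottleneck distance, and added to an L1 fidelity term. The function names, the lambda weighting, and the use of GUDHI cubical complexes with PyWavelets are assumptions for illustration, not the authors' implementation; a differentiable persistence layer would be needed to backpropagate through such a term during training.

```python
# Illustrative, non-differentiable sketch of a topology-aware texture loss.
# Assumes GUDHI (persistent homology) and PyWavelets are installed; this is
# NOT the paper's published formulation.
import numpy as np
import pywt
import gudhi


def finite_diagram(band, homology_dim=0):
    """Persistence diagram (finite bars only) of a 2D array via a cubical complex."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=band)
    cc.persistence()  # sublevel-set filtration of the pixel values
    diag = cc.persistence_intervals_in_dimension(homology_dim)
    return diag[np.isfinite(diag).all(axis=1)] if len(diag) else diag


def topological_texture_distance(output, target, wavelet="haar"):
    """Mean bottleneck distance between diagrams of the wavelet detail (texture) bands."""
    _, out_details = pywt.dwt2(output, wavelet)   # (horizontal, vertical, diagonal)
    _, tgt_details = pywt.dwt2(target, wavelet)
    dists = [
        gudhi.bottleneck_distance(finite_diagram(o), finite_diagram(t))
        for o, t in zip(out_details, tgt_details)
    ]
    return float(np.mean(dists))


def combined_loss(output, target, lam=0.1):
    """L1 pixel fidelity plus a weighted topological texture term (illustrative)."""
    l1 = float(np.mean(np.abs(output - target)))
    return l1 + lam * topological_texture_distance(output, target)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((64, 64))
    noisy = np.clip(clean + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)
    print(combined_loss(noisy, clean))
```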
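Since the reported gains are measured with LPIPS, the short sketch below shows how that metric is typically computed with the reference `lpips` package; the random tensors, the `net='alex'` backbone, and the image size are placeholders rather than the paper's actual evaluation setup.

```python
# Minimal LPIPS comparison between a denoised output and its clean reference.
# Assumes the `lpips` PyPI package; inputs are (N, 3, H, W) tensors in [-1, 1].
import torch
import lpips

metric = lpips.LPIPS(net='alex')  # perceptual similarity; lower means more similar

# Hypothetical stand-ins for a denoised image and its ground-truth reference.
denoised = torch.rand(1, 3, 64, 64) * 2 - 1
reference = torch.rand(1, 3, 64, 64) * 2 - 1

with torch.no_grad():
    score = metric(denoised, reference)
print(f"LPIPS: {score.item():.4f}")
```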