eess.IV - 2023-09-18

Mixed Graph Signal Analysis of Joint Image Denoising / Interpolation

  • paper_url: http://arxiv.org/abs/2309.10114
  • repo_url: None
  • paper_authors: Niruhan Viswarupan, Gene Cheung, Fengbo Lan, Michael Brown
  • for: The paper studies how to jointly optimize denoising and interpolation of images from a mixed graph filtering perspective.
  • methods: The authors use a linear denoiser and a linear interpolator, and investigate in which cases the two operations should be executed independently in separate steps, and in which cases they should be combined and jointly optimized.
  • results: Experiments show that the proposed joint denoising / interpolation method noticeably outperforms separate approaches across the tested settings.
    Abstract A noise-corrupted image often requires interpolation. Given a linear denoiser and a linear interpolator, when should the operations be independently executed in separate steps, and when should they be combined and jointly optimized? We study joint denoising / interpolation of images from a mixed graph filtering perspective: we model denoising using an undirected graph, and interpolation using a directed graph. We first prove that, under mild conditions, a linear denoiser is a solution graph filter to a maximum a posteriori (MAP) problem regularized using an undirected graph smoothness prior, while a linear interpolator is a solution to a MAP problem regularized using a directed graph smoothness prior. Next, we study two variants of the joint interpolation / denoising problem: a graph-based denoiser followed by an interpolator has an optimal separable solution, while an interpolator followed by a denoiser has an optimal non-separable solution. Experiments show that our joint denoising / interpolation method outperformed separate approaches noticeably.
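The MAP formulation the paper proves equivalent to a linear denoiser has a well-known closed form: with an undirected-graph smoothness prior $x^\top L x$ (where $L$ is the graph Laplacian), the denoiser solves $\min_x \|y - x\|_2^2 + \mu\, x^\top L x$, giving $x^\star = (I + \mu L)^{-1} y$. A minimal NumPy sketch of this standard construction (the path graph, signal, and $\mu$ below are illustrative choices, not taken from the paper):

```python
import numpy as np

# 4-node path graph: adjacency W and combinatorial Laplacian L = D - W
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W

def graph_map_denoise(y, L, mu=0.5):
    """Solve min_x ||y - x||^2 + mu * x^T L x  ->  x* = (I + mu*L)^{-1} y."""
    n = len(y)
    return np.linalg.solve(np.eye(n) + mu * L, y)

y = np.array([1.0, 0.2, 0.9, 0.1])   # noisy observation on the graph nodes
x_hat = graph_map_denoise(y, L)

# The MAP solution is smoother with respect to the graph:
# the quadratic form x^T L x never increases (L is positive semidefinite).
assert x_hat @ L @ x_hat <= y @ L @ y
```

Because $L \succeq 0$, the filter $(I + \mu L)^{-1}$ attenuates high graph frequencies, which is exactly the "solution graph filter" interpretation of a linear denoiser in the abstract.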

MAD: Meta Adversarial Defense Benchmark

  • paper_url: http://arxiv.org/abs/2309.09776
  • repo_url: None
  • paper_authors: X. Peng, D. Zhou, G. Sun, J. Shi, L. Wu
  • For: The paper aims to address the limitations of existing adversarial training (AT) methods: high computational cost, low generalization ability, and the dilemma between the original model and the defense model.
  • Methods: The paper proposes a novel benchmark called meta adversarial defense (MAD), which consists of two MAD datasets and a MAD evaluation protocol. It also introduces a meta-learning based adversarial training (Meta-AT) algorithm as the baseline, which achieves high robustness to unseen adversarial attacks through few-shot learning.
  • Results: The Meta-AT algorithm outperforms state-of-the-art methods in terms of robustness to adversarial attacks, and the model after Meta-AT maintains a relatively high clean-samples classification accuracy (CCA).
    Abstract Adversarial training (AT) is a prominent technique employed by deep learning models to defend against adversarial attacks, and to some extent, enhance model robustness. However, there are three main drawbacks of the existing AT-based defense methods: expensive computational cost, low generalization ability, and the dilemma between the original model and the defense model. To this end, we propose a novel benchmark called meta adversarial defense (MAD). The MAD benchmark consists of two MAD datasets, along with a MAD evaluation protocol. The two large-scale MAD datasets were generated through experiments using 30 kinds of attacks on MNIST and CIFAR-10 datasets. In addition, we introduce a meta-learning based adversarial training (Meta-AT) algorithm as the baseline, which features high robustness to unseen adversarial attacks through few-shot learning. Experimental results demonstrate the effectiveness of our Meta-AT algorithm compared to the state-of-the-art methods. Furthermore, the model after Meta-AT maintains a relatively high clean-samples classification accuracy (CCA). It is worth noting that Meta-AT addresses all three aforementioned limitations, leading to substantial improvements. This benchmark ultimately achieved breakthroughs in investigating the transferability of adversarial defense methods to new attacks and the ability to learn from a limited number of adversarial examples. Our code and attacked datasets will be available at https://github.com/PXX1110/Meta_AT.
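Meta-AT itself is a few-shot meta-learning procedure, but the building block of any adversarial training pipeline, generating adversarial examples, can be illustrated with a one-step FGSM attack. This is a generic sketch of FGSM against a logistic-regression classifier, not the paper's Meta-AT algorithm; the model, weights, and step size below are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    """Binary cross-entropy for a label y in {-1, +1} under a linear model."""
    return -np.log(sigmoid(y * (w @ x)))

def fgsm(w, x, y, eps=0.1):
    """Fast Gradient Sign Method: perturb x by eps * sign(grad_x loss)."""
    grad_x = -y * sigmoid(-y * (w @ x)) * w   # d(loss)/dx for the logistic loss
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])   # toy "trained" weights (illustrative)
x = np.array([0.3, -0.4, 1.2])   # clean input
y = 1                            # true label

x_adv = fgsm(w, x, y)
# For a linear model the attack strictly shrinks the margin, so loss increases.
assert logistic_loss(w, x_adv, y) > logistic_loss(w, x, y)
```

Adversarial training would then minimize the loss on such perturbed inputs; Meta-AT additionally meta-learns across many attack types so the model adapts to an unseen attack from only a few examples.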