eess.IV - 2023-08-02

CMUNeXt: An Efficient Medical Image Segmentation Network based on Large Kernel and Skip Fusion

  • paper_url: http://arxiv.org/abs/2308.01239
  • repo_url: https://github.com/FengheTan9/Medical-Image-Segmentation-Benchmarks
  • paper_authors: Fenghe Tang, Jianrui Ding, Lingtao Wang, Chunping Ning, S. Kevin Zhou
  • for: CMUNeXt is designed for medical image segmentation, specifically for fast and accurate auxiliary diagnosis in real-world scenarios.
  • methods: CMUNeXt uses a U-shaped architecture with a large-kernel and inverted-bottleneck design, together with a Skip-Fusion block, to efficiently extract global context information and ensure ample feature fusion.
  • results: CMUNeXt outperforms existing heavyweight and lightweight medical image segmentation networks in segmentation performance on multiple medical image datasets, while offering faster inference, lighter weights, and reduced computational cost.
    Abstract The U-shaped architecture has emerged as a crucial paradigm in the design of medical image segmentation networks. However, due to the inherent local limitations of convolution, a fully convolutional segmentation network with U-shaped architecture struggles to effectively extract global context information, which is vital for the precise localization of lesions. While hybrid architectures combining CNNs and Transformers can address these issues, their application in real medical scenarios is limited due to the computational resource constraints imposed by the environment and edge devices. In addition, the convolutional inductive bias in lightweight networks adeptly fits the scarce medical data, which is lacking in the Transformer based network. In order to extract global context information while taking advantage of the inductive bias, we propose CMUNeXt, an efficient fully convolutional lightweight medical image segmentation network, which enables fast and accurate auxiliary diagnosis in real scene scenarios. CMUNeXt leverages large kernel and inverted bottleneck design to thoroughly mix distant spatial and location information, efficiently extracting global context information. We also introduce the Skip-Fusion block, designed to enable smooth skip-connections and ensure ample feature fusion. Experimental results on multiple medical image datasets demonstrate that CMUNeXt outperforms existing heavyweight and lightweight medical image segmentation networks in terms of segmentation performance, while offering a faster inference speed, lighter weights, and a reduced computational cost. The code is available at https://github.com/FengheTan9/CMUNeXt.
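A minimal PyTorch sketch of the kind of large-kernel, inverted-bottleneck block the abstract describes: a depthwise large-kernel convolution mixes distant spatial information, and pointwise expansion/projection layers mix channels. The kernel size, expansion ratio, and normalization choice here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LargeKernelInvertedBottleneck(nn.Module):
    """Sketch of a large-kernel inverted-bottleneck block (assumed 7x7 kernel, 4x expansion)."""
    def __init__(self, dim: int, kernel_size: int = 7, expand: int = 4):
        super().__init__()
        # Depthwise large-kernel conv mixes distant spatial/location information.
        self.dwconv = nn.Conv2d(dim, dim, kernel_size,
                                padding=kernel_size // 2, groups=dim)
        self.norm = nn.BatchNorm2d(dim)
        # Inverted bottleneck: expand channels, then project back.
        self.pw1 = nn.Conv2d(dim, dim * expand, kernel_size=1)
        self.act = nn.GELU()
        self.pw2 = nn.Conv2d(dim * expand, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.norm(self.dwconv(x))
        x = self.pw2(self.act(self.pw1(x)))
        return x + residual  # residual connection keeps the block lightweight to train
```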

High-efficient deep learning-based DTI reconstruction with flexible diffusion gradient encoding scheme

  • paper_url: http://arxiv.org/abs/2308.01173
  • repo_url: None
  • paper_authors: Zejun Wu, Jiechao Wang, Zunquan Chen, Qinqin Yang, Shuhui Cai, Zhong Chen, Congbo Cai
  • for: Developing and evaluating FlexDTI, a method for highly efficient diffusion tensor imaging (DTI) reconstruction with flexible diffusion gradient encoding schemes.
  • methods: Dynamic convolution kernels are used to embed the diffusion gradient direction information into the feature maps of the corresponding diffusion signals, and generalization to a flexible number of gradient directions is achieved by setting the maximum number of input channels of the network.
  • results: Compared with other methods, FlexDTI achieves high-quality diffusion tensor-derived variables even when the number and directions of the diffusion encoding gradients vary, improving peak signal-to-noise ratio (PSNR) on FA and MD by about 10 dB over the state-of-the-art deep learning method with flexible gradient schemes.
    Abstract Purpose: To develop and evaluate a novel dynamic-convolution-based method called FlexDTI for high-efficient diffusion tensor reconstruction with flexible diffusion encoding gradient schemes. Methods: FlexDTI was developed to achieve high-quality DTI parametric mapping with flexible number and directions of diffusion encoding gradients. The proposed method used dynamic convolution kernels to embed diffusion gradient direction information into feature maps of the corresponding diffusion signal. Besides, our method realized the generalization of a flexible number of diffusion gradient directions by setting the maximum number of input channels of the network. The network was trained and tested using data sets from the Human Connectome Project and a local hospital. Results from FlexDTI and other advanced tensor parameter estimation methods were compared. Results: Compared to other methods, FlexDTI successfully achieves high-quality diffusion tensor-derived variables even if the number and directions of diffusion encoding gradients are variable. It increases peak signal-to-noise ratio (PSNR) by about 10 dB on Fractional Anisotropy (FA) and Mean Diffusivity (MD), compared with the state-of-the-art deep learning method with flexible diffusion encoding gradient schemes. Conclusion: FlexDTI can well learn diffusion gradient direction information to achieve generalized DTI reconstruction with flexible diffusion gradient schemes. Both flexibility and reconstruction quality can be taken into account in this network.
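A hypothetical sketch of the dynamic-convolution idea: a small MLP maps each diffusion gradient direction to convolution weights, so direction information is embedded into the feature maps of the corresponding diffusion signal. The layer name, MLP size, and 1x1 kernel are assumptions for illustration; FlexDTI's actual kernel generator may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DirectionConditionedConv(nn.Module):
    """Hypothetical dynamic conv: kernel weights are generated from the gradient direction."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.in_ch, self.out_ch = in_ch, out_ch
        # Small MLP maps a unit gradient direction (gx, gy, gz) to 1x1 conv weights.
        self.kernel_gen = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, out_ch * in_ch),
        )

    def forward(self, feat: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
        # feat: (1, in_ch, H, W) features of one DWI volume; direction: (3,) unit vector.
        weight = self.kernel_gen(direction).view(self.out_ch, self.in_ch, 1, 1)
        return F.conv2d(feat, weight)
```

To handle a flexible number of gradients, inputs with fewer directions than the network's maximum channel count could simply be zero-padded along the channel axis, consistent with the abstract's "maximum number of input channels" description.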

Contrast-augmented Diffusion Model with Fine-grained Sequence Alignment for Markup-to-Image Generation

  • paper_url: http://arxiv.org/abs/2308.01147
  • repo_url: https://github.com/zgj77/fsacdm
  • paper_authors: Guojin Zhong, Jin Yuan, Pan Wang, Kailun Yang, Weili Guan, Zhiyong Li
  • for: Improving the performance of markup-to-image generation.
  • methods: The paper proposes a Contrast-augmented Diffusion Model with Fine-grained Sequence Alignment (FSA-CDM), which introduces contrastive positive/negative samples into the diffusion model. Technically, a fine-grained cross-modal alignment module explores the sequence similarity between the two modalities to learn robust feature representations.
  • results: Experiments on benchmark datasets from four different domains show that the proposed components yield significant gains, exceeding state-of-the-art performance by about 2%-12% in DTW.
    Abstract The recently rising markup-to-image generation poses greater challenges as compared to natural image generation, due to its low tolerance for errors as well as the complex sequence and context correlations between markup and rendered image. This paper proposes a novel model named "Contrast-augmented Diffusion Model with Fine-grained Sequence Alignment" (FSA-CDM), which introduces contrastive positive/negative samples into the diffusion model to boost performance for markup-to-image generation. Technically, we design a fine-grained cross-modal alignment module to well explore the sequence similarity between the two modalities for learning robust feature representations. To improve the generalization ability, we propose a contrast-augmented diffusion model to explicitly explore positive and negative samples by maximizing a novel contrastive variational objective, which is mathematically inferred to provide a tighter bound for the model's optimization. Moreover, the context-aware cross attention module is developed to capture the contextual information within markup language during the denoising process, yielding better noise prediction results. Extensive experiments are conducted on four benchmark datasets from different domains, and the experimental results demonstrate the effectiveness of the proposed components in FSA-CDM, significantly exceeding state-of-the-art performance by about 2%-12% DTW improvements. The code will be released at https://github.com/zgj77/FSACDM.
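The contrastive variational objective is paper-specific, but its positive/negative-sample mechanics can be illustrated with a generic InfoNCE-style loss; this stand-in is an assumption, not FSA-CDM's exact bound.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor: torch.Tensor, positive: torch.Tensor,
                     negatives: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Generic InfoNCE surrogate: pull the positive sample toward the anchor,
    push K negatives away. anchor/positive: (B, D); negatives: (B, K, D)."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)
    pos = (a * p).sum(-1, keepdim=True) / tau           # (B, 1) similarity to positive
    neg = torch.einsum("bd,bkd->bk", a, n) / tau        # (B, K) similarities to negatives
    logits = torch.cat([pos, neg], dim=1)
    target = torch.zeros(a.size(0), dtype=torch.long, device=a.device)  # positive is class 0
    return F.cross_entropy(logits, target)
```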

UCDFormer: Unsupervised Change Detection Using a Transformer-driven Image Translation

  • paper_url: http://arxiv.org/abs/2308.01146
  • repo_url: https://github.com/zhu-xlab/ucdformer
  • paper_authors: Qingsong Xu, Yilei Shi, Jianhua Guo, Chaojun Ouyang, Xiao Xiang Zhu
  • for: Unsupervised change detection under a domain-shift setting, addressing the seasonal and style differences in multi-temporal remote sensing images.
  • methods: A transformer-driven image translation model built from a light-weight transformer and a domain-specific affinity weight, together with a novel reliable pixel extraction module.
  • results: On various unsupervised change detection tasks, UCDFormer improves the Kappa coefficient by more than 12% over related methods, and it performs excellently on large-scale earthquake-triggered landslide detection. The code is available at \url{https://github.com/zhu-xlab/UCDFormer}.
    Abstract Change detection (CD) by comparing two bi-temporal images is a crucial task in remote sensing. With the advantages of requiring no cumbersome labeled change information, unsupervised CD has attracted extensive attention in the community. However, existing unsupervised CD approaches rarely consider the seasonal and style differences incurred by the illumination and atmospheric conditions in multi-temporal images. To this end, we propose a change detection with domain shift setting for remote sensing images. Furthermore, we present a novel unsupervised CD method using a light-weight transformer, called UCDFormer. Specifically, a transformer-driven image translation composed of a light-weight transformer and a domain-specific affinity weight is first proposed to mitigate domain shift between two images with real-time efficiency. After image translation, we can generate the difference map between the translated before-event image and the original after-event image. Then, a novel reliable pixel extraction module is proposed to select significantly changed/unchanged pixel positions by fusing the pseudo change maps of fuzzy c-means clustering and adaptive threshold. Finally, a binary change map is obtained based on these selected pixel pairs and a binary classifier. Experimental results on different unsupervised CD tasks with seasonal and style changes demonstrate the effectiveness of the proposed UCDFormer. For example, compared with several other related methods, UCDFormer improves performance on the Kappa coefficient by more than 12\%. In addition, UCDFormer achieves excellent performance for earthquake-induced landslide detection when considering large-scale applications. The code is available at \url{https://github.com/zhu-xlab/UCDFormer}
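A sketch of the reliable-pixel-extraction idea on a difference map: a minimal 2-cluster fuzzy c-means gives one pseudo change map, an adaptive threshold gives another, and only pixels where the two agree confidently are kept. The membership thresholds and the mean-plus-std threshold rule are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def fcm_2class(x: np.ndarray, m: float = 2.0, iters: int = 50) -> np.ndarray:
    """Minimal 2-cluster fuzzy c-means on a 1-D array; returns membership
    in the high-difference ('changed') cluster."""
    c = np.array([x.min(), x.max()], dtype=float)            # init cluster centers
    for _ in range(iters):
        d = np.abs(x[:, None] - c[None, :]) + 1e-8            # (N, 2) distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)                     # fuzzy memberships
        c = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)    # update centers
    return u[:, int(np.argmax(c))]

def reliable_pixels(diff: np.ndarray, hi: float = 0.9, lo: float = 0.1):
    """Fuse the FCM pseudo map with an adaptive (mean + std) threshold map;
    keep only pixels where both agree confidently."""
    u = fcm_2class(diff.ravel().astype(float)).reshape(diff.shape)
    thr_map = diff > (diff.mean() + diff.std())               # adaptive threshold map
    changed = (u > hi) & thr_map
    unchanged = (u < lo) & ~thr_map
    return changed, unchanged   # reliable pairs feed the final binary classifier
```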

Learning Fourier-Constrained Diffusion Bridges for MRI Reconstruction

  • paper_url: http://arxiv.org/abs/2308.01096
  • repo_url: https://github.com/icon-lab/fdb
  • paper_authors: Muhammad U. Mirza, Onat Dalmaz, Hasan A. Bedel, Gokberk Elmas, Yilmaz Korkmaz, Alper Gungor, Salman UH Dar, Tolga Çukur
  • for: Accelerated MRI reconstruction.
  • methods: A Fourier-constrained diffusion bridge (FDB) that transforms between undersampled and fully-sampled data, using random noise addition and random frequency removal as degradation operators.
  • results: Outperforms state-of-the-art reconstruction methods, including conventional diffusion priors, on brain MRI.
    Abstract Recent years have witnessed a surge in deep generative models for accelerated MRI reconstruction. Diffusion priors in particular have gained traction with their superior representational fidelity and diversity. Instead of the target transformation from undersampled to fully-sampled data, common diffusion priors are trained to learn a multi-step transformation from Gaussian noise onto fully-sampled data. During inference, data-fidelity projections are injected in between reverse diffusion steps to reach a compromise solution within the span of both the diffusion prior and the imaging operator. Unfortunately, suboptimal solutions can arise as the normality assumption of the diffusion prior causes divergence between learned and target transformations. To address this limitation, here we introduce the first diffusion bridge for accelerated MRI reconstruction. The proposed Fourier-constrained diffusion bridge (FDB) leverages a generalized process to transform between undersampled and fully-sampled data via random noise addition and random frequency removal as degradation operators. Unlike common diffusion priors that use an asymptotic endpoint based on Gaussian noise, FDB captures a transformation between finite endpoints where the initial endpoint is based on moderate degradation of fully-sampled data. Demonstrations on brain MRI indicate that FDB outperforms state-of-the-art reconstruction methods including conventional diffusion priors.
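The two degradation operators the abstract names, random noise addition and random frequency removal, can be sketched directly on a 2-D image; the noise level, frequency keep-fraction, and their schedules across bridge steps are placeholders here, not the paper's values.

```python
import torch

def fdb_degrade(x: torch.Tensor, noise_std: float, keep_frac: float) -> torch.Tensor:
    """One illustrative degradation step on a real-valued 2-D image tensor:
    add Gaussian noise, then zero out a random subset of k-space frequencies."""
    x = x + noise_std * torch.randn_like(x)                  # random noise addition
    k = torch.fft.fft2(x)                                    # to k-space
    mask = torch.rand(x.shape, device=x.device) < keep_frac  # random frequency mask
    return torch.fft.ifft2(k * mask.to(k.dtype)).real        # random frequency removal
```

This matches the bridge's "finite endpoints" framing: the initial endpoint is a moderately degraded fully-sampled image rather than pure Gaussian noise.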

Push the Boundary of SAM: A Pseudo-label Correction Framework for Medical Segmentation

  • paper_url: http://arxiv.org/abs/2308.00883
  • repo_url: None
  • paper_authors: Ziyi Huang, Hongshan Liu, Haofeng Zhang, Fuyong Xing, Andrew Laine, Elsa Angelini, Christine Hendon, Yu Gan
  • for: Improving zero-shot segmentation performance, particularly in medical image segmentation, where annotation is laborious and expertise-demanding.
  • methods: Building on the Segment Anything Model (SAM), the paper proposes a novel pseudo-label correction framework: a noise detection module distinguishes clean labels from noisy ones, an uncertainty-based self-correction module corrects the noisy labels, and the network is then retrained with the updated labels.
  • results: On X-ray and lung CT datasets, the proposed method improves segmentation accuracy and outperforms baseline methods in label correction.
    Abstract Segment anything model (SAM) has emerged as the leading approach for zero-shot learning in segmentation, offering the advantage of avoiding pixel-wise annotation. It is particularly appealing in medical image segmentation where annotation is laborious and expertise-demanding. However, the direct application of SAM often yields inferior results compared to conventional fully supervised segmentation networks. While using SAM generated pseudo label could also benefit the training of fully supervised segmentation, the performance is limited by the quality of pseudo labels. In this paper, we propose a novel label corruption to push the boundary of SAM-based segmentation. Our model utilizes a novel noise detection module to distinguish between noisy labels from clean labels. This enables us to correct the noisy labels using an uncertainty-based self-correction module, thereby enriching the clean training set. Finally, we retrain the network with updated labels to optimize its weights for future predictions. One key advantage of our model is its ability to train deep networks using SAM-generated pseudo labels without relying on a subset of expert-level annotations. We demonstrate the effectiveness of our proposed model on both X-ray and lung CT datasets, indicating its ability to improve segmentation accuracy and outperform baseline methods in label correction.
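A sketch of uncertainty-based self-correction for SAM pseudo labels, assuming per-pixel foreground probabilities from the segmentation network: confidently predicted pixels that disagree with the pseudo label are flagged as noisy and flipped. The entropy and confidence thresholds are illustrative; the paper's noise detection module is more involved than this.

```python
import torch

def correct_pseudo_labels(probs: torch.Tensor, pseudo: torch.Tensor,
                          ent_thresh: float = 0.3, conf_thresh: float = 0.9):
    """probs: (B, H, W) foreground probabilities; pseudo: (B, H, W) binary pseudo labels.
    Returns corrected labels and a mask of pixels judged noisy."""
    # Binary predictive entropy as a per-pixel uncertainty measure.
    entropy = -(probs * probs.clamp_min(1e-8).log()
                + (1 - probs) * (1 - probs).clamp_min(1e-8).log())
    pred = (probs > 0.5).float()
    confident = (entropy < ent_thresh) & ((probs > conf_thresh) | (probs < 1 - conf_thresh))
    noisy = confident & (pred != pseudo)            # confident disagreement => noisy label
    corrected = torch.where(noisy, pred, pseudo.float())
    return corrected, noisy                         # retrain on the corrected label set
```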

Decomposition Ascribed Synergistic Learning for Unified Image Restoration

  • paper_url: http://arxiv.org/abs/2308.00759
  • repo_url: None
  • paper_authors: Jinghao Zhang, Jie Huang, Man Zhou, Chongyi Li, Feng Zhao
  • for: Learning to restore multiple image degradations within a single model, which benefits real-world applications.
  • methods: Guided by an analysis based on singular value decomposition (SVD), the diverse degradations are divided into two groups, singular-vector dominated and singular-value dominated, so the relationships among restoration tasks can be exploited for synergistic learning and better restoration.
  • results: Experiments on a blend of five image restoration tasks (image deraining, dehazing, denoising, deblurring, and low-light enhancement) demonstrate the method's effectiveness.
    Abstract Learning to restore multiple image degradations within a single model is quite beneficial for real-world applications. Nevertheless, existing works typically concentrate on regarding each degradation independently, while their relationship has been less exploited to ensure the synergistic learning. To this end, we revisit the diverse degradations through the lens of singular value decomposition, with the observation that the decomposed singular vectors and singular values naturally undertake the different types of degradation information, dividing various restoration tasks into two groups,\ie, singular vector dominated and singular value dominated. The above analysis renders a more unified perspective to ascribe the diverse degradations, compared to previous task-level independent learning. The dedicated optimization of degraded singular vectors and singular values inherently utilizes the potential relationship among diverse restoration tasks, attributing to the Decomposition Ascribed Synergistic Learning (DASL). Specifically, DASL comprises two effective operators, namely, Singular VEctor Operator (SVEO) and Singular VAlue Operator (SVAO), to favor the decomposed optimization, which can be lightly integrated into existing convolutional image restoration backbone. Moreover, the congruous decomposition loss has been devised for auxiliary. Extensive experiments on blended five image restoration tasks demonstrate the effectiveness of our method, including image deraining, image dehazing, image denoising, image deblurring, and low-light image enhancement.
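The paper's SVD lens can be made concrete: decomposing an image separates spatial content carried by the singular vectors from global statistics carried by the singular values. The value-swapping demo below illustrates why, for example, low-light degradation is singular-value dominated; it is only an illustration of the observation, not the paper's SVEO/SVAO operators.

```python
import torch

def svd_split(img: torch.Tensor):
    """Decompose a (H, W) image: U, Vh carry spatial structure (vector-dominated
    degradations like rain streaks live here); S carries global energy/contrast
    (value-dominated degradations like low light live here)."""
    U, S, Vh = torch.linalg.svd(img, full_matrices=False)
    return U, S, Vh

def swap_singular_values(img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
    """Rebuild image A's singular vectors with image B's singular values
    (images assumed the same shape). For singular-value dominated degradations,
    this transfers the degradation from B onto A's content."""
    Ua, _, Vha = torch.linalg.svd(img_a, full_matrices=False)
    _, Sb, _ = torch.linalg.svd(img_b, full_matrices=False)
    return Ua @ torch.diag(Sb) @ Vha
```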

Phase Diverse Phase Retrieval for Microscopy: Comparison of Gaussian and Poisson Approaches

  • paper_url: http://arxiv.org/abs/2308.00734
  • repo_url: https://github.com/nikolajreiser/poissonphasediversity
  • paper_authors: Nikolaj Reiser, Min Guo, Hari Shroff, Patrick J. La Riviere
  • for: Widefield aberration correction in microscopy, comparing the performance of Gaussian- and Poisson-based phase diversity models.
  • methods: Multiple images are used to estimate the phase aberration at the pupil plane of the imaging system by solving an optimization problem, and the estimated aberration is then used for correction.
  • results: The Poisson algorithm matches or outperforms the Gaussian algorithm in a variety of situations and is more robust to spatially variant aberration and phase noise; the Gaussian algorithm performs better in low-light regimes where image noise is dominated by additive Gaussian noise. Re-acquisition with aberration correction is also compared against deconvolution with aberrated point spread functions.
    Abstract Phase diversity is a widefield aberration correction method that uses multiple images to estimate the phase aberration at the pupil plane of an imaging system by solving an optimization problem. This estimated aberration can then be used to deconvolve the aberrated image or to reacquire it with aberration corrections applied to a deformable mirror. The optimization problem for aberration estimation has been formulated for both Gaussian and Poisson noise models but the Poisson model has never been studied in microscopy nor compared with the Gaussian model. Here, the Gaussian- and Poisson-based estimation algorithms are implemented and compared for widefield microscopy in simulation. The Poisson algorithm is found to match or outperform the Gaussian algorithm in a variety of situations, and converges in a similar or decreased amount of time. The Gaussian algorithm does perform better in low-light regimes when image noise is dominated by additive Gaussian noise. The Poisson algorithm is also found to be more robust to the effects of spatially variant aberration and phase noise. Finally, the relative advantages of re-acquisition with aberration correction and deconvolution with aberrated point spread functions are compared.
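The two estimators differ in the data-fidelity term minimized over the pupil-phase coefficients. Minimal NumPy versions of the per-image negative log-likelihoods (up to data-only constants, with `model` the forward-predicted image for a candidate phase) are sketched below; the full algorithms wrap these in an optimizer over Zernike coefficients.

```python
import numpy as np

def gaussian_nll(data: np.ndarray, model: np.ndarray) -> float:
    """Least-squares objective implied by additive Gaussian noise."""
    return 0.5 * float(np.sum((data - model) ** 2))

def poisson_nll(data: np.ndarray, model: np.ndarray, eps: float = 1e-12) -> float:
    """Negative log-likelihood under Poisson (shot) noise, dropping data-only
    terms; the model intensities must be positive."""
    model = np.clip(model, eps, None)
    return float(np.sum(model - data * np.log(model)))
```

The comparison in the paper amounts to which of these objectives better matches the camera's actual noise: Poisson for shot-noise-limited images, Gaussian when additive read noise dominates at low light.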