eess.IV - 2023-07-02

A multi-task learning framework for carotid plaque segmentation and classification from ultrasound images

  • paper_url: http://arxiv.org/abs/2307.00583
  • repo_url: None
  • paper_authors: Haitao Gan, Ran Zhou, Yanghan Ou, Furong Wang, Xinyao Cheng, Xiaoyan Wu, Aaron Fenster
  • for: To propose a multi-task learning framework for ultrasound carotid plaque segmentation and classification that exploits the correlation between the two tasks.
  • methods: A region-weight module (RWM) provides plaque regional prior knowledge to the classification task, and a sample-weight module (SWM) learns categorical sample weights for the segmentation task (a toy loss sketch follows this entry).
  • results: Experiments show that the proposed method significantly outperforms networks trained for a single task, achieving a classification accuracy of 85.82% and a Dice similarity coefficient of 84.92% for segmentation.
    Abstract Carotid plaque segmentation and classification play important roles in the treatment of atherosclerosis and assessment for risk of stroke. Although deep learning methods have been used for carotid plaque segmentation and classification, most focused on a single task and ignored the relationship between the segmentation and classification of carotid plaques. Therefore, we propose a multi-task learning framework for ultrasound carotid plaque segmentation and classification, which utilizes a region-weight module (RWM) and a sample-weight module (SWM) to exploit the correlation between these two tasks. The RWM provides a plaque regional prior knowledge to the classification task, while the SWM is designed to learn the categorical sample weight for the segmentation task. A total of 1270 2D ultrasound images of carotid plaques were collected from Zhongnan Hospital (Wuhan, China) for our experiments. The results of the experiments showed that the proposed method can significantly improve the performance compared to existing networks trained for a single task, with an accuracy of 85.82% for classification and a Dice similarity coefficient of 84.92% for segmentation. In the ablation study, the results demonstrated that both the designed RWM and SWM were beneficial in improving the network's performance. Therefore, we believe that the proposed method could be useful for carotid plaque analysis in clinical trials and practice.
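As a rough illustration of how the two tasks can share one objective, the sketch below combines a per-sample-weighted segmentation loss with a classification loss in PyTorch. The weighting scheme, the loss balance alpha, and all tensor shapes are simplifications of the paper's RWM/SWM design, not the authors' implementation.

```python
# Hedged sketch: a toy multi-task loss combining plaque segmentation and
# classification. The per-sample weighting stands in for an SWM-style module;
# a real RWM would additionally inject the plaque mask into the classifier.
import torch
import torch.nn.functional as F

def multi_task_loss(seg_logits, cls_logits, seg_target, cls_target, sample_weights, alpha=0.5):
    """seg_logits: (B, 1, H, W); cls_logits: (B, C); seg_target: (B, 1, H, W) binary masks;
    cls_target: (B,) labels; sample_weights: (B,) per-sample weights."""
    # Per-pixel segmentation loss, averaged per image and then weighted per sample.
    seg_loss = F.binary_cross_entropy_with_logits(
        seg_logits, seg_target.float(), reduction="none").mean(dim=(1, 2, 3))
    seg_loss = (sample_weights * seg_loss).mean()
    # Standard classification loss.
    cls_loss = F.cross_entropy(cls_logits, cls_target)
    return alpha * seg_loss + (1 - alpha) * cls_loss

# Example with random tensors
B, C, H, W = 4, 2, 64, 64
loss = multi_task_loss(torch.randn(B, 1, H, W), torch.randn(B, C),
                       (torch.rand(B, 1, H, W) > 0.5).float(),
                       torch.randint(0, C, (B,)), torch.ones(B))
```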

Enhancing Super-Resolution Networks through Realistic Thick-Slice CT Simulation

  • paper_url: http://arxiv.org/abs/2307.10182
  • repo_url: None
  • paper_authors: Zeyu Tang, Xiaodan Xing, Guang Yang
  • for: To develop and evaluate a simulation algorithm that generates thick-slice CT images closely resembling real acquisitions.
  • methods: The proposed algorithm is compared against other simulation methods using Peak Signal-to-Noise Ratio (PSNR) and Root Mean Square Error (RMSE), and the simulated images are used to train super-resolution models (a metric sketch follows this entry).
  • results: The proposed algorithm attains the highest PSNR and lowest RMSE among the compared simulation methods, producing images that more closely match real thick-slice CT; super-resolution models trained on the simulated data show improved performance.
    Abstract This study aims to develop and evaluate an innovative simulation algorithm for generating thick-slice CT images that closely resemble actual images in the AAPM-Mayo's 2016 Low Dose CT Grand Challenge dataset. The proposed method was evaluated using Peak Signal-to-Noise Ratio (PSNR) and Root Mean Square Error (RMSE) metrics, with the hypothesis that our simulation would produce images more congruent with their real counterparts. Our proposed method demonstrated substantial enhancements in terms of both PSNR and RMSE over other simulation methods. The highest PSNR values were obtained with the proposed method, yielding 49.7369 $\pm$ 2.5223 and 48.5801 $\pm$ 7.3271 for D45 and B30 reconstruction kernels, respectively. The proposed method also registered the lowest RMSE with values of 0.0068 $\pm$ 0.0020 and 0.0108 $\pm$ 0.0099 for D45 and B30, respectively, indicating a distribution more closely aligned with the authentic thick-slice image. Further validation of the proposed simulation algorithm was conducted using the TCIA LDCT-and-Projection-data dataset. The generated images were then leveraged to train four distinct super-resolution (SR) models, which were subsequently evaluated using the real thick-slice images from the 2016 Low Dose CT Grand Challenge dataset. When trained with data produced by our novel algorithm, all four SR models exhibited enhanced performance.
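The two evaluation metrics are standard; the snippet below shows how PSNR and RMSE between a simulated and a real thick-slice image could be computed with NumPy. The random arrays and their normalization to [0, 1] are assumptions; data loading and reconstruction kernels are omitted.

```python
# Hedged sketch: PSNR and RMSE between a simulated thick-slice CT image and
# its real counterpart, assuming both are normalized to [0, 1].
import numpy as np

def rmse(simulated, real):
    return float(np.sqrt(np.mean((simulated - real) ** 2)))

def psnr(simulated, real, data_range=1.0):
    err = rmse(simulated, real)
    return float("inf") if err == 0 else 20.0 * np.log10(data_range / err)

sim = np.random.rand(512, 512)   # placeholder for a simulated thick-slice image
ref = np.random.rand(512, 512)   # placeholder for the real thick-slice image
print(f"PSNR={psnr(sim, ref):.2f} dB, RMSE={rmse(sim, ref):.4f}")
```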

ARHNet: Adaptive Region Harmonization for Lesion-aware Augmentation to Improve Segmentation Performance

  • paper_url: http://arxiv.org/abs/2307.01220
  • repo_url: https://github.com/king-haw/arhnet
  • paper_authors: Jiayu Huo, Yang Liu, Xi Ouyang, Alejandro Granados, Sebastien Ourselin, Rachel Sparks
  • for: To support diagnosis and neurological monitoring by accurately segmenting brain lesions in MRI scans.
  • methods: A foreground harmonization framework (ARHNet) combines lesion-aware data augmentation with an Adaptive Region Harmonization (ARH) module that dynamically aligns foreground feature maps to the background with an attention mechanism (a simplified sketch follows this entry).
  • results: ARHNet improves segmentation performance with real and synthetic images and outperforms other image harmonization methods on the ATLAS 2.0 dataset. The code is publicly available on GitHub.
    Abstract Accurately segmenting brain lesions in MRI scans is critical for providing patients with prognoses and neurological monitoring. However, the performance of CNN-based segmentation methods is constrained by the limited training set size. Advanced data augmentation is an effective strategy to improve the model's robustness. However, they often introduce intensity disparities between foreground and background areas and boundary artifacts, which weakens the effectiveness of such strategies. In this paper, we propose a foreground harmonization framework (ARHNet) to tackle intensity disparities and make synthetic images look more realistic. In particular, we propose an Adaptive Region Harmonization (ARH) module to dynamically align foreground feature maps to the background with an attention mechanism. We demonstrate the efficacy of our method in improving the segmentation performance using real and synthetic images. Experimental results on the ATLAS 2.0 dataset show that ARHNet outperforms other methods for image harmonization tasks, and boosts the down-stream segmentation performance. Our code is publicly available at https://github.com/King-HAW/ARHNet.
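To make the idea of attention-gated foreground-background alignment concrete, the sketch below re-normalizes foreground feature statistics toward the background and blends the result through a learned attention gate. It is a simplified stand-in for the ARH module, with a hypothetical SimpleRegionHarmonizer class, not the released ARHNet code.

```python
# Hedged sketch (PyTorch): attention-gated harmonization of foreground features
# toward background statistics. Illustrative only; not the authors' ARH module.
import torch
import torch.nn as nn

class SimpleRegionHarmonizer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, feat, fg_mask):
        # fg_mask: (B, 1, H, W), 1 on the pasted lesion (foreground), 0 elsewhere.
        bg_mask, eps = 1.0 - fg_mask, 1e-5
        # Per-channel mean/variance inside each region.
        def stats(mask):
            area = mask.sum(dim=(2, 3), keepdim=True) + eps
            mean = (feat * mask).sum(dim=(2, 3), keepdim=True) / area
            var = ((feat - mean) ** 2 * mask).sum(dim=(2, 3), keepdim=True) / area
            return mean, var
        bg_mean, bg_var = stats(bg_mask)
        fg_mean, fg_var = stats(fg_mask)
        # Re-normalize foreground features to background statistics.
        aligned = (feat - fg_mean) / (fg_var + eps).sqrt() * (bg_var + eps).sqrt() + bg_mean
        gate = self.attn(feat)                       # attention decides how strongly to align
        harmonized = gate * aligned + (1 - gate) * feat
        return feat * (1 - fg_mask) + harmonized * fg_mask   # only the foreground is modified

feat = torch.randn(2, 16, 32, 32)
mask = (torch.rand(2, 1, 32, 32) > 0.8).float()
out = SimpleRegionHarmonizer(16)(feat, mask)
```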

Domain Transfer Through Image-to-Image Translation for Uncertainty-Aware Prostate Cancer Classification

  • paper_url: http://arxiv.org/abs/2307.00479
  • repo_url: None
  • paper_authors: Meng Zhou, Amoon Jamzad, Jason Izard, Alexandre Menard, Robert Siemens, Parvin Mousavi
  • for: To improve the accuracy and efficiency of prostate cancer (PCa) diagnosis by using deep learning models to support radiologists, particularly in data-constrained settings.
  • methods: Unpaired image-to-image translation converts 3.0T multi-parametric prostate MRIs to 1.5T to increase the amount of training data; evidential deep learning estimates model uncertainty and is paired with dataset filtering during training; and an Evidential Focal Loss, which combines the focal loss with evidential uncertainty, is introduced to train the model (a loss sketch follows this entry).
  • results: The proposed method improves the Area Under the ROC Curve (AUC) by more than 20% over previous work (98.4% vs. 76.2%). Providing prediction uncertainty may help radiologists focus on uncertain cases and expedite the diagnostic process.
    Abstract Prostate Cancer (PCa) is often diagnosed using High-resolution 3.0 Tesla(T) MRI, which has been widely established in clinics. However, there are still many medical centers that use 1.5T MRI units in the actual diagnostic process of PCa. In the past few years, deep learning-based models have been proven to be efficient on the PCa classification task and can be successfully used to support radiologists during the diagnostic process. However, training such models often requires a vast amount of data, and sometimes it is unobtainable in practice. Additionally, multi-source MRIs can pose challenges due to cross-domain distribution differences. In this paper, we have presented a novel approach for unpaired image-to-image translation of prostate mp-MRI for classifying clinically significant PCa, to be applied in data-constrained settings. First, we introduce domain transfer, a novel pipeline to translate unpaired 3.0T multi-parametric prostate MRIs to 1.5T, to increase the number of training data. Second, we estimate the uncertainty of our models through an evidential deep learning approach; and leverage the dataset filtering technique during the training process. Furthermore, we introduce a simple, yet efficient Evidential Focal Loss that incorporates the focal loss with evidential uncertainty to train our model. Our experiments demonstrate that the proposed method significantly improves the Area Under ROC Curve (AUC) by over 20% compared to the previous work (98.4% vs. 76.2%). We envision that providing prediction uncertainty to radiologists may help them focus more on uncertain cases and thus expedite the diagnostic process effectively. Our code is available at https://github.com/med-i-lab/DT_UE_PCa
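The sketch below shows one plausible way to combine a focal term with Dirichlet-based evidential uncertainty, which is the general idea behind an evidential focal loss. The softplus evidence mapping, the additive uncertainty penalty, and all parameter values are assumptions; the paper defines its own formulation.

```python
# Hedged sketch (PyTorch): a focal term on expected Dirichlet probabilities plus
# an evidential uncertainty penalty. Illustrative approximation only.
import torch
import torch.nn.functional as F

def evidential_focal_loss(logits, targets, gamma=2.0):
    """logits: (B, K) raw outputs mapped to non-negative evidence; targets: (B,) labels."""
    evidence = F.softplus(logits)               # non-negative evidence per class
    alpha = evidence + 1.0                      # Dirichlet parameters
    strength = alpha.sum(dim=1, keepdim=True)   # Dirichlet strength S
    probs = alpha / strength                    # expected class probabilities
    uncertainty = logits.size(1) / strength.squeeze(1)   # u = K / S
    p_t = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp_min(1e-8)
    focal = -((1.0 - p_t) ** gamma) * p_t.log()          # focal term on expected probabilities
    return (focal + uncertainty).mean(), uncertainty      # also return u for case triage

logits = torch.randn(8, 2)
targets = torch.randint(0, 2, (8,))
loss, u = evidential_focal_loss(logits, targets)
```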

Weighted Anisotropic-Isotropic Total Variation for Poisson Denoising

  • paper_url: http://arxiv.org/abs/2307.00439
  • repo_url: https://github.com/kbui1993/official_aitv_poisson_denoising
  • paper_authors: Kevin Bui, Yifei Lou, Fredrick Park, Jack Xin
  • for: To propose a Poisson denoising model based on the weighted anisotropic-isotropic total variation (AITV) that improves image quality and computational efficiency.
  • methods: The model is solved with an alternating direction method of multipliers combined with a proximal operator for an efficient implementation (an objective sketch follows this entry).
  • results: Numerical experiments show that the algorithm outperforms other Poisson denoising methods in terms of image quality and computational efficiency.
    Abstract Poisson noise commonly occurs in images captured by photon-limited imaging systems such as in astronomy and medicine. As the distribution of Poisson noise depends on the pixel intensity value, noise levels vary from pixels to pixels. Hence, denoising a Poisson-corrupted image while preserving important details can be challenging. In this paper, we propose a Poisson denoising model by incorporating the weighted anisotropic-isotropic total variation (AITV) as a regularization. We then develop an alternating direction method of multipliers with a combination of a proximal operator for an efficient implementation. Lastly, numerical experiments demonstrate that our algorithm outperforms other Poisson denoising methods in terms of image quality and computational efficiency.
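For reference, the AITV regularizer is the anisotropic (l1) total variation minus a weighted isotropic (l2,1) total variation of the image gradient. The sketch below evaluates the resulting Poisson denoising objective on a toy image; the regularization weight lam and alpha are hypothetical, and the ADMM/proximal solver itself is omitted.

```python
# Hedged sketch: F(u) = sum(u - f*log u) + lam * (||Du||_1 - alpha * ||Du||_{2,1}),
# the AITV-regularized Poisson denoising objective, on a toy image.
import numpy as np

def aitv(u, alpha=0.5):
    dx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences, replicated edge
    dy = np.diff(u, axis=0, append=u[-1:, :])
    aniso = np.abs(dx).sum() + np.abs(dy).sum()   # anisotropic TV (l1 of the gradient)
    iso = np.sqrt(dx ** 2 + dy ** 2).sum()        # isotropic TV (l2,1 of the gradient)
    return aniso - alpha * iso

def poisson_objective(u, f, lam=0.1, alpha=0.5):
    fidelity = np.sum(u - f * np.log(u + 1e-8))   # Poisson (Kullback-Leibler) data fidelity
    return fidelity + lam * aitv(u, alpha)

f = np.random.poisson(lam=20, size=(64, 64)).astype(float)   # noisy observation
u = f.copy()                                                  # trivial initial guess
print(poisson_objective(u, f))
```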

Sulcal Pattern Matching with the Wasserstein Distance

  • paper_url: http://arxiv.org/abs/2307.00385
  • repo_url: https://github.com/laplcebeltrami/sulcaltree
  • paper_authors: Zijian Chen, Soumya Das, Moo K. Chung
  • for: To present a unified computational framework for modeling sulcal patterns of the human brain obtained from magnetic resonance images.
  • methods: The Wasserstein distance is used to align sulcal patterns nonlinearly, and gradient descent algorithms are developed to estimate the deformation field (a matching sketch follows this entry).
  • results: The method identifies differences between male and female sulcal patterns.
    Abstract We present the unified computational framework for modeling the sulcal patterns of human brain obtained from the magnetic resonance images. The Wasserstein distance is used to align the sulcal patterns nonlinearly. These patterns are topologically different across subjects making the pattern matching a challenge. We work out the mathematical details and develop the gradient descent algorithms for estimating the deformation field. We further quantify the image registration performance. This method is applied in identifying the differences between male and female sulcal patterns.
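As a minimal illustration of transport-based matching, the sketch below computes a 2-Wasserstein-style distance between two equal-size toy point sets via optimal assignment (SciPy). The paper's framework additionally estimates a deformation field by gradient descent, which is not shown here; the point sets and their sizes are assumptions.

```python
# Hedged sketch: optimal-assignment matching of two toy sulcal point sets,
# giving a 2-Wasserstein distance under uniform weights. Illustrative only.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def wasserstein_match(points_a, points_b):
    """points_a, points_b: (N, 3) coordinates sampled along sulcal curves."""
    cost = cdist(points_a, points_b, metric="sqeuclidean")   # pairwise squared distances
    rows, cols = linear_sum_assignment(cost)                  # optimal one-to-one matching
    w2 = np.sqrt(cost[rows, cols].mean())                     # 2-Wasserstein under uniform weights
    return w2, cols

rng = np.random.default_rng(0)
a = rng.normal(size=(100, 3))
b = a + 0.1 * rng.normal(size=(100, 3))   # slightly deformed copy of the first pattern
distance, matching = wasserstein_match(a, b)
print(f"approximate W2 distance: {distance:.3f}")
```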