eess.IV - 2023-11-17

Virtual trajectories for I-24 MOTION: data and tools

  • paper_url: http://arxiv.org/abs/2311.10888
  • repo_url: None
  • paper_authors: Junyi Ji, Yanbing Wang, Derek Gloudemans, Gergely Zachár, William Barbour, Daniel B. Work
  • for: This paper presents a virtual trajectory dataset derived from the I-24 MOTION INCEPTION v1.0.0 dataset, addressing the challenge of analyzing large but noisy trajectory datasets.
  • methods: The paper provides a Python implementation that converts the large raw dataset into virtual trajectories, making the data easier to analyze (a sketch of the underlying idea follows this entry).
  • results: Using the virtual trajectories, the authors assess speed variability and travel times across different lanes of the INCEPTION dataset; the dataset also opens up future research on traffic waves.
    Abstract This article introduces a new virtual trajectory dataset derived from the I-24 MOTION INCEPTION v1.0.0 dataset to address challenges in analyzing large but noisy trajectory datasets. Building on the concept of virtual trajectories, we provide a Python implementation to generate virtual trajectories from large raw datasets that are typically challenging to process due to their size. We demonstrate the practical utility of these trajectories in assessing speed variability and travel times across different lanes within the INCEPTION dataset. The virtual trajectory dataset opens future research on traffic waves and their impact on energy.
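
The following is a minimal, hypothetical sketch of the virtual-trajectory idea in Python: a space-time mean-speed field is estimated from raw trajectory samples, and a virtual probe vehicle is integrated through it. The function names, grid handling, and fallback speed are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: generate a virtual trajectory by integrating a probe
# vehicle through a space-time mean-speed field estimated from raw trajectories.
# Names and parameters are illustrative, not taken from the I-24 MOTION code.
import numpy as np

def mean_speed_field(points, x_edges, t_edges):
    """Bin raw (position, time, speed) samples into a space-time grid of mean speeds.

    points: array of shape (N, 3) with columns [position_m, time_s, speed_mps].
    """
    sums = np.zeros((len(x_edges) - 1, len(t_edges) - 1))
    counts = np.zeros_like(sums)
    xi = np.clip(np.digitize(points[:, 0], x_edges) - 1, 0, sums.shape[0] - 1)
    ti = np.clip(np.digitize(points[:, 1], t_edges) - 1, 0, sums.shape[1] - 1)
    np.add.at(sums, (xi, ti), points[:, 2])
    np.add.at(counts, (xi, ti), 1)
    return np.divide(sums, counts, out=np.full_like(sums, np.nan), where=counts > 0)

def virtual_trajectory(speed_field, x_edges, t_edges, t_start, x_start, dt=1.0):
    """Integrate a virtual probe's position forward in time through the speed field."""
    xs, ts = [x_start], [t_start]
    x, t = x_start, t_start
    while t < t_edges[-1] and x < x_edges[-1]:
        xi = min(max(np.searchsorted(x_edges, x) - 1, 0), speed_field.shape[0] - 1)
        ti = min(max(np.searchsorted(t_edges, t) - 1, 0), speed_field.shape[1] - 1)
        v = speed_field[xi, ti]
        if np.isnan(v):
            v = 25.0  # assumed free-flow fallback speed for cells with no data
        x, t = x + v * dt, t + dt
        xs.append(x)
        ts.append(t)
    return np.array(ts), np.array(xs)
```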

SDDPM: Speckle Denoising Diffusion Probabilistic Models

  • paper_url: http://arxiv.org/abs/2311.10868
  • repo_url: None
  • paper_authors: Soumee Guha, Scott T. Acton
  • for: This paper proposes a new image denoising algorithm for removing signal-dependent speckle noise.
  • methods: The algorithm uses diffusion models to remove signal-dependent multiplicative noise (a sketch of such a forward noising step follows this entry).
  • results: Experiments show that the algorithm performs well across different noise levels and is more robust and effective than the comparison models.
    Abstract Coherent imaging systems, such as medical ultrasound and synthetic aperture radar (SAR), are subject to corruption from speckle due to sub-resolution scatterers. Since speckle is multiplicative in nature, the constituent image regions become corrupted to different extents. The task of denoising such images requires algorithms specifically designed for removing signal-dependent noise. This paper proposes a novel image denoising algorithm for removing signal-dependent multiplicative noise with diffusion models, called Speckle Denoising Diffusion Probabilistic Models (SDDPM). We derive the mathematical formulations for the forward process, the reverse process, and the training objective. In the forward process, we apply multiplicative noise to a given image and prove that the forward process is Gaussian. We show that the reverse process is also Gaussian and the final training objective can be expressed as the Kullback Leibler (KL) divergence between the forward and reverse processes. As derived in the paper, the final denoising task is a single step process, thereby reducing the denoising time significantly. We have trained our model with natural land-use images and ultrasound images for different noise levels. Extensive experiments centered around two different applications show that SDDPM is robust and performs significantly better than the comparative models even when the images are severely corrupted.
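
As a rough illustration only (not the paper's exact Gaussian formulation or its KL-derived objective), the sketch below applies signal-dependent multiplicative noise in a forward step and trains a network to recover the clean image in a single pass, with MSE standing in for the derived objective; `model` and `sigma` are assumed placeholders.

```python
# Illustrative sketch of a signal-dependent multiplicative noising step and a
# single-step denoising training objective; not the SDDPM formulation itself.
import torch

def speckle_forward(x, sigma):
    """Apply multiplicative, signal-dependent noise: y = x * (1 + sigma * eps)."""
    eps = torch.randn_like(x)
    return x * (1.0 + sigma * eps)

def training_step(model, x_clean, sigma):
    """One training step: the network recovers x_clean from its speckled version
    in a single pass; MSE stands in here for the paper's KL-derived objective."""
    y = speckle_forward(x_clean, sigma)
    x_hat = model(y)
    return torch.nn.functional.mse_loss(x_hat, x_clean)
```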

Image-Domain Material Decomposition for Dual-energy CT using Unsupervised Learning with Data-fidelity Loss

  • paper_url: http://arxiv.org/abs/2311.10641
  • repo_url: None
  • paper_authors: Junbo Peng, Chih-Wei Chang, Huiqiao Xie, Richard L. J. Qiu, Justin Roper, Tonghe Wang, Beth Bradshaw, Xiangyang Tang, Xiaofeng Yang
  • for: This work aims to develop a framework that requires no paired training data for image-domain material decomposition in dual-energy CT.
  • methods: The work uses an unsupervised-learning framework with a data-fidelity (measurement-consistency) loss for image-domain material decomposition (a sketch of such a loss follows this entry).
  • results: The study obtains a reliable material decomposition method that resists noise amplification and evaluates it in image-domain experiments.
    Abstract Background: Dual-energy CT (DECT) and material decomposition play vital roles in quantitative medical imaging. However, the decomposition process may suffer from significant noise amplification, leading to severely degraded image signal-to-noise ratios (SNRs). While existing iterative algorithms perform noise suppression using different image priors, these heuristic image priors cannot accurately represent the features of the target image manifold. Although deep learning-based decomposition methods have been reported, these methods are in the supervised-learning framework requiring paired data for training, which is not readily available in clinical settings. Purpose: This work aims to develop an unsupervised-learning framework with data-measurement consistency for image-domain material decomposition in DECT.
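
A hedged sketch of what a data-fidelity (measurement-consistency) loss for unsupervised image-domain decomposition might look like is shown below; the two-material linear mixing model, the total-variation regularizer, and the weight are illustrative assumptions rather than the paper's formulation.

```python
# Hypothetical sketch of a data-fidelity loss for unsupervised image-domain
# material decomposition; mixing matrix and regularizer weight are placeholders.
import torch

def data_fidelity_loss(material_imgs, dect_imgs, mixing_matrix, tv_weight=1e-3):
    """material_imgs: (B, 2, H, W) decomposed basis-material images (e.g., water/iodine)
    dect_imgs: (B, 2, H, W) measured low-/high-energy CT images
    mixing_matrix: (2, 2) linear model mapping material images back to DECT images."""
    # Recompose DECT images from the material images and compare with the measurements.
    recomposed = torch.einsum('em,bmhw->behw', mixing_matrix, material_imgs)
    fidelity = torch.mean((recomposed - dect_imgs) ** 2)
    # Simple total-variation regularizer to suppress noise amplification.
    tv = (material_imgs[..., 1:, :] - material_imgs[..., :-1, :]).abs().mean() \
       + (material_imgs[..., :, 1:] - material_imgs[..., :, :-1]).abs().mean()
    return fidelity + tv_weight * tv
```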

MIFA: Metadata, Incentives, Formats, and Accessibility guidelines to improve the reuse of AI datasets for bioimage analysis

  • paper_url: http://arxiv.org/abs/2311.10443
  • repo_url: None
  • paper_authors: Teresa Zulueta-Coarasa, Florian Jug, Aastha Mathur, Josh Moore, Arrate Muñoz-Barrutia, Liviu Anita, Kola Babalola, Pete Bankhead, Perrine Gilloteaux, Nodar Gogoberidze, Martin Jones, Gerard J. Kleywegt, Paul Korir, Anna Kreshuk, Aybüke Küpcü Yoldaş, Luca Marconato, Kedar Narayan, Nils Norlin, Bugra Oezdemir, Jessica Riesterer, Norman Rzepka, Ugis Sarkans, Beatriz Serrano, Christian Tischer, Virginie Uhlmann, Vladimír Ulman, Matthew Hartley
  • for: The goal of this work is to advance AI methods for bioimage analysis and processing by improving access to high-quality annotated image data for training and developing new methods.
  • methods: Community experts were brought together in a workshop to develop guidelines covering data formats, metadata, data presentation and sharing, and incentives to generate new datasets (a sketch of a minimal metadata record follows this entry).
  • results: The authors expect the MIFA (Metadata, Incentives, Formats, and Accessibility) guidelines to accelerate the development of AI tools for bioimage analysis by improving access to, and reuse of, high-quality training data.
    Abstract Artificial Intelligence methods are powerful tools for biological image analysis and processing. High-quality annotated images are key to training and developing new methods, but access to such data is often hindered by the lack of standards for sharing datasets. We brought together community experts in a workshop to develop guidelines to improve the reuse of bioimages and annotations for AI applications. These include standards on data formats, metadata, data presentation and sharing, and incentives to generate new datasets. We are positive that the MIFA (Metadata, Incentives, Formats, and Accessibility) recommendations will accelerate the development of AI tools for bioimage analysis by facilitating access to high quality training data.
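
For illustration only, a minimal record of the kind of dataset metadata such guidelines encourage might look like the following; every field name and value here is a hypothetical placeholder, not the official MIFA schema.

```python
# Hypothetical minimal metadata record for an annotated bioimage dataset.
# Field names and values are illustrative placeholders, not the MIFA schema.
dataset_record = {
    "title": "Example annotated EM dataset",
    "license": "CC-BY-4.0",
    "imaging_modality": "electron microscopy",
    "image_format": "OME-Zarr",                 # open, chunked format for large images
    "annotation_type": "instance segmentation masks",
    "annotation_format": "OME-Zarr labels",
    "voxel_size_um": [0.008, 0.008, 0.008],
    "authors": ["Jane Doe"],                    # placeholder author
    "accession_url": "https://example.org/dataset/placeholder",  # placeholder URL
    "intended_ai_use": "training segmentation models",
}
```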