results: The results demonstrate the validity of the proposed method, which correctly identifies changed and unchanged areas within the urban region, and confirm that GEE is an effective cloud-based solution for managing large quantities of satellite data.
Abstract
The aim of this work is to perform a multitemporal analysis using the Google Earth Engine (GEE) platform for the detection of changes in urban areas using optical data and specific machine learning (ML) algorithms. As a case study, the city of Cairo, Egypt, has been identified as one of the five most populous megacities in the world over the last decade. Classification and change detection analysis of the region of interest (ROI) have been carried out from July 2013 to July 2021. Results demonstrate the validity of the proposed method in identifying changed and unchanged urban areas over the selected period. Furthermore, this work aims to highlight the growing significance of GEE as an efficient cloud-based solution for managing large quantities of satellite data.
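The abstract does not include code, but the workflow it describes (a per-date supervised classification on GEE followed by a change mask) can be sketched with the Earth Engine Python API. Everything below is illustrative: the sensor (Landsat 8, since the 2013 epoch predates Sentinel-2), the ROI coordinates, the labelled points, and the random-forest settings are assumptions, not the paper's actual configuration.

```python
import ee

ee.Initialize()

# Rough bounding box around Cairo (illustrative coordinates only).
roi = ee.Geometry.Rectangle([31.0, 29.8, 31.7, 30.3])

def july_composite(year):
    # Median optical composite for July of the given year; Landsat 8 surface
    # reflectance is assumed here because the 2013 epoch predates Sentinel-2.
    return (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
            .filterBounds(roi)
            .filterDate(f'{year}-07-01', f'{year}-07-31')
            .median()
            .clip(roi))

bands = ['SR_B2', 'SR_B3', 'SR_B4', 'SR_B5', 'SR_B6', 'SR_B7']
img_2013 = july_composite(2013).select(bands)
img_2021 = july_composite(2021).select(bands)

# Hypothetical labelled points (urban = 1, non-urban = 0); the real study
# would use a much larger, manually curated training set.
training = ee.FeatureCollection([
    ee.Feature(ee.Geometry.Point([31.24, 30.05]), {'class': 1}),
    ee.Feature(ee.Geometry.Point([31.55, 30.25]), {'class': 0}),
])

samples = img_2013.sampleRegions(collection=training, properties=['class'], scale=30)
classifier = ee.Classifier.smileRandomForest(100).train(samples, 'class', bands)

class_2013 = img_2013.classify(classifier)
class_2021 = img_2021.classify(classifier)
change_map = class_2013.neq(class_2021)  # 1 where the land-cover label changed, 0 where it did not
```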
Integration of Sentinel-1 and Sentinel-2 data for Earth surface classification using Machine Learning algorithms implemented on Google Earth Engine
results: The results show that, in this case, radar and optical remote sensing provide complementary information that improves land-cover classification accuracy. The study also demonstrates the emerging role of the Google Earth Engine platform in handling large amounts of satellite data.
Abstract
In this study, Synthetic Aperture Radar (SAR) and optical data are both considered for Earth surface classification. Specifically, the integration of Sentinel-1 (S-1) and Sentinel-2 (S-2) data is carried out through supervised Machine Learning (ML) algorithms implemented on the Google Earth Engine (GEE) platform for the classification of a particular region of interest. Achieved results demonstrate how in this case radar and optical remote detection provide complementary information, benefiting surface cover classification and generally leading to increased mapping accuracy. In addition, this paper works in the direction of proving the emerging role of GEE as an effective cloud-based tool for handling large amounts of satellite data.
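As a rough illustration of the S-1/S-2 integration described above, the sketch below stacks radar backscatter and optical reflectance bands into one multi-band image that a supervised classifier (for example, a random forest trained as in the previous sketch) could use. The date range, cloud threshold, ROI and band choices are assumptions for demonstration only.

```python
import ee

ee.Initialize()

roi = ee.Geometry.Point([14.25, 40.85]).buffer(10000)  # illustrative ROI

# Optical features: cloud-screened Sentinel-2 surface-reflectance median composite.
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(roi)
      .filterDate('2021-06-01', '2021-09-01')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 10))
      .median()
      .select(['B2', 'B3', 'B4', 'B8']))

# Radar features: Sentinel-1 IW GRD dual-polarisation backscatter median composite.
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(roi)
      .filterDate('2021-06-01', '2021-09-01')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .select(['VV', 'VH'])
      .median())

# Stacked optical + radar image: the combined feature space is what lets the
# classifier exploit the complementary information mentioned in the abstract.
stack = s2.addBands(s1).clip(roi)
```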
PCMC-T1: Free-breathing myocardial T1 mapping with Physically-Constrained Motion Correction
results: Compared with baseline methods, PCMC-T1 demonstrates superior model fitting quality (R2: 0.955) and the highest clinical impact (clinical score: 3.93).
Abstract
T1 mapping is a quantitative magnetic resonance imaging (qMRI) technique that has emerged as a valuable tool in the diagnosis of diffuse myocardial diseases. However, prevailing approaches have relied heavily on breath-hold sequences to eliminate respiratory motion artifacts. This limitation hinders accessibility and effectiveness for patients who cannot tolerate breath-holding. Image registration can be used to enable free-breathing T1 mapping. Yet, inherent intensity differences between the different time points make the registration task challenging. We introduce PCMC-T1, a physically-constrained deep-learning model for motion correction in free-breathing T1 mapping. We incorporate the signal decay model into the network architecture to encourage physically-plausible deformations along the longitudinal relaxation axis. We compared PCMC-T1 to baseline deep-learning-based image registration approaches using a 5-fold experimental setup on a publicly available dataset of 210 patients. PCMC-T1 demonstrated superior model fitting quality (R2: 0.955) and achieved the highest clinical impact (clinical score: 3.93) compared to baseline methods (0.941, 0.946 and 3.34, 3.62 respectively). Anatomical alignment results were comparable (Dice score: 0.9835 vs. 0.984, 0.988). Our code and trained models are available at https://github.com/eyalhana/PCMC-T1.
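The "signal decay model" that PCMC-T1 builds into its architecture is, for inversion-recovery T1 mapping, commonly the three-parameter model S(TI) = A - B*exp(-TI/T1*). The sketch below fits that model for a single pixel with SciPy and computes an R² value analogous to the model-fitting metric quoted above; the inversion times and noise level are made-up numbers, and the paper's network enforces this constraint inside the registration rather than as a post-hoc fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def molli_signal(ti, a, b, t1_star):
    # Three-parameter inversion-recovery model: S(TI) = A - B * exp(-TI / T1*)
    return a - b * np.exp(-ti / t1_star)

# Illustrative inversion times (ms) and a simulated, motion-corrected pixel time series.
ti = np.array([120.0, 200.0, 280.0, 1000.0, 1080.0, 2000.0, 2080.0, 3000.0])
rng = np.random.default_rng(0)
signal = molli_signal(ti, a=350.0, b=650.0, t1_star=800.0) + rng.normal(0.0, 5.0, ti.size)

popt, _ = curve_fit(molli_signal, ti, signal, p0=[signal.max(), 2 * signal.max(), 1000.0])
a_fit, b_fit, t1_star_fit = popt

# Look-Locker correction to obtain T1 from the apparent T1*.
t1 = t1_star_fit * (b_fit / a_fit - 1.0)

# Goodness of fit, comparable in spirit to the R^2 metric reported above.
residual = signal - molli_signal(ti, *popt)
r2 = 1.0 - residual.var() / signal.var()
```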
Validation of apparent intra- and extra-myocellular lipid content indicator using spiral spectroscopic imaging at 3T
results: The study finds that the spiral MRSI method can quickly and accurately map apparent IMCL and EMCL content in muscle, in agreement with classical quantification results.
Abstract
This work presents a fast and simple method based on spiral MRSI for mapping apparent IMCL and EMCL content, a challenging task, and compares this indicator to classical quantification results in muscles of interest.
Phase Aberration Correction: A Deep Learning-Based Aberration to Aberration Approach
for: correction of phase aberration in ultrasound imaging
methods: deep learning-based approach that does not require ground truth, using an adaptive mixed loss function with both B-mode and RF data
results: enhanced performance and more efficient convergence compared to using a conventional loss function such as mean square error, as demonstrated on a publicly released dataset of 161,701 single plane-wave images (RF data)
Abstract
One of the primary sources of suboptimal image quality in ultrasound imaging is phase aberration. It is caused by spatial changes in sound speed over a heterogeneous medium, which disturbs the transmitted waves and prevents coherent summation of echo signals. Obtaining non-aberrated ground truths in real-world scenarios can be extremely challenging, if not impossible. This challenge hinders the performance of deep learning-based techniques trained on simulated data, owing to the domain shift between simulated and experimental data. Here, for the first time, we propose a deep learning-based method that does not require ground truth to correct the phase aberration problem, and as such, can be directly trained on real data. We train a network wherein both the input and target output are randomly aberrated radio frequency (RF) data. Moreover, we demonstrate that a conventional loss function such as mean square error is inadequate for training such a network to achieve optimal performance. Instead, we propose an adaptive mixed loss function that employs both B-mode and RF data, resulting in more efficient convergence and enhanced performance. Finally, we publicly release our dataset, including 161,701 single plane-wave images (RF data). This dataset serves to mitigate the data scarcity problem in the development of deep learning-based techniques for phase aberration correction.
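A rough NumPy/SciPy sketch of the idea behind the mixed loss is given below: one term compares the raw RF data, the other compares the corresponding B-mode images formed by envelope detection and log compression. The fixed weight alpha, the dynamic range, and the axis conventions are assumptions; the paper's actual loss adapts its weighting during training and lives inside the network's training loop.

```python
import numpy as np
from scipy.signal import hilbert

def bmode(rf, dynamic_range_db=60.0):
    # Envelope detection along the fast-time (depth) axis plus log compression,
    # the usual way a B-mode image is formed from RF data.
    envelope = np.abs(hilbert(rf, axis=0))
    envelope = envelope / (envelope.max() + 1e-12)
    img = 20.0 * np.log10(envelope + 1e-12)
    return np.clip(img, -dynamic_range_db, 0.0)

def mixed_loss(rf_pred, rf_target, alpha=0.5):
    # Weighted sum of an RF-domain term and a B-mode-domain term. The paper's
    # "adaptive" mixed loss changes this weighting over training; a fixed
    # alpha is used here purely for illustration.
    rf_term = np.mean((rf_pred - rf_target) ** 2)
    bmode_term = np.mean((bmode(rf_pred) - bmode(rf_target)) ** 2)
    return alpha * rf_term + (1.0 - alpha) * bmode_term
```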
Hey That’s Mine Imperceptible Watermarks are Preserved in Diffusion Generated Outputs
results: Statistical tests can determine whether a model was trained on watermarked data and can identify correlations between a given watermark and specific features of that data. These results show that our system can protect the intellectual property of content shared online.
Abstract
Generative models have seen an explosion in popularity with the release of huge generative Diffusion models like Midjourney and Stable Diffusion to the public. Because of this new ease of access, questions surrounding the automated collection of data and issues regarding content ownership have started to build. In this paper we present new work which aims to provide ways of protecting content when shared to the public. We show that a generative Diffusion model trained on data that has been imperceptibly watermarked will generate new images with these watermarks present. We further show that if a given watermark is correlated with a certain feature of the training data, the generated images will also have this correlation. Using statistical tests we show that we are able to determine whether a model has been trained on marked data, and what data was marked. As a result our system offers a solution to protect intellectual property when sharing content online.
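The abstract does not spell out which statistical test is used, but one simple way to test "was this model trained on marked data?" is to decode the watermark payload from a batch of generated images and compare the bit-match rate against chance with a binomial test. The sketch below does exactly that on simulated decoder outputs; the payload length, the bit-error rate, and the decoder itself are hypothetical.

```python
import numpy as np
from scipy.stats import binomtest

def watermark_presence_pvalue(decoded_bits, embedded_bits):
    # Under the null hypothesis (the model never saw watermarked data) each
    # decoded bit matches the embedded payload with probability 0.5.
    matches = int(np.sum(decoded_bits == embedded_bits))
    total = decoded_bits.size
    return binomtest(matches, total, p=0.5, alternative='greater').pvalue

# Simulated decoder outputs for 100 generated images carrying a 48-bit payload,
# with roughly 10% of bits recovered incorrectly (hypothetical numbers).
rng = np.random.default_rng(0)
payload = rng.integers(0, 2, size=(100, 48))
recovered = payload.copy()
recovered[rng.random(payload.shape) < 0.1] ^= 1

p_value = watermark_presence_pvalue(recovered, payload)
# A very small p-value supports the claim that the generator was trained on marked data.
```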
Switched auxiliary loss for robust training of transformer models for histopathological image segmentation
for: This study aims to improve the performance of transformer models on dense prediction tasks in medical image analysis and to investigate the use of a shifted auxiliary loss to overcome the diminishing gradient problem.
methods: We use the HuBMAP + HPA - Hacking the Human Body competition dataset and propose a shifted auxiliary loss to address the diminishing gradient problem encountered when training deep models.
results: Our model achieves a Dice score of 0.793 on the public dataset and 0.778 on the private dataset, a 1% improvement with the proposed method. These results support the use of transformer models for dense prediction tasks in medical image analysis.
Abstract
Functional Tissue Units (FTUs) are cell population neighborhoods local to a particular organ performing its main function. FTUs provide crucial cell-level information that helps the pathologist understand the disease affecting a particular organ. In our research, we have developed a model to segment multi-organ FTUs across 5 organs, namely the kidney, large intestine, lung, prostate and spleen, by utilizing the HuBMAP + HPA - Hacking the Human Body competition dataset. We propose adding a shifted auxiliary loss when training models such as transformers to overcome the diminishing gradient problem, which poses a challenge to the optimal training of deep models. Overall, our model achieved a Dice score of 0.793 on the public dataset and 0.778 on the private dataset, a 1% improvement with the use of the proposed method. The findings also bolster the use of transformer models for dense prediction tasks in the field of medical image analysis. The study assists in understanding the relationships between cell and tissue organization, thereby providing a useful medium for examining the impact of cellular functions on human health.
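Neither the abstract nor the summary defines the exact auxiliary-loss schedule, so the PyTorch sketch below should be read only as one plausible interpretation: auxiliary segmentation heads on intermediate decoder stages contribute a weighted cross-entropy term that is switched off after a chosen epoch, and the Dice score is the evaluation metric quoted above. The head list, weights, and switch epoch are assumptions.

```python
import torch
import torch.nn.functional as F

def dice_score(pred_mask, target_mask, eps=1e-6):
    # Dice overlap between binary masks of shape (N, H, W); this is the metric
    # behind the 0.793 / 0.778 scores quoted above.
    intersection = (pred_mask * target_mask).sum(dim=(1, 2))
    denom = pred_mask.sum(dim=(1, 2)) + target_mask.sum(dim=(1, 2))
    return ((2.0 * intersection + eps) / (denom + eps)).mean()

def segmentation_loss(main_logits, aux_logits_list, target,
                      epoch, switch_epoch=10, aux_weight=0.4):
    # Main supervised term plus auxiliary terms from intermediate decoder heads.
    # The auxiliary contribution is switched off after `switch_epoch` -- one
    # plausible reading of the switched/shifted auxiliary loss, not the paper's
    # exact formulation.
    loss = F.binary_cross_entropy_with_logits(main_logits, target)
    weight = aux_weight if epoch < switch_epoch else 0.0
    for aux_logits in aux_logits_list:
        loss = loss + weight * F.binary_cross_entropy_with_logits(aux_logits, target)
    return loss
```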
Debiasing Counterfactuals In the Presence of Spurious Correlations
paper_authors: Amar Kumar, Nima Fathi, Raghav Mehta, Brennan Nichyporuk, Jean-Pierre R. Falet, Sotirios Tsaftaris, Tal Arbel
for: This paper aims to improve the performance of deep learning models in medical imaging classification tasks by addressing the issue of spurious correlations in the training data.
methods: The proposed method integrates two techniques: (1) popular debiasing classifiers such as distributionally robust optimization (DRO), and (2) counterfactual image generation.
results: The proposed method is effective in learning generalizable markers across the population and ignoring spurious correlations. The novel metric, Spurious Correlation Latching Score (SCLS), is used to quantify the extent of classifier reliance on spurious correlations. Through comprehensive experiments on two public datasets with simulated and real visual artifacts, the method is shown to successfully ignore spurious correlations and focus on the underlying disease pathology.
Abstract
Deep learning models can perform well in complex medical imaging classification tasks, even when basing their conclusions on spurious correlations (i.e. confounders), should they be prevalent in the training dataset, rather than on the causal image markers of interest. This would thereby limit their ability to generalize across the population. Explainability based on counterfactual image generation can be used to expose the confounders but does not provide a strategy to mitigate the bias. In this work, we introduce the first end-to-end training framework that integrates both (i) popular debiasing classifiers (e.g. distributionally robust optimization (DRO)) to avoid latching onto the spurious correlations and (ii) counterfactual image generation to unveil generalizable imaging markers of relevance to the task. Additionally, we propose a novel metric, Spurious Correlation Latching Score (SCLS), to quantify the extent of the classifier reliance on the spurious correlation as exposed by the counterfactual images. Through comprehensive experiments on two public datasets (with the simulated and real visual artifacts), we demonstrate that the debiasing method: (i) learns generalizable markers across the population, and (ii) successfully ignores spurious correlations and focuses on the underlying disease pathology.
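Of the two ingredients above, DRO has a standard worst-group formulation that can be sketched compactly; the PyTorch snippet below shows that formulation and is not taken from the paper's code. The counterfactual-generation side and the SCLS metric are not reproduced here because the abstract does not define them precisely.

```python
import torch

def worst_group_loss(per_sample_loss, group_ids, num_groups):
    # Group-DRO style objective: average the per-sample losses within each
    # (spurious-attribute) group and optimise the worst group, which discourages
    # the classifier from latching onto correlations that help only some groups.
    group_losses = []
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():
            group_losses.append(per_sample_loss[mask].mean())
    return torch.stack(group_losses).max()

# Toy usage: four samples, two groups defined by a spurious attribute.
losses = torch.tensor([0.2, 0.3, 1.1, 0.9])
groups = torch.tensor([0, 0, 1, 1])
loss = worst_group_loss(losses, groups, num_groups=2)  # mean loss of the worse group (1.0)
```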
BundleSeg: A versatile, reliable and reproducible approach to white matter bundle segmentation
results: We show that BundleSeg achieves improved repeatability and reproducibility and is faster than state-of-the-art methods. The enhanced precision and reduced variability in extracting white matter connections provide a valuable tool for neuroscience research, increasing the sensitivity and specificity of tractography-based studies.
Abstract
This work presents BundleSeg, a reliable, reproducible, and fast method for extracting white matter pathways. The proposed method combines an iterative registration procedure with a recently developed precise streamline search algorithm that enables efficient segmentation of streamlines without the need for tractogram clustering or simplifying assumptions. We show that BundleSeg achieves improved repeatability and reproducibility than state-of-the-art segmentation methods, with significant speed improvements. The enhanced precision and reduced variability in extracting white matter connections offer a valuable tool for neuroinformatic studies, increasing the sensitivity and specificity of tractography-based studies of white matter pathways.
Pixel Adaptive Deep Unfolding Transformer for Hyperspectral Image Reconstruction
paper_authors: Miaoyu Li, Ying Fu, Ji Liu, Yulun Zhang
for: High-accuracy hyperspectral image (HSI) reconstruction
methods: Pixel Adaptive Deep Unfolding Transformer (PADUT), comprising a data module and a prior module, with a Non-local Spectral Transformer (NST) and the Fast Fourier Transform (FFT) introduced to improve stage interaction
results: Outperforms state-of-the-art HSI reconstruction methods on both simulated and real scenes
Abstract
Hyperspectral Image (HSI) reconstruction has made gratifying progress with the deep unfolding framework by formulating the problem into a data module and a prior module. Nevertheless, existing methods still face the problem of insufficient matching with HSI data. The issues lie in three aspects: 1) fixed gradient descent step in the data module while the degradation of HSI is agnostic in the pixel-level. 2) inadequate prior module for 3D HSI cube. 3) stage interaction ignoring the differences in features at different stages. To address these issues, in this work, we propose a Pixel Adaptive Deep Unfolding Transformer (PADUT) for HSI reconstruction. In the data module, a pixel adaptive descent step is employed to focus on pixel-level agnostic degradation. In the prior module, we introduce the Non-local Spectral Transformer (NST) to emphasize the 3D characteristics of HSI for recovering. Moreover, inspired by the diverse expression of features in different stages and depths, the stage interaction is improved by the Fast Fourier Transform (FFT). Experimental results on both simulated and real scenes exhibit the superior performance of our method compared to state-of-the-art HSI reconstruction methods. The code is released at: https://github.com/MyuLi/PADUT.
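The "pixel adaptive descent step" in the data module lends itself to a short equation. As a sketch consistent with the abstract (not PADUT's exact operator), a deep-unfolding data step with a learned per-pixel step-size map replaces the usual scalar step of gradient descent on the data-fidelity term:

```latex
% Generic deep-unfolding data-module update with a pixel-wise step size
% (a sketch; \Phi denotes the degradation/sensing operator and y the measurement):
x^{(k+1)} = x^{(k)} - \alpha_k \odot \Phi^{\top}\!\left(\Phi\, x^{(k)} - y\right),
\qquad \alpha_k \in \mathbb{R}_{>0}^{H \times W}
```

Here the element-wise product lets each pixel follow its own descent rate, addressing the pixel-level agnostic degradation mentioned in the abstract, before the prior module (the NST in PADUT) refines the estimate at each stage.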