results: The method was evaluated on extensive clinical datasets and achieved higher Dice scores than previous CNN-based and transformer-based models. Moreover, it produces segmentation shapes that more closely resemble human annotations, avoiding common issues seen in other models, such as holes or fragmentation.
Abstract
Cardiac Magnetic Resonance imaging (CMR) is the gold standard for assessing cardiac function. Segmenting the left ventricle (LV), right ventricle (RV), and LV myocardium (MYO) in CMR images is crucial but time-consuming. Deep learning-based segmentation methods have emerged as effective tools for automating this process. However, CMR images present additional challenges due to irregular and varying heart shapes, particularly in basal and apical slices. In this study, we propose a classifier-guided two-stage network with an all-slice fusion transformer to enhance CMR segmentation accuracy, particularly in basal and apical slices. Our method was evaluated on extensive clinical datasets and demonstrated better performance in terms of Dice score compared to previous CNN-based and transformer-based models. Moreover, our method produces visually appealing segmentation shapes resembling human annotations and avoids common issues, such as holes or fragments, that appear in other models' segmentations.
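The paper does not include code; as a toy sketch of the classifier-guided two-stage control flow it describes, the pipeline below uses simple intensity thresholds as stand-ins for the learned stage-1 classifier and stage-2 segmenter (all function names and thresholds are assumptions, not the authors' implementation):

```python
import numpy as np

def classify_slice(slc, tau=0.2):
    """Stage 1 (toy stand-in): flag slices likely to contain the heart.
    The paper uses a learned classifier; a mean-intensity threshold
    here merely illustrates the gating step."""
    return slc.mean() > tau

def segment_slice(slc, level=0.5):
    """Stage 2 (toy stand-in): produce a binary mask for a flagged slice.
    The paper uses an all-slice fusion transformer at this stage."""
    return (slc > level).astype(np.uint8)

def two_stage_segment(volume):
    """Classifier-guided pipeline: only slices passed by stage 1 are
    segmented; the rest receive empty masks, which suppresses spurious
    fragments in basal and apical slices."""
    masks = np.zeros(volume.shape, np.uint8)
    for i, slc in enumerate(volume):
        if classify_slice(slc):
            masks[i] = segment_slice(slc)
    return masks
```

The gating stage is what distinguishes this design from a plain per-slice segmenter: slices the classifier rejects can never contribute false-positive fragments.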
Online Targetless Radar-Camera Extrinsic Calibration Based on the Common Features of Radar and Camera
results: Our experimental results show that the proposed method achieves high accuracy and robustness.
Abstract
Sensor fusion is essential for autonomous driving and autonomous robots, and radar-camera fusion systems have gained popularity due to their complementary sensing capabilities. However, accurate calibration between these two sensors is crucial to ensure effective fusion and improve overall system performance. Calibration involves intrinsic and extrinsic calibration, with the latter being particularly important for achieving accurate sensor fusion. Unfortunately, many target-based calibration methods require complex operating procedures and well-designed experimental conditions, posing challenges for researchers attempting to reproduce the results. To address this issue, we introduce a novel approach that leverages deep learning to extract a common feature from raw radar data (i.e., Range-Doppler-Angle data) and camera images. Instead of explicitly representing these common features, our method implicitly utilizes them to match identical objects across both data sources. Specifically, the extracted common feature serves as an example to demonstrate an online targetless calibration method between the radar and camera systems. The extrinsic transformation matrix is estimated through this feature-based approach. To enhance the accuracy and robustness of the calibration, we apply RANSAC and the Levenberg-Marquardt (LM) nonlinear optimization algorithm to derive the matrix. Our real-world experiments demonstrate the effectiveness and accuracy of the proposed method.
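The abstract names RANSAC followed by Levenberg-Marquardt refinement for estimating the extrinsic transformation from matched radar-camera features. As an illustrative sketch only (the 2-D ground-plane SE(2) setup, thresholds, and all function names are assumptions, not the paper's method), that robust-fit-then-refine step might look like:

```python
import numpy as np
from scipy.optimize import least_squares

def se2_apply(params, pts):
    """Apply an SE(2) transform [theta, tx, ty] to points of shape (..., 2)."""
    theta, tx, ty = params
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return pts @ R.T + np.array([tx, ty])

def residuals(params, radar_pts, cam_pts):
    """Per-coordinate alignment error between transformed radar and camera points."""
    return (se2_apply(params, radar_pts) - cam_pts).ravel()

def calibrate_ransac_lm(radar_pts, cam_pts, iters=200, thresh=0.3, seed=0):
    """RANSAC over minimal 2-point fits, then LM refinement on the inlier set."""
    rng = np.random.default_rng(seed)
    n = len(radar_pts)
    best_inliers = np.zeros(n, bool)
    best_params = np.zeros(3)
    for _ in range(iters):
        i, j = rng.choice(n, size=2, replace=False)
        # Closed-form minimal fit: rotation from the segment between the pair,
        # translation from aligning the first sampled point.
        d_r = radar_pts[j] - radar_pts[i]
        d_c = cam_pts[j] - cam_pts[i]
        theta = np.arctan2(d_c[1], d_c[0]) - np.arctan2(d_r[1], d_r[0])
        theta = (theta + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
        t = cam_pts[i] - se2_apply([theta, 0.0, 0.0], radar_pts[i])
        params = np.array([theta, t[0], t[1]])
        err = np.linalg.norm(se2_apply(params, radar_pts) - cam_pts, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_params = inliers, params
    # Levenberg-Marquardt nonlinear refinement on the RANSAC inlier set.
    refined = least_squares(residuals, best_params, method="lm",
                            args=(radar_pts[best_inliers], cam_pts[best_inliers]))
    return refined.x, best_inliers
```

RANSAC discards mismatched object pairs before refinement, so a handful of wrong radar-camera associations does not corrupt the LM estimate.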