paper_authors: Aren Beagley, Hannah Richards, Joshua W. Giles
for: correction of partial volume effects in CT images
methods: new algorithm based on previous work, no pre-processing or user input required, applied directly to CT images
results: improved accuracy of surface strain predictions in experimental three-point bending tests compared to the original, uncorrected CT images
Abstract
Partial volume effects are present at the boundary between any two materials in a CT image due to the scanner's point spread function, finite voxel resolution, and, importantly, the discrepancy in radiodensity between the two materials. In this study, a new algorithm is developed and validated that builds on previously published work to enable the correction of partial volume effects at cortical bone boundaries. Unlike past methods, this algorithm requires no pre-processing or user input to achieve the correction, and the correction is applied directly to a set of CT images, which enables it to be used in existing computational modelling workflows. The algorithm was validated by performing experimental three-point bending tests on porcine fibula specimens and comparing the experimental results to finite element results for models created using either the original, uncorrected CT images or the partial-volume-corrected images. Results demonstrated that the models created using the partial-volume-corrected images improved the accuracy of the surface strain predictions. Given this initial validation, this algorithm is a viable method for overcoming the challenge of partial volume effects in CT images. Thus, future work should further validate the algorithm with human tissues and couple it with a range of different finite element model creation workflows to verify that it is robust and agnostic to the chosen material mapping strategy.
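The mechanism behind partial volume effects described above can be illustrated with a small synthetic example. This is not the paper's correction algorithm, only a sketch of how the scanner's point spread function produces intermediate voxel values at a material boundary; the HU values and the Gaussian PSF approximation are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Illustrative (not from the paper): a 1D line of voxels crossing a
# soft-tissue / cortical-bone boundary, using typical textbook HU values.
soft_tissue_hu, cortical_hu = 40.0, 1500.0
profile = np.full(40, soft_tissue_hu)
profile[20:] = cortical_hu  # ideal sharp boundary at voxel 20

# The scanner's point spread function blurs the edge; a Gaussian is a
# common approximation of a CT PSF.
blurred = gaussian_filter1d(profile, sigma=2.0)

# Voxels near the boundary now hold intermediate "partial volume"
# values that belong to neither material, which is what biases
# density-based material mapping at cortical bone surfaces.
print(round(blurred[19], 1), round(blurred[20], 1))
```

Voxels far from the edge keep their true material value; only the boundary voxels are corrupted, which is why a correction can be applied locally at cortical bone surfaces.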
Deep Learning Approach for Large-Scale, Real-Time Quantification of Green Fluorescent Protein-Labeled Biological Samples in Microreactors
paper_authors: Yuanyuan Wei, Sai Mu Dalike Abaxi, Nawaz Mehmood, Luoquan Li, Fuyang Qu, Guangyao Cheng, Dehua Hu, Yi-Ping Ho, Scott Wu Yuan, Ho-Pui Ho
for: development of a deep-learning-based pipeline for the automated classification and quantification of GFP-labeled microreactors
methods: a deep learning algorithm that automatically segments and classifies GFP-labeled microreactors, enabling real-time, accurate quantification on standard laboratory fluorescence microscopes
results: the method accurately predicts the size and occupancy status of microreactors, quantifying over 2,000 microreactors (across 10 images) within 2.5 seconds across a wide dynamic range
Abstract
Absolute quantification of biological samples entails determining expression levels in precise numerical copies, offering enhanced accuracy and superior performance for rare templates. However, existing methodologies suffer from significant limitations: flow cytometers are both costly and intricate, while fluorescence imaging relying on software tools or manual counting is time-consuming and prone to inaccuracies. In this study, we have devised a comprehensive deep-learning-enabled pipeline that enables the automated segmentation and classification of GFP (green fluorescent protein)-labeled microreactors, facilitating real-time absolute quantification. Our findings demonstrate the efficacy of this technique in accurately predicting the sizes and occupancy status of microreactors using standard laboratory fluorescence microscopes, thereby providing precise measurements of template concentrations. Notably, our approach exhibits an analysis speed of quantifying over 2,000 microreactors (across 10 images) within a remarkable 2.5 seconds, and a dynamic range spanning from 56.52 to 1569.43 copies per microliter. Furthermore, our Deep-dGFP algorithm showcases remarkable generalization capabilities, as it can be directly applied to various GFP-labeling scenarios, including droplet-based, microwell-based, and agarose-based biological applications. To the best of our knowledge, this represents the first successful implementation of an all-in-one image analysis algorithm in droplet digital PCR (polymerase chain reaction), microwell digital PCR, droplet single-cell sequencing, agarose digital PCR, and bacterial quantification, without necessitating any transfer learning steps, modifications, or retraining procedures. We firmly believe that our Deep-dGFP technique will be readily embraced by biomedical laboratories and holds potential for further development in related clinical applications.
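The counting task that Deep-dGFP automates can be sketched with a classical threshold-and-label baseline on a synthetic fluorescence image. This is not the paper's deep-learning model; the image, intensity values, and threshold are all hypothetical, and the sketch only shows what "segmenting and counting occupied microreactors" means operationally.

```python
import numpy as np
from scipy.ndimage import label

# Hypothetical illustration: a dark-background fluorescence image with
# three bright (GFP-positive) "microreactors" painted in as squares.
rng = np.random.default_rng(0)
image = rng.normal(10.0, 2.0, size=(64, 64))  # background noise
for r, c in [(10, 10), (30, 40), (50, 20)]:
    image[r:r + 6, c:c + 6] += 100.0          # GFP-positive regions

mask = image > 50.0                 # crude intensity threshold
labels, n_reactors = label(mask)    # connected-component labelling
sizes = np.bincount(labels.ravel())[1:]  # pixel area of each reactor

print(n_reactors)      # 3 detected microreactors
print(sizes.tolist())  # [36, 36, 36] pixels each
```

A deep-learning segmenter replaces the fixed threshold with learned per-pixel classification, which is what allows one model to generalize across droplet, microwell, and agarose imagery where a single global threshold would fail.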