results: The paper proposes a new instantaneous frequency estimation formula that achieves high accuracy in single-phase systems. Numerical examples show that the formula is accurate and stable in unbalanced and single-phase systems.
Abstract
The paper discusses the relationships between electrical quantities, namely voltages and frequency, and affine differential geometry ones, namely affine arc length and curvature. Moreover, it establishes a link between frequency and time derivatives of voltage, through the utilization of affine differential geometry invariants. Based on this link, a new instantaneous frequency estimation formula is proposed, which is particularly suited for unbalanced systems. An application of the proposed formula to single-phase systems is also provided. Several numerical examples based on balanced, unbalanced, as well as single-phase systems illustrate the findings of the paper.
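The paper's affine-geometry formula itself is not reproduced in this digest. As a point of reference only, a minimal sketch of derivative-based instantaneous frequency estimation for a single-phase sinusoid (a classical three-point estimator, not the paper's formula) might look like:

```python
import numpy as np

def estimate_frequency(v, dt):
    """Three-point instantaneous frequency estimate for a sampled
    sinusoidal voltage v(t) = V*cos(2*pi*f*t + phi).

    For a pure sinusoid, (v[n-1] + v[n+1]) / (2*v[n]) = cos(2*pi*f*dt),
    so f can be recovered sample by sample. Samples where |v[n]| is
    small are skipped to avoid division blow-up.
    """
    v = np.asarray(v, dtype=float)
    num = v[:-2] + v[2:]
    den = 2.0 * v[1:-1]
    mask = np.abs(den) > 0.1 * np.max(np.abs(v))
    ratio = np.clip(num[mask] / den[mask], -1.0, 1.0)
    omega = np.arccos(ratio) / dt          # angular frequency, rad/s
    return float(np.median(omega) / (2 * np.pi))

# 50 Hz single-phase test signal sampled at 10 kHz
fs, f0 = 10_000.0, 50.0
t = np.arange(0, 0.2, 1 / fs)
v = 325.0 * np.cos(2 * np.pi * f0 * t + 0.3)
print(round(estimate_frequency(v, 1 / fs), 3))  # → 50.0
```

On a clean sinusoid this recovers the frequency exactly; its sensitivity to noise and unbalance is precisely the kind of limitation the paper's affine-invariant formula is designed to address.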
Channel Estimation via Loss Field: Accurate Site-Trained Modeling for Shadowing Prediction
methods: The paper proposes a new channel model, Channel Estimation using Loss Field (CELF), which uses channel loss measurements from a network deployed in the area and Bayesian linear regression to estimate a site-specific loss field.
results: Extensive measurements show that CELF lowers the variance of channel estimates by up to 56% and outperforms 3 popular machine learning methods in variance reduction and training efficiency.
Abstract
Future mobile ad hoc networks will share spectrum between many users. Channels will be assigned on the fly to guarantee signal and interference power requirements for requested links. Channel losses must be re-estimated between many pairs of users as they move and as environmental conditions change. Computational complexity must be low, precluding the use of some accurate but computationally intensive site-specific channel models. Channel model errors must be low, precluding the use of standard statistical channel models. We propose a new channel model, CELF, which uses channel loss measurements from a deployed network in the area and a Bayesian linear regression method to estimate a site-specific loss field for the area. The loss field is explainable as the site's 'shadowing' of the radio propagation across the area of interest, but it requires no site-specific terrain or building information. Then, for any arbitrary pair of transmitter and receiver positions, CELF sums the loss field near the link line to estimate its channel loss. We use extensive measurements to show that CELF lowers the variance of channel estimates by up to 56%. It outperforms 3 popular machine learning methods in variance reduction and training efficiency.
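A minimal sketch of the Bayesian-linear-regression step on synthetic data could look as follows. The pixel grid, binary strip weights around each link line, unit Gaussian prior, and all numerical values are assumptions of this sketch; the abstract does not specify CELF's discretization or prior.

```python
import numpy as np

rng = np.random.default_rng(0)
G = 10                                 # G x G pixel grid over a unit square
xs, ys = np.meshgrid(np.arange(G) + 0.5, np.arange(G) + 0.5)
pix = np.column_stack([xs.ravel(), ys.ravel()]) / G   # pixel centres

def link_weights(tx, rx, width=0.08):
    """Weight of each pixel for one link: 1 if the pixel centre lies
    within `width` of the segment tx-rx, else 0 (a crude strip model)."""
    d = rx - tx
    L = np.linalg.norm(d)
    t = np.clip((pix - tx) @ d / L**2, 0.0, 1.0)      # projection onto segment
    dist = np.linalg.norm(pix - (tx + t[:, None] * d), axis=1)
    return (dist < width).astype(float)

# synthetic "shadowing" field and noisy link-loss measurements
w_true = rng.normal(0, 1, G * G)
n_links, sigma = 400, 0.1
A = np.empty((n_links, G * G))
ends = rng.uniform(0, 1, (n_links, 4))
for i, (x1, y1, x2, y2) in enumerate(ends):
    A[i] = link_weights(np.array([x1, y1]), np.array([x2, y2]))
y = A @ w_true + rng.normal(0, sigma, n_links)

# Bayesian linear regression: posterior mean under a prior w ~ N(0, I)
w_hat = np.linalg.solve(A.T @ A / sigma**2 + np.eye(G * G),
                        A.T @ y / sigma**2)
print(round(float(np.corrcoef(w_hat, w_true)[0, 1]), 2))
```

For a new transmitter-receiver pair, the channel loss estimate is then `link_weights(tx, rx) @ w_hat`, i.e. the loss field summed near the link line, exactly as the abstract describes.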
Measuring Thermal Profiles in High Explosives using Neural Networks
results: By analyzing experimental and simulated data, this work provides a means of assessing the safety status of high explosive material and of obtaining internal temperature profile measurements relevant to a range of applications. It also examines how the number of acoustic receivers and the resolution of the temperature prediction affect the algorithm's accuracy.
Abstract
We present a new method for calculating the temperature profile in high explosive (HE) material using a Convolutional Neural Network (CNN). To train/test the CNN, we have developed a hybrid experiment/simulation method for collecting acoustic and temperature data. We experimentally heat cylindrical containers of HE material until detonation/deflagration, where we continuously measure the acoustic bursts through the HE using multiple acoustic transducers lined around the exterior container circumference. However, measuring the temperature profile in the HE in experiment would require inserting a high number of thermal probes, which would disrupt the heating process. Thus, we use two thermal probes, one at the HE center and one at the wall. We then use finite element simulation of the heating process to calculate the temperature distribution, and correct the simulated temperatures based on the experimental center and wall temperatures. We calculate temperature errors on the order of 15 °C, which is approximately 12% of the range of temperatures in the experiment. We also investigate how the algorithm accuracy is affected by the number of acoustic receivers used to collect each measurement and the resolution of the temperature prediction. This work provides a means of assessing the safety status of HE material, which cannot be achieved using existing temperature measurement methods. Additionally, it has implications for a range of other applications where internal temperature profile measurements would provide critical information. These applications include detecting chemical reactions, observing thermodynamic processes like combustion, monitoring metal or plastic casting, determining the energy density in thermal storage capsules, and identifying abnormal battery operation.
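The correction step described above (adjusting simulated temperatures with the center and wall probe readings) could be sketched as follows, under the assumption of a linear-in-radius blend of the two probe offsets; the paper's actual correction scheme may differ.

```python
import numpy as np

def correct_profile(t_sim, r, t_center_exp, t_wall_exp):
    """Correct a simulated radial temperature profile using two
    experimental probes (one at the cylinder centre, one at the wall).

    Assumed scheme: compute the simulation error at r=0 and r=R, blend
    the two offsets linearly in radius, and add the blend to the profile.
    """
    r = np.asarray(r, float)
    t_sim = np.asarray(t_sim, float)
    err_center = t_center_exp - t_sim[0]     # r[0] is the centre
    err_wall = t_wall_exp - t_sim[-1]        # r[-1] is the wall
    frac = (r - r[0]) / (r[-1] - r[0])       # 0 at centre, 1 at wall
    return t_sim + (1 - frac) * err_center + frac * err_wall

r = np.linspace(0.0, 0.05, 11)               # radius in metres (toy values)
t_sim = 200.0 - 1500.0 * r                   # toy simulated profile, degC
corrected = correct_profile(t_sim, r, t_center_exp=205.0, t_wall_exp=120.0)
print(round(corrected[0], 6), round(corrected[-1], 6))  # → 205.0 120.0
```

By construction the corrected profile matches both probe readings exactly while keeping the simulated shape in between.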
Ordered Reliability Direct Error Pattern Testing Decoding Algorithm
paper_authors: Reza Hadavian, Xiaoting Huang, Dmitri Truhachev, Kamal El-Sankary, Hamid Ebrahimzad, Hossein Najafi
for: The paper proposes a new universal soft-decision decoding algorithm for binary block codes.
methods: The algorithm uses ordered reliability direct error pattern testing (ORDEPT). Tests on a variety of popular short high-rate codes show that ORDEPT achieves lower decoding error probability and latency than other decoding algorithms of comparable complexity.
results: ORDEPT efficiently finds multiple candidate codewords and improves the generation of soft output in iterative decoding.
Abstract
We introduce a novel universal soft-decision decoding algorithm for binary block codes called ordered reliability direct error pattern testing (ORDEPT). Our results, obtained for a variety of popular short high-rate codes, demonstrate that ORDEPT outperforms state-of-the-art decoding algorithms of comparable complexity such as ordered reliability bits guessing random additive noise decoding (ORBGRAND) in terms of the decoding error probability and latency. The improvements carry on to the iterative decoding of product codes and convolutional product-like codes, where we present a new adaptive decoding algorithm and demonstrate the ability of ORDEPT to efficiently find multiple candidate codewords to produce soft output.
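ORDEPT's exact testing schedule is not given in the abstract. The general flavour of ordered-reliability error pattern testing, sketched here on a (7,4) Hamming code with an assumed pattern order, is to flip combinations of the least reliable bits until the syndrome clears:

```python
import numpy as np
from itertools import combinations

# parity-check matrix of the (7,4) Hamming code (columns are 1..7 in binary)
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def decode(llr):
    """Test error patterns over the least reliable positions, in order of
    increasing weight, until the syndrome clears. This mirrors the idea
    of ordered-reliability error pattern testing, not ORDEPT's exact
    schedule or candidate-list generation."""
    hard = (np.asarray(llr) < 0).astype(int)   # hard decisions from LLRs
    order = np.argsort(np.abs(llr))            # least reliable first
    for w in range(4):
        for pos in combinations(order[:5], w):
            cand = hard.copy()
            cand[list(pos)] ^= 1
            if not (H @ cand % 2).any():       # zero syndrome: valid codeword
                return cand
    return hard                                # give up (not reached here)

llr = np.array([3.1, 2.7, -0.4, 2.9, 3.3, 2.5, 2.8])  # bit 2 received unreliably
print(decode(llr))  # flips the least reliable bit, recovering the all-zero codeword
```

Collecting every pattern that clears the syndrome, rather than stopping at the first, is what yields the multiple candidate codewords the abstract uses to produce soft output.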
One-Bit Byzantine-Tolerant Distributed Learning via Over-the-Air Computation
results: The authors analyze the framework's performance under Byzantine attacks and wireless impairments (channel fading and receiver noise), characterize its convergence, and demonstrate through simulations its robustness and reliability for distributed learning.
Abstract
Distributed learning has become a promising computational parallelism paradigm that enables a wide scope of intelligent applications from the Internet of Things (IoT) to autonomous driving and the healthcare industry. This paper studies distributed learning in wireless data center networks, which contain a central edge server and multiple edge workers to collaboratively train a shared global model and benefit from parallel computing. However, the distributed nature causes the vulnerability of the learning process to faults and adversarial attacks from Byzantine edge workers, as well as the severe communication and computation overhead induced by the periodical information exchange process. To achieve fast and reliable model aggregation in the presence of Byzantine attacks, we develop a signed stochastic gradient descent (SignSGD)-based Hierarchical Vote framework via over-the-air computation (AirComp), where one voting process is performed locally at the wireless edge by taking advantage of Bernoulli coding while the other is operated over-the-air at the central edge server by utilizing the waveform superposition property of the multiple-access channels. We comprehensively analyze the proposed framework on the impacts including Byzantine attacks and the wireless environment (channel fading and receiver noise), followed by characterizing the convergence behavior under non-convex settings. Simulation results validate our theoretical achievements and demonstrate the robustness of our proposed framework in the presence of Byzantine attacks and receiver noise.
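The full local/over-the-air voting pipeline is more involved than can be shown here, but the core SignSGD majority-vote aggregation that makes the scheme Byzantine-tolerant can be sketched as follows (synthetic gradients and sign-flipping attackers are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_workers, n_byz = 8, 10, 2

# a "true" gradient with well-separated signs (synthetic, for illustration)
true_grad = 2.0 * rng.choice([-1.0, 1.0], size=dim)

votes = []
for i in range(n_workers):
    g = true_grad + rng.normal(scale=0.5, size=dim)  # local stochastic gradient
    s = np.sign(g)                                   # SignSGD: transmit signs only
    if i < n_byz:
        s = -s                                       # Byzantine worker: flip signs
    votes.append(s)

# majority vote at the server; over-the-air computation would realise this
# sum via the waveform superposition property of the multiple-access channel
agg = np.sign(np.sum(votes, axis=0))
print(bool((agg == np.sign(true_grad)).all()))
```

Because each worker contributes only a ±1 vote per coordinate, a minority of Byzantine workers cannot overturn the per-coordinate majority, which is the intuition behind the framework's robustness guarantees.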
Parallel Log Spectra index (PaLOSi): a quality metric in large scale resting EEG preprocessing
results: The paper shows that PaLOS can lead to incorrect connectivity analysis results and proposes a PaLOS index (PaLOSi), based on common principal component analysis, to detect its presence. Evaluated on 30094 EEG recordings from 5 databases, PaLOSi detects flawed preprocessing results and is robust across recording configurations.
Abstract
Toward large scale electrophysiology data analysis, many preprocessing pipelines have been developed to reject artifacts as a prerequisite step before downstream analysis. A mainstay of these pipelines is the data-driven approach of Independent Component Analysis (ICA). Nevertheless, little effort has been devoted to preprocessing quality control. In this paper, we pay careful attention to this issue, motivated by the observation that, after running an ICA-based preprocessing pipeline, some subjects show approximately Parallel multichannel Log power Spectra (PaLOS), i.e., multichannel power spectra that are proportional to each other. Firstly, the presence of PaLOS and its implications for connectivity analysis are described with a real instance and a simulation; secondly, we build its mathematical model and propose the PaLOS index (PaLOSi), based on common principal component analysis, to detect its presence; thirdly, the performance of PaLOSi is tested on 30094 cases of EEG from 5 databases. The results show that 1) PaLOS implies a sole source, which is physiologically implausible; 2) PaLOSi can detect the excessive elimination of brain components and is robust with respect to channel number, electrode layout, reference, and other factors; 3) PaLOSi outputs channel- and frequency-wise indices to support in-depth checks. This paper presents the PaLOS issue in the quality control step after running a preprocessing pipeline, and the proposed PaLOSi may serve as a novel data quality metric in large-scale automatic preprocessing.
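PaLOSi itself is defined via common principal component analysis. A simplified stand-in score, which treats "parallel log spectra" as a rank-1 structure after removing per-channel offsets (an assumption of this sketch, not the paper's definition), could be:

```python
import numpy as np

def palos_score(x):
    """Fraction of variance of the offset-removed multichannel log power
    spectra captured by a single common spectral shape (channels x samples
    input). A score near 1 means the log spectra are approximately
    parallel, i.e. the spectra are proportional across channels."""
    spec = np.abs(np.fft.rfft(x, axis=1)) ** 2 + 1e-12
    logs = np.log(spec)
    centred = logs - logs.mean(axis=1, keepdims=True)  # drop per-channel offset
    s = np.linalg.svd(centred, compute_uv=False)
    return float(s[0] ** 2 / np.sum(s ** 2))

rng = np.random.default_rng(2)
src = rng.normal(size=2048)
gains = np.array([0.5, 1.0, 2.0, 3.0])
single = gains[:, None] * src        # one source only: spectra proportional
multi = rng.normal(size=(4, 2048))   # four independent channels
print(palos_score(single) > 0.99, palos_score(multi) < 0.9)
```

The single-source case scores essentially 1 because scaled copies of one signal have log spectra that differ only by a constant, which is exactly the physiologically implausible "sole source" situation PaLOSi is designed to flag.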
Supporting UAVs with Edge Computing: A Review of Opportunities and Challenges
results: The review finds that supporting UAVs with edge computing yields measurable improvements in task completion speed, energy efficiency, and reliability, with applications across multiple domains and industries.
Abstract
Over the last years, Unmanned Aerial Vehicles (UAVs) have seen significant advancements in sensor capabilities and computational abilities, allowing for efficient autonomous navigation and visual tracking applications. However, the demand for computationally complex tasks has increased faster than advances in battery technology. This opens up possibilities for improvements using edge computing. In edge computing, edge servers can achieve lower latency responses compared to traditional cloud servers through strategic geographic deployments. Furthermore, these servers can maintain superior computational performance compared to UAVs, as they are not limited by battery constraints. Combining these technologies by aiding UAVs with edge servers, research finds measurable improvements in task completion speed, energy efficiency, and reliability across multiple applications and industries. This systematic literature review aims to analyze the current state of research and collect, select, and extract the key areas where UAV activities can be supported and improved through edge computing.
Deep Learning Based Detection on RIS Assisted RSM and RSSK Techniques
results: Monte Carlo simulation results show that B-DNN achieves bit error rate performance close to that of maximum likelihood (ML) detection and outperforms the greedy detector.
Abstract
The reconfigurable intelligent surface (RIS) is considered a crucial technology for the future of wireless communication. Recently, there has been significant interest in combining RIS with spatial modulation (SM) or space shift keying (SSK) to achieve a balance between spectral and energy efficiency. In this paper, we have investigated the use of deep learning techniques for detection in RIS-aided received SM (RSM)/received-SSK (RSSK) systems over Weibull fading channels, specifically by extending the RIS-aided SM/SSK system to a specific case of the conventional SM system. By employing the concept of neural networks, the study focuses on model-driven deep learning detection namely block deep neural networks (B-DNN) for RIS-aided SM systems and compares its performance against maximum likelihood (ML) and greedy detectors. Finally, it has been demonstrated by Monte Carlo simulation that while B-DNN achieved a bit error rate (BER) performance close to that of ML, it gave better results than the Greedy detector.
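A toy comparison of ML detection against a naive greedy baseline for an SSK-style index-detection problem might look as follows. The synthetic Gaussian channels, noise level, and the particular greedy rule are assumptions of this sketch; the paper's RIS-aided system and Weibull fading are not modelled.

```python
import numpy as np

rng = np.random.default_rng(3)
n_idx, n_trials, noise_sd = 4, 4000, 0.3

errors_ml = errors_greedy = 0
for _ in range(n_trials):
    # per-trial complex channel coefficient for each candidate index
    h = rng.normal(size=n_idx) + 1j * rng.normal(size=n_idx)
    tx = int(rng.integers(n_idx))                 # transmitted (antenna) index
    y = h[tx] + noise_sd * (rng.normal() + 1j * rng.normal())
    # ML detector: nearest channel coefficient in Euclidean distance
    ml = int(np.argmin(np.abs(y - h)))
    # naive greedy baseline: largest correlation magnitude, which here
    # degenerates to picking the strongest channel regardless of y
    greedy = int(np.argmax(np.abs(np.conj(h) * y)))
    errors_ml += ml != tx
    errors_greedy += greedy != tx

print(errors_ml / n_trials, errors_greedy / n_trials)
```

The gap between the two detectors illustrates why a learned detector such as B-DNN is benchmarked against ML (the performance ceiling) and greedy (a low-complexity floor).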
Dynamic Resource Management in Integrated NOMA Terrestrial-Satellite Networks using Multi-Agent Reinforcement Learning
paper_authors: Ali Nauman, Haya Mesfer Alshahrani, Nadhem Nemri, Kamal M. Othman, Nojood O Aljehane, Mashael Maashi, Ashit Kumar Dutta, Mohammed Assiri, Wali Ullah Khan
for: The paper addresses the challenges of integrated satellite-terrestrial networks.
methods: It proposes a resource allocation framework that leverages local cache pool deployments and non-orthogonal multiple access (NOMA) to reduce time delays and improve energy efficiency.
results: Using a multi-agent enabled deep deterministic policy gradient algorithm (MADDPG) to optimize user association, cache design, and transmission power control, the proposed approach achieves significantly higher energy efficiency and lower time delays than existing methods.
Abstract
This study introduces a resource allocation framework for integrated satellite-terrestrial networks to address these challenges. The framework leverages local cache pool deployments and non-orthogonal multiple access (NOMA) to reduce time delays and improve energy efficiency. Our proposed approach utilizes a multi-agent enabled deep deterministic policy gradient algorithm (MADDPG) to optimize user association, cache design, and transmission power control, resulting in enhanced energy efficiency. The approach comprises two phases: User Association and Power Control, where users are treated as agents, and Cache Optimization, where the satellite (BS) is considered the agent. Through extensive simulations, we demonstrate that our approach surpasses conventional single-agent deep reinforcement learning algorithms in addressing cache design and resource allocation challenges in integrated terrestrial-satellite networks. Specifically, our proposed approach achieves significantly higher energy efficiency and reduced time delays compared to existing methods.
Random Sampling of Bandlimited Graph Signals from Local Measurements
results: Numerical experiments demonstrate the effectiveness of the proposed methods.
Abstract
The random sampling on graph signals is one of the fundamental topics in graph signal processing. In this letter, we consider the random sampling of k-bandlimited signals from the local measurements and show that no more than O(klogk) measurements with replacement are sufficient for the accurate and stable recovery of any k-bandlimited graph signals. We propose two random sampling strategies based on the minimum measurements, i.e., the optimal sampling and the estimated sampling. The geodesic distance between vertices is introduced to design the sampling probability distribution. Numerical experiments are included to show the effectiveness of the proposed methods.
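A minimal sketch of the recovery setup: sample on the order of k log k vertices with replacement and recover a k-bandlimited signal by least squares on the first k Laplacian eigenvectors. The graph, the uniform sampling distribution (the letter designs it from geodesic distances instead), and the constant in the sample count are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 40, 4                       # n vertices, k-bandlimited signal

# a ring graph with extra random chords; combinatorial Laplacian L = D - W
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
for _ in range(30):
    i, j = rng.integers(n, size=2)
    if i != j:
        W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

# k-bandlimited signal: lives in the span of the first k Laplacian eigenvectors
_, U = np.linalg.eigh(L)
x = U[:, :k] @ rng.normal(size=k)

# O(k log k) random samples with replacement, uniform here for simplicity
m = int(4 * k * np.log(k + 1))
idx = rng.integers(n, size=m)

# least-squares recovery of the k spectral coefficients from the samples
coef, *_ = np.linalg.lstsq(U[idx, :k], x[idx], rcond=None)
x_hat = U[:, :k] @ coef
print(float(np.max(np.abs(x_hat - x))))
```

In the noiseless case the least-squares fit recovers the signal exactly whenever the sampled rows of the eigenvector matrix have full column rank, which is what the O(k log k) sampling bound guarantees with high probability.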