eess.SP - 2023-09-11

A Novel Catastrophic Condition for Periodically Time-varying Convolutional Encoders Based on Time-varying Equivalent Convolutional Encoders

  • paper_url: http://arxiv.org/abs/2309.05849
  • repo_url: None
  • paper_authors: Fan Jiang
  • for: This study considers the problem of determining whether a periodically time-varying convolutional encoder is catastrophic.
  • methods: The letter proposes a novel catastrophic condition based on time-varying equivalent convolutional encoders, together with a technique for converting a catastrophic periodically time-varying convolutional encoder into a non-catastrophic one.
  • results: Using time-varying equivalent convolutional encoders reduces the complexity of the catastrophic condition, whose time complexity grows linearly with the encoder memory, and enables the conversion of catastrophic encoders into non-catastrophic ones.
    Abstract A convolutional encoder is said to be catastrophic if it maps an information sequence of infinite weight into a code sequence of finite weight. As a consequence of this mapping, a finite number of channel errors may cause an infinite number of information bit errors when decoding. This situation should be avoided. A catastrophic condition to determine if a time-invariant convolutional encoder is catastrophic or not is stated in \cite{Massey:LSC}. Palazzo developed this condition for periodically time-varying convolutional encoders in \cite{Palazzo:Analysis}. Since Palazzo's condition is based on the state transition table of the constituent encoders, its complexity increases exponentially with the number of memory elements in the encoders. A novel catastrophic condition making use of time-varying equivalent convolutional encoders is presented in this letter. A technique to convert a catastrophic periodically time-varying convolutional encoder into a non-catastrophic one can also be developed based on these encoders. Since they do not involve the state transitions of the convolutional encoder, the time complexity of these methods grows linearly with the encoder memory.
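
For the time-invariant case cited above, the Massey-Sain result gives a concrete test: a rate-1/n encoder is non-catastrophic iff the GCD of its generator polynomials over GF(2) is a power of D. Below is a minimal sketch of that classical test (not the letter's new time-varying condition); the bitmask polynomial encoding is an implementation choice.

```python
# Massey-Sain test for a rate-1/n time-invariant convolutional encoder.
# Polynomials over GF(2) are encoded as Python ints: bit i is the
# coefficient of D^i. Non-catastrophic iff gcd(g_1,...,g_n) = D^l.

def gf2_mod(a: int, b: int) -> int:
    """Remainder of GF(2) polynomial division a mod b."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def gf2_gcd(a: int, b: int) -> int:
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def is_catastrophic(generators: list[int]) -> bool:
    g = 0
    for p in generators:
        g = gf2_gcd(g, p)
    # Non-catastrophic iff the gcd is a pure power of D (a single set bit).
    return bin(g).count("1") != 1

# Example: the classic catastrophic encoder g1 = 1 + D, g2 = 1 + D^2
# (gcd = 1 + D, not a power of D), versus the non-catastrophic
# (5,7)_octal encoder g1 = 1 + D^2, g2 = 1 + D + D^2 (gcd = 1).
print(is_catastrophic([0b011, 0b101]))  # True  (catastrophic)
print(is_catastrophic([0b101, 0b111]))  # False (non-catastrophic)
```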

Reinforcement Learning for Supply Chain Attacks Against Frequency and Voltage Control

  • paper_url: http://arxiv.org/abs/2309.05814
  • repo_url: https://github.com/amrmsab/rl-cps-attacks
  • paper_authors: Amr S. Mohamed, Sumin Lee, Deepa Kundur
  • for: This paper explores the threat that supply chain attacks pose to modernized power systems and how to defend against them.
  • methods: The paper uses reinforcement learning to develop intelligent attacks, incorporated into supply chain attacks, against generation control devices.
  • results: Through experiments and simulations, the researchers identify potential disturbances to frequency and voltage regulation and provide guidance for defending against supply chain attacks.
    Abstract The ongoing modernization of the power system, involving new equipment installations and upgrades, exposes the power system to the introduction of malware into its operation through supply chain attacks. Supply chain attacks present a significant threat to power systems, allowing cybercriminals to bypass network defenses and execute deliberate attacks at the physical layer. Given the exponential advancements in machine intelligence, cybercriminals will leverage this technology to create sophisticated and adaptable attacks that can be incorporated into supply chain attacks. We demonstrate the use of reinforcement learning for developing intelligent attacks incorporated into supply chain attacks against generation control devices. We simulate potential disturbances impacting frequency and voltage regulation. The presented method can provide valuable guidance for defending against supply chain attacks.
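
As a rough illustration of the approach, the sketch below trains a tabular Q-learning agent to maximize frequency deviation in a toy first-order frequency-control loop. The plant model, discretization, action set, and reward are all illustrative assumptions, not the paper's environment or DRL algorithm.

```python
# Toy sketch: an RL agent learning a disturbance policy against a
# frequency-control loop. The one-bus plant, reward, and action set are
# assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_states = 21
actions = np.array([-0.05, 0.0, 0.05])        # injected setpoint offsets (p.u.)
Q = np.zeros((n_states, len(actions)))

def step(df, u):
    # First-order frequency response: damping pulls df back to 0,
    # u is the attacker's injected offset, plus small process noise.
    df = df + 0.1 * (-0.8 * df + u) + 0.002 * rng.standard_normal()
    return float(np.clip(df, -0.5, 0.5))

def disc(df):
    # Discretize the frequency deviation into n_states bins.
    return int(np.clip((df + 0.5) * (n_states - 1), 0, n_states - 1))

eps, alpha, gamma = 0.1, 0.1, 0.95
for _ in range(500):
    df = 0.0
    for _ in range(100):
        s = disc(df)
        a = int(rng.integers(len(actions))) if rng.random() < eps else int(Q[s].argmax())
        df2 = step(df, actions[a])
        r = abs(df2)                          # attacker rewarded for deviation
        Q[s, a] += alpha * (r + gamma * Q[disc(df2)].max() - Q[s, a])
        df = df2
print(Q.argmax(axis=1))                       # learned action index per frequency bin
```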

Adversarial Score-Based Generative Model for AmBC Channel Estimation

  • paper_url: http://arxiv.org/abs/2309.05776
  • repo_url: None
  • paper_authors: Fatemeh Rezaei, S. Mojtaba Marvasti-Zadeh, Chintha Tellambura, Amine Maaref
  • for: This paper proposes a pioneering deep-learning method, set in a probabilistic framework, for the joint estimation of direct and cascaded channels in an ambient backscatter (AmBC) network with multiple tags.
  • methods: An adversarial score-based generative model is trained to learn the channel distribution; channel estimates are then obtained by sampling from the posterior distribution via annealed Langevin sampling (ALS).
  • results: The method achieves a remarkable improvement over standard least squares (LS) estimation, matching the minimum mean square error (MMSE) estimator for the direct channel and outperforming it for the cascaded channels.
    Abstract This letter presents a pioneering method that employs deep learning within a probabilistic framework for the joint estimation of both direct and cascaded channels in an ambient backscatter (AmBC) network comprising multiple tags. In essence, we leverage an adversarial score-based generative model for training, enabling the acquisition of channel distributions. Subsequently, our channel estimation process involves sampling from the posterior distribution, facilitated by the annealed Langevin sampling (ALS) technique. Notably, our method demonstrates substantial advancements over standard least square (LS) estimation techniques, achieving performance akin to that of the minimum mean square error (MMSE) estimator for the direct channel, and outperforming it for the cascaded channels.
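
The annealed Langevin sampling step at the core of the method is generic and easy to sketch. Below, the trained adversarial score network is replaced by the analytic score of a toy scalar Gaussian posterior, and the noise schedule and step sizes are assumptions.

```python
# Annealed Langevin sampling (ALS) sketch for posterior channel sampling.
# A closed-form Gaussian posterior score stands in for the trained score
# network; schedule and step sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)
y, H, sigma_n = 1.0, 1.0, 0.1                  # toy observation y = H*h + noise
prior_var = 1.0                                # prior h ~ N(0, prior_var)
v_post = 1.0 / (1.0 / prior_var + H**2 / sigma_n**2)   # posterior variance
m_post = v_post * H * y / sigma_n**2                   # posterior mean

def score(h, sigma):
    # Score of the posterior smoothed at noise level sigma.
    return (m_post - h) / (v_post + sigma**2)

sigmas = np.geomspace(1.0, 0.01, 10)           # annealing schedule, largest first
eps = 2e-5
h = rng.standard_normal()
for sigma in sigmas:
    alpha = eps * (sigma / sigmas[-1]) ** 2    # ALS step-size rule
    for _ in range(100):
        h = h + 0.5 * alpha * score(h, sigma) + np.sqrt(alpha) * rng.standard_normal()
print(h, m_post)                               # sample hovers near the posterior mean
```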

Potentials of Deterministic Radio Propagation Simulation for AI-Enabled Localization and Sensing

  • paper_url: http://arxiv.org/abs/2309.05650
  • repo_url: None
  • paper_authors: Albrecht Michler, Jonas Ninnemann, Jakob Krauthäuser, Paul Schwarzbach, Oliver Michler
  • for: This paper supports the development and validation of machine learning and AI-based localization and sensing methods for next-generation networks.
  • methods: The paper proposes an integrated toolchain based on deterministic channel modeling and radio propagation simulation for developing and validating such methods.
  • results: The toolchain is demonstrated on a scenario-classification example that obtains localization-related channel parameters within an aircraft cabin environment.
    Abstract Machine learning (ML) and artificial intelligence (AI) enable new methods for localization and sensing in next-generation networks to fulfill a wide range of use cases. These approaches rely on learning techniques that require large amounts of training and validation data. This paper addresses the data generation bottleneck in developing and validating such methods by proposing an integrated toolchain based on deterministic channel modeling and radio propagation simulation. The toolchain is demonstrated on an exemplary scenario-classification task to obtain localization-related channel parameters within an aircraft cabin environment.

Grid-based Hybrid 3DMA GNSS and Terrestrial Positioning

  • paper_url: http://arxiv.org/abs/2309.05644
  • repo_url: None
  • paper_authors: Paul Schwarzbach, Albrecht Michler, Oliver Michler
  • for: This study aims to enhance GNSS-based localization and navigation by integrating 3D-map-aided (3DMA) GNSS positioning and terrestrial systems into a single 3DMA positioning framework.
  • methods: The study proposes a non-parametric filtering method, specifically a 3DMA multi-epoch grid filter, for the tight integration of GNSS and terrestrial signals; algorithmic challenges such as differing measurement models and time synchronization are also addressed.
  • results: Experiments show an average positioning error of 0.64 m in the static scenario and 1.62 m in the dynamic scenario, demonstrating the feasibility of the proposed method and of including terrestrial signals.
    Abstract The paper discusses the increasing use of hybridized sensor information for GNSS-based localization and navigation, including the use of 3D map-aided GNSS positioning and terrestrial systems based on different geometric measurement principles. However, both GNSS and terrestrial systems are subject to negative impacts from the propagation environment, which can violate the assumptions of conventionally applied parametric state estimators. Furthermore, dynamic parametric state estimation does not account for multi-modalities within the state space, leading to an information loss within the prediction step. In addition, the synchronization of non-deterministic multi-rate measurement systems needs to be accounted for. In order to address these challenges, the paper proposes the use of a non-parametric filtering method, specifically a 3DMA multi-epoch Grid Filter, for the tight integration of GNSS and terrestrial signals. Specifically, the fusion of GNSS, Ultra-wide Band (UWB) and vehicle motion data is introduced based on a discrete state representation. Algorithmic challenges, including the use of different measurement models and time synchronization, are addressed. In order to evaluate the proposed method, real-world tests were conducted on an urban automotive testbed in both static and dynamic scenarios. We empirically show that we achieve sub-meter accuracy in the static scenario by averaging a positioning error of $0.64$ m, whereas in the dynamic scenario the average positioning error amounts to $1.62$ m. The paper provides a proof-of-concept of the introduced method and shows the feasibility of the inclusion of terrestrial signals in a 3DMA positioning framework in order to further enhance localization in GNSS-degraded environments.
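
The grid-filter core of the method is a standard histogram Bayes filter: predict by diffusing the discrete belief, update by multiplying in per-epoch measurement likelihoods. The sketch below does this for UWB-style ranges on a 2-D grid; the anchor layout, noise levels, and Gaussian range model are assumptions, and a tight GNSS integration would multiply in pseudorange likelihoods the same way.

```python
# Histogram (grid) Bayes filter sketch for range-based positioning.
# Anchors, noise levels, and motion kernel are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

xs, ys = np.meshgrid(np.linspace(0, 50, 101), np.linspace(0, 50, 101))
belief = np.ones_like(xs)
belief /= belief.sum()

uwb_anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0]])
truth = np.array([20.0, 30.0])                 # true position to recover

def range_likelihood(anchor, measured, sigma):
    d = np.hypot(xs - anchor[0], ys - anchor[1])
    return np.exp(-0.5 * ((d - measured) / sigma) ** 2)

rng = np.random.default_rng(2)
for epoch in range(10):
    # Predict: diffuse the belief with a Gaussian motion kernel (random walk).
    belief = gaussian_filter(belief, sigma=2.0)
    # Update: fuse one noisy UWB range per anchor.
    for a in uwb_anchors:
        z = np.linalg.norm(truth - a) + 0.3 * rng.standard_normal()
        belief *= range_likelihood(a, z, sigma=0.5)
    belief /= belief.sum()

i, j = np.unravel_index(belief.argmax(), belief.shape)
print(xs[i, j], ys[i, j])                      # MAP estimate, close to (20, 30)
```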

A Comparative Analysis of Deep Reinforcement Learning-based xApps in O-RAN

  • paper_url: http://arxiv.org/abs/2309.05621
  • repo_url: None
  • paper_authors: Maria Tsampazi, Salvatore D’Oro, Michele Polese, Leonardo Bonati, Gwenael Poitau, Michael Healy, Tommaso Melodia
  • for: This paper focuses on the design and evaluation of Deep Reinforcement Learning (DRL) based xApps for Next Generation (NextG) wireless communication systems.
  • methods: The paper performs a comparative analysis of how different DRL-based xApp designs affect network performance, benchmarking 12 xApps that embed DRL agents trained with different reward functions and action spaces and with the ability to hierarchically control different network parameters.
  • results: The paper demonstrates that certain design choices deliver the highest performance, while others can result in competitive behavior between classes of traffic with similar objectives.
    Abstract The highly heterogeneous ecosystem of Next Generation (NextG) wireless communication systems calls for novel networking paradigms where functionalities and operations can be dynamically and optimally reconfigured in real time to adapt to changing traffic conditions and satisfy stringent and diverse Quality of Service (QoS) demands. Open Radio Access Network (RAN) technologies, and specifically those being standardized by the O-RAN Alliance, make it possible to integrate network intelligence into the once monolithic RAN via intelligent applications, namely, xApps and rApps. These applications enable flexible control of the network resources and functionalities, network management, and orchestration through data-driven control loops. Despite recent work demonstrating the effectiveness of Deep Reinforcement Learning (DRL) in controlling O-RAN systems, how to design these solutions in a way that does not create conflicts and unfair resource allocation policies is still an open challenge. In this paper, we perform a comparative analysis where we dissect the impact of different DRL-based xApp designs on network performance. Specifically, we benchmark 12 different xApps that embed DRL agents trained using different reward functions, with different action spaces and with the ability to hierarchically control different network parameters. We prototype and evaluate these xApps on Colosseum, the world's largest O-RAN-compliant wireless network emulator with hardware-in-the-loop. We share the lessons learned and discuss our experimental results, which demonstrate how certain design choices deliver the highest performance while others might result in a competitive behavior between different classes of traffic with similar objectives.

  • paper_url: http://arxiv.org/abs/2309.07162
  • repo_url: None
  • paper_authors: Tanay Rastogi, Michele D. Simoni, Anders Karlström
  • for: This paper proposes a traffic state estimation (TSE) method that infers traffic conditions from partially observed camera data using prior knowledge of traffic patterns.
  • methods: Data from multiple moving cameras are combined into time-space diagrams; the Cell Transmission Model (CTM) is used together with a Genetic Algorithm (GA) to optimize the fundamental diagram parameters and boundary conditions needed for accurate estimation.
  • results: On simulated traffic data generated with the SUMO traffic simulator, the method achieves a low root mean square error (RMSE) of 0.0079 veh/m, comparable to other CTM-based methods.
    Abstract Traffic State Estimation (TSE) is the process of inferring traffic conditions based on partially observed data using prior knowledge of traffic patterns. The type of input data used has a significant impact on the accuracy and methodology of TSE. Traditional TSE methods have relied on data from either stationary sensors like loop detectors or mobile sensors such as GPS-equipped floating cars. However, both approaches have their limitations. This paper proposes a method for estimating traffic states on a road link using vehicle trajectories obtained from cameras mounted on moving vehicles. It involves combining data from multiple moving cameras to construct time-space diagrams and using them to estimate parameters for the link's fundamental diagram (FD) and densities in unobserved regions of space-time. The Cell Transmission Model (CTM) is utilized in conjunction with a Genetic Algorithm (GA) to optimize the FD parameters and boundary conditions necessary for accurate estimation. To evaluate the effectiveness of the proposed methodology, simulated traffic data generated by the SUMO traffic simulator was employed incorporating 140 different space-time diagrams with varying lane density and speed. The evaluation of the simulated data demonstrates the effectiveness of the proposed approach, as it achieves a low root mean square error (RMSE) value of 0.0079 veh/m and is comparable to other CTM-based methods. In conclusion, the proposed TSE method opens new avenues for the estimation of traffic state using an innovative data collection method that uses vehicle trajectories collected from on-board cameras.
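
The CTM forward model that the genetic algorithm calibrates can be sketched in a few lines: each step moves the minimum of the sending (demand) and receiving (supply) flows of a trapezoidal fundamental diagram across cell boundaries. The FD parameters and boundary flows below are illustrative assumptions, not values fitted in the paper.

```python
# One-link Cell Transmission Model (CTM) sketch with a trapezoidal
# fundamental diagram; parameters are illustrative assumptions.
import numpy as np

vf, w, k_jam, q_max = 25.0, 6.0, 0.15, 0.55   # m/s, m/s, veh/m, veh/s
dx, dt = 100.0, 4.0                            # cell length (m), step (s); vf*dt <= dx

def ctm_step(k, inflow, outflow_cap):
    # Sending (demand) and receiving (supply) functions of the FD.
    send = np.minimum(vf * k, q_max)
    recv = np.minimum(w * (k_jam - k), q_max)
    # Flow across each interior boundary is min(demand upstream, supply downstream).
    q = np.minimum(send[:-1], recv[1:])
    q_in = min(inflow, recv[0])                # link entry, throttled by supply
    q_out = min(send[-1], outflow_cap)         # link exit, capped (bottleneck)
    flows_in = np.concatenate(([q_in], q))
    flows_out = np.concatenate((q, [q_out]))
    return k + (dt / dx) * (flows_in - flows_out)

k = np.full(8, 0.02)                           # initial density per cell (veh/m)
for _ in range(200):
    k = ctm_step(k, inflow=0.4, outflow_cap=0.3)
print(np.round(k, 3))                          # queue of higher densities near the exit
```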

ECG-based estimation of respiratory modulation of AV nodal conduction during atrial fibrillation

  • paper_url: http://arxiv.org/abs/2309.05458
  • repo_url: https://github.com/plappertf/ecg-based_estimation_of_respiratory_modulation_of_av_nodal_conduction_during_atrial_fibrillation
  • paper_authors: Felix Plappert, Gunnar Engström, Pyotr G. Platonov, Mikael Wallman, Frida Sandberg
  • for: This study aims to provide an ECG-based assessment of respiratory modulation of AV nodal conduction, which could inform personalized treatment of atrial fibrillation (AF).
  • methods: A 1-dimensional convolutional neural network (1D-CNN) estimates the respiratory modulation of AV nodal refractory period and conduction delay from 1-minute segments of RR series, respiration signals, and atrial fibrillatory rates (AFR); the network is trained on synthetic data generated with a network model of the AV node and then applied to clinical deep breathing test data.
  • results: On synthetic data, the 1D-CNN predicts respiratory modulation from the RR series alone, and prediction improves when the respiration signal, the AFR, or both are added; analysis of clinical deep breathing test data reveals large inter-patient variability in the respiratory modulation.
    Abstract Information about autonomic nervous system (ANS) activity may be valuable for personalized atrial fibrillation (AF) treatment but is not easily accessible from the ECG. In this study, we propose a new approach for ECG-based assessment of respiratory modulation in AV nodal refractory period and conduction delay. A 1-dimensional convolutional neural network (1D-CNN) was trained to estimate respiratory modulation of AV nodal conduction properties from 1-minute segments of RR series, respiration signals, and atrial fibrillatory rates (AFR) using synthetic data that replicates clinical ECG-derived data. The synthetic data were generated using a network model of the AV node and 4 million unique model parameter sets. The 1D-CNN was then used to analyze respiratory modulation in clinical deep breathing test data of 28 patients in AF, where a ECG-derived respiration signal was extracted using a novel approach based on periodic component analysis. We demonstrated using synthetic data that the 1D-CNN can predict the respiratory modulation from RR series alone ($\rho$ = 0.805) and that the addition of either respiration signal ($\rho$ = 0.830), AFR ($\rho$ = 0.837), or both ($\rho$ = 0.855) improves the prediction. Results from analysis of clinical ECG data of 20 patients with sufficient signal quality suggest that respiratory modulation decreased in response to deep breathing for five patients, increased for five patients, and remained similar for ten patients, indicating a large inter-patient variability.
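
A minimal PyTorch sketch of the kind of 1D-CNN regressor described: the input channels stack the per-beat features of a 1-minute segment (RR series, respiration, AFR) and the output is a scalar modulation estimate. Layer sizes and the feature layout are assumptions, not the authors' architecture.

```python
# Sketch of a 1D-CNN regressor for respiratory modulation; layer sizes
# and input layout are illustrative assumptions.
import torch
import torch.nn as nn

class RespModCNN(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),              # pool over the time axis
        )
        self.head = nn.Linear(32, 1)              # scalar modulation estimate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).squeeze(-1))

# One training step on random stand-in data (batch of 1-minute segments).
model = RespModCNN()
x = torch.randn(8, 3, 60)                         # 3 channels, 60 beats
y = torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
print(loss.item())
```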

Opinion Dynamics in Two-Step Process: Message Sources, Opinion Leaders and Normal Agents

  • paper_url: http://arxiv.org/abs/2309.05370
  • repo_url: None
  • paper_authors: Huisheng Wang, Yuejiang Li, Yiqing Lin, H. Vicky Zhao
  • for: This study investigates the dissemination of messages and the evolution of opinions in social networks, focusing on the interaction among message sources, opinion leaders, and normal agents in the two-step process.
  • methods: The study proposes a unified framework, the Two-Step Model, for analyzing opinion evolution during message propagation, and examines the steady-state opinions and the stability of the model.
  • results: Message distribution, initial opinions, level of stubbornness, and preference coefficients all influence the sample mean and variance of the steady-state opinions, and normal agents' opinions tend to be influenced by opinion leaders. Numerical and social experiments show that the Two-Step Model outperforms other models on average, offering insights for opinion guidance in social networks.
    Abstract According to mass media theory, the dissemination of messages and the evolution of opinions in social networks follow a two-step process. First, opinion leaders receive the message from the message sources, and then they transmit their opinions to normal agents. However, most opinion models only consider the evolution of opinions within a single network, which fails to capture the two-step process accurately. To address this limitation, we propose a unified framework called the Two-Step Model, which analyzes the communication process among message sources, opinion leaders, and normal agents. In this study, we examine the steady-state opinions and stability of the Two-Step Model. Our findings reveal that several factors, such as message distribution, initial opinion, level of stubbornness, and preference coefficient, influence the sample mean and variance of steady-state opinions. Notably, normal agents' opinions tend to be influenced by opinion leaders in the two-step process. We also conduct numerical and social experiments to validate the accuracy of the Two-Step Model, which outperforms other models on average. Our results provide valuable insights into the factors that shape social opinions and can guide the development of effective strategies for opinion guidance in social networks.
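
A minimal numerical sketch of a two-step update with stubbornness: leaders blend their initial opinions with the message sources (and each other), and normal agents blend their initial opinions with the leaders. The weight matrices, stubbornness levels, and network sizes are assumed for illustration; the paper's model and analysis are more general.

```python
# Two-step opinion dynamics sketch (sources -> leaders -> normal agents);
# weights and stubbornness levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_src, n_lead, n_norm = 20, 5, 50
msg = rng.normal(1.0, 0.2, size=n_src)             # message-source opinions
x_lead0 = rng.uniform(-1, 1, n_lead)               # initial leader opinions
x_norm0 = rng.uniform(-1, 1, n_norm)               # initial normal-agent opinions
x_lead, x_norm = x_lead0.copy(), x_norm0.copy()
A = rng.dirichlet(np.ones(n_src), size=n_lead)     # leader -> source weights
L = rng.dirichlet(np.ones(n_lead), size=n_lead)    # leader -> leader weights
W = rng.dirichlet(np.ones(n_lead), size=n_norm)    # agent -> leader weights
stub_lead, stub_norm = 0.3, 0.1                    # weight kept on own initial view

for _ in range(200):
    # Step 1: leaders mix own initial opinion, sources, and other leaders.
    x_lead = stub_lead * x_lead0 + (1 - stub_lead) * 0.5 * (A @ msg + L @ x_lead)
    # Step 2: normal agents mix own initial opinion with the leaders.
    x_norm = stub_norm * x_norm0 + (1 - stub_norm) * (W @ x_lead)

print(x_norm.mean(), x_norm.var())                 # steady-state sample mean / variance
```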

Low Peak-to-Average Power Ratio FBMC-OQAM System based on Data Mapping and DFT Precoding

  • paper_url: http://arxiv.org/abs/2309.05278
  • repo_url: None
  • paper_authors: Liming Li, Liqin Ding, Yang Wang, Jiliang Zhang
  • for: To enable more flexible spectrum usage with FBMC-OQAM while reducing the peak-to-average power ratio (PAPR).
  • methods: The transmitted data symbols are first mapped with a conjugate symmetry rule and then coded by the DFT, which avoids the OQAM pre-processing.
  • results: The proposed scheme achieves better PAPR reduction than simple DFT spreading; numerical simulation also shows a trade-off between the PAPR and out-of-band performance induced by the prototype filter.
    Abstract Filter bank multicarrier with offset quadrature amplitude modulation (FBMC-OQAM) is an alternative to OFDM for enhanced spectrum flexible usage. To reduce the peak-to-average power ratio (PAPR), DFT spreading is usually adopted in OFDM systems. However, in FBMC-OQAM systems, because the OQAM pre-processing splits the spread data into the real and imaginary parts, the DFT spreading can result in only marginal PAPR reduction. This letter proposes a novel map-DFT-spread FBMC-OQAM scheme. In this scheme, the transmitting data symbols are first mapped with a conjugate symmetry rule and then coded by the DFT. According to this method, the OQAM pre-processing can be avoided. Compared with the simple DFT-spread scheme, the proposed scheme achieves a better PAPR reduction. In addition, the effect of the prototype filter on the PAPR is studied via numerical simulation and a trade-off exists between the PAPR and out-of-band performances.
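
The mapping idea is easy to demonstrate: arranging the QAM symbols with a conjugate-symmetry rule makes the DFT output real-valued, so it can drive the real-valued OQAM lattice without the usual real/imaginary split, while the DFT spreading collapses the multicarrier PAPR. The sketch below shows both effects on a plain IFFT waveform (no prototype filter); the block size and scalings are assumptions.

```python
# Conjugate-symmetric mapping + DFT spreading sketch; block size and
# scalings are illustrative, and no FBMC prototype filter is applied.
import numpy as np

rng = np.random.default_rng(4)
M = 64                                   # subcarriers per block

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

qam = (rng.integers(0, 2, M // 2 - 1) * 2 - 1) \
    + 1j * (rng.integers(0, 2, M // 2 - 1) * 2 - 1)
# Conjugate-symmetric arrangement: X[0] = X[M/2] = 0, X[M-k] = conj(X[k]).
X = np.zeros(M, dtype=complex)
X[1:M // 2] = qam
X[M // 2 + 1:] = np.conj(qam[::-1])

spread = np.fft.fft(X)                   # real-valued up to numerical error
print(np.abs(spread.imag).max())         # ~1e-14: DFT output is real

# Compare PAPR of the un-spread vs DFT-spread multicarrier waveform.
x_plain = np.fft.ifft(X) * M
x_spread = np.fft.ifft(spread) * np.sqrt(M)
print(papr_db(x_plain), papr_db(x_spread))   # spreading cuts the PAPR sharply
```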

Deep photonic reservoir computing recurrent network

  • paper_url: http://arxiv.org/abs/2309.05246
  • repo_url: None
  • paper_authors: Cheng Wang
  • for: To improve the capability of hardware reservoir computing for solving real-world complex tasks.
  • methods: A deep photonic reservoir computing (PRC) architecture built by cascading injection-locked semiconductor lasers, demonstrated with 4 hidden layers and 320 interconnected neurons and all-optical connections between successive layers.
  • results: The deep PRC shows strong nonlinearity-compensation capability in the signal equalization of an optical fiber communication system.
    Abstract Deep neural networks usually process information through multiple hidden layers. However, most hardware reservoir computing recurrent networks only have one hidden reservoir layer, which significantly limits the capability of solving real-world complex tasks. Here we show a deep photonic reservoir computing (PRC) architecture, which is constructed by cascading injection-locked semiconductor lasers. In particular, the connection between successive hidden layers is all optical, without any optical-electrical conversion or analog-digital conversion. The proof of concept is demonstrated on a PRC consisting of 4 hidden layers and 320 interconnected neurons. In addition, we apply the deep PRC in the real-world signal equalization of an optical fiber communication system. It is found that the deep PRC owns strong ability to compensate the nonlinearity of fibers.
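
A software analogue of the deep reservoir is a cascade of untrained recurrent layers with a single trained linear readout. The sketch below builds a 4-layer, 320-node echo-state cascade and runs it on a toy memory-nonlinearity equalization task standing in for fiber equalization; it models none of the laser physics, and all sizes and gains are assumptions.

```python
# Deep (cascaded) echo-state reservoir sketch; sizes, gains, and the toy
# distortion are illustrative assumptions, not the photonic hardware.
import numpy as np

rng = np.random.default_rng(5)
layers, n = 4, 80                          # 4 hidden layers x 80 nodes = 320 total
Win = [rng.normal(0, 0.5, (n, 1 if l == 0 else n)) for l in range(layers)]
W = []
for _ in range(layers):
    A = rng.normal(0, 1, (n, n))
    W.append(0.9 * A / np.max(np.abs(np.linalg.eigvals(A))))  # spectral radius 0.9

def run(u):
    states = [np.zeros(n) for _ in range(layers)]
    out = []
    for ut in u:
        x = np.array([ut])
        for l in range(layers):            # each layer is driven by the previous one
            states[l] = np.tanh(Win[l] @ x + W[l] @ states[l])
            x = states[l]
        out.append(np.concatenate(states))
    return np.array(out)

# Toy stand-in for fiber equalization: recover u from a cubic,
# memory-distorted observation d.
u = rng.uniform(-1, 1, 2000)
d = u + 0.3 * np.roll(u, 1) ** 3
S = run(d)
ridge = np.linalg.solve(S.T @ S + 1e-3 * np.eye(S.shape[1]), S.T @ u)
print(np.mean((S @ ridge - u) ** 2))       # equalized-output MSE
```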

Joint Beamforming and Compression Design for Per-Antenna Power Constrained Cooperative Cellular Networks

  • paper_url: http://arxiv.org/abs/2309.05226
  • repo_url: None
  • paper_authors: Xilai Fan, Ya-Feng Liu, Bo Jiang
  • for: This paper considers the joint beamforming and compression problem with per-antenna power constraints in cooperative cellular networks, where relay-like base stations are connected to a central processor (CP) via rate-limited fronthaul links.
  • methods: The authors first establish the equivalence between the considered problem and its semidefinite relaxation (SDR). They then derive the partial Lagrangian dual of the SDR problem, show that the dual objective is differentiable, and propose two efficient projected gradient ascent algorithms: projected exact gradient ascent (PEGA) and projected inexact gradient ascent (PIGA).
  • results: Numerical experiments demonstrate the global optimality and high efficiency of the proposed algorithms.
    Abstract In the cooperative cellular network, relay-like base stations are connected to the central processor (CP) via rate-limited fronthaul links and the joint processing is performed at the CP, which thus can effectively mitigate the multiuser interference. In this paper, we consider the joint beamforming and compression problem with per-antenna power constraints in the cooperative cellular network. We first establish the equivalence between the considered problem and its semidefinite relaxation (SDR). Then we further derive the partial Lagrangian dual of the SDR problem and show that the objective function of the obtained dual problem is differentiable. Based on the differentiability, we propose two efficient projected gradient ascent algorithms for solving the dual problem, which are projected exact gradient ascent (PEGA) and projected inexact gradient ascent (PIGA). While PEGA is guaranteed to find the global solution of the dual problem (and hence the global solution of the original problem), PIGA is more computationally efficient due to the lower complexity in inexactly computing the gradient. Global optimality and high efficiency of the proposed algorithms are demonstrated via numerical experiments.
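
The PEGA/PIGA template is plain projected gradient ascent on a differentiable concave dual. The sketch below runs it on a toy quadratic dual with a nonnegativity constraint standing in for the paper's partial Lagrangian dual; Q, b, the step size, and the noisy-gradient flag (a crude stand-in for the inexact gradient of PIGA) are assumptions.

```python
# Projected gradient ascent sketch on a toy concave dual
# g(lam) = -0.5 lam^T Q lam + b^T lam over lam >= 0.
import numpy as np

rng = np.random.default_rng(6)
m = 8
A = rng.normal(size=(m, m))
Q = A @ A.T + np.eye(m)                  # positive definite -> strictly concave dual
b = rng.normal(size=m)

def grad(lam, inexact=False):
    g = b - Q @ lam
    if inexact:                          # PIGA-style: cheaper, noisy gradient
        g = g + 0.01 * rng.normal(size=m)
    return g

lam = np.zeros(m)
step = 1.0 / np.linalg.norm(Q, 2)        # 1/L for the L-smooth dual
for _ in range(500):
    # Ascent step followed by projection onto the nonnegative orthant.
    lam = np.maximum(lam + step * grad(lam), 0.0)

print(lam)                               # approximate dual-optimal multipliers
print(grad(lam))                         # ~0 where lam > 0, <= 0 where lam = 0 (KKT)
```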

Quaternion MLP Neural Networks Based on the Maximum Correntropy Criterion

  • paper_url: http://arxiv.org/abs/2309.05208
  • repo_url: None
  • paper_authors: Gang Wang, Xinyu Tian, Zuxuan Zhang
  • for: This paper proposes a gradient ascent algorithm for quaternion multilayer perceptron (MLP) networks based on the cost function of the maximum correntropy criterion (MCC).
  • methods: The algorithm uses the split quaternion activation function based on the generalized Hamilton-real quaternion gradient. A new quaternion operator is introduced to rewrite the early quaternion single-layer perceptron algorithm; a gradient descent algorithm for the quaternion MLP based on the mean square error (MSE) cost function is then proposed and extended to the MCC cost function.
  • results: Simulations show the feasibility of the proposed method.
    Abstract We propose a gradient ascent algorithm for quaternion multilayer perceptron (MLP) networks based on the cost function of the maximum correntropy criterion (MCC). In the algorithm, we use the split quaternion activation function based on the generalized Hamilton-real quaternion gradient. By introducing a new quaternion operator, we first rewrite the early quaternion single layer perceptron algorithm. Secondly, we propose a gradient descent algorithm for quaternion multilayer perceptron based on the cost function of the mean square error (MSE). Finally, the MSE algorithm is extended to the MCC algorithm. Simulations show the feasibility of the proposed method.
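
The MCC cost itself is compact enough to show directly: correntropy replaces the squared error with a Gaussian kernel of the error, which bounds the influence of impulsive errors. The sketch below contrasts it with MSE on a toy error vector; the kernel bandwidth and the errors are illustrative assumptions.

```python
# MCC vs MSE cost sketch; kernel bandwidth sigma is an assumed parameter.
import numpy as np

def mse(e):
    return np.mean(np.abs(e) ** 2)

def mcc_cost(e, sigma=1.0):
    # Correntropy is maximized, so the equivalent cost is 1 - kernel mean.
    return 1.0 - np.mean(np.exp(-np.abs(e) ** 2 / (2 * sigma**2)))

e_clean = np.array([0.1, -0.2, 0.15, 0.05])
e_outlier = np.array([0.1, -0.2, 0.15, 50.0])     # one impulsive error
print(mse(e_clean), mse(e_outlier))               # MSE explodes: ~0.019 -> ~625
print(mcc_cost(e_clean), mcc_cost(e_outlier))     # MCC barely moves: ~0.009 -> ~0.26
```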

A Review of the Applications of Quantum Machine Learning in Optical Communication Systems

  • paper_url: http://arxiv.org/abs/2309.05205
  • repo_url: None
  • paper_authors: Ark Modi, Alonso Viladomat Jasso, Roberto Ferrara, Christian Deppe, Janis Noetzel, Fred Fung, Maximilian Schaedler
  • for: This review surveys the applications of quantum and quantum-inspired machine learning algorithms in optical signal processing, including error correction for received noisy signals.
  • methods: The review covers several proposed quantum and quantum-inspired machine learning algorithms used to recover transmitted signals from received signals through various estimation procedures.
  • results: The review assesses the applicability of these algorithms to optical signal processing with current technology.
    Abstract In the context of optical signal processing, quantum and quantum-inspired machine learning algorithms have massive potential for deployment. One of the applications is in error correction protocols for the received noisy signals. In some scenarios, non-linear and unknown errors can lead to noise that bypasses linear error correction protocols that optical receivers generally implement. In those cases, machine learning techniques are used to recover the transmitted signal from the received signal through various estimation procedures. Since quantum machine learning algorithms promise advantage over classical algorithms, we expect that optical signal processing can benefit from these advantages. In this review, we survey several proposed quantum and quantum-inspired machine learning algorithms and their applicability with current technology to optical signal processing.