eess.SP - 2023-09-16

Optimal Photodetector Size for High-Speed Free-Space Optics Receivers

  • paper_url: http://arxiv.org/abs/2309.09090
  • repo_url: None
  • paper_authors: Muhammad Salman Bashir, Qasim Zeeshan Ahmed, Mohamed-Slim Alouini
  • for: Optimizing the photodetector area to achieve higher data rates in free-space optical receivers.
  • methods: Derives closed-form solutions for the photodetector area that maximizes channel capacity across a diverse set of communication scenarios.
  • results: The optimized area maximizes achievable data rates for a wide range of optical wireless communication systems, from long-range deep-space optical links to short-range indoor visible light communication systems.
    Abstract The selection of an optimal photodetector area is closely linked to the attainment of higher data rates in optical wireless communication receivers. If the photodetector area is too large, the channel capacity degrades due to lower modulation bandwidth of the detector. A smaller photodetector maximizes the bandwidth, but minimizes the captured signal power and the subsequent signal-to-noise ratio. Therein lies an opportunity in this trade-off to maximize the channel rate by choosing the optimal photodetector area. In this study, we have optimized the photodetector area in order to maximize the channel capacity of a free-space optical link for a diverse set of communication scenarios. We believe that the study in this paper in general -- and the closed-form solutions derived in this study in particular -- will be helpful to maximize achievable data rates of a wide gamut of optical wireless communication systems: from long range deep space optical links to short range indoor visible light communication systems.
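    As a rough illustration of the trade-off described above, the sketch below sweeps a hypothetical detector area under assumed scaling laws (bandwidth shrinking as 1/area, captured power growing with area, noise growing with bandwidth) and locates the capacity-maximizing area numerically; the constants and scalings are illustrative assumptions, not the paper's model or its closed-form solution.

```python
# Illustrative sketch (not the paper's model): how an optimal photodetector
# area can arise from the bandwidth-vs-SNR trade-off described in the abstract.
# All scaling laws and constants below are assumptions for demonstration only.
import numpy as np

def capacity(area_mm2, k_bw=1e9, p_sig=1e-6, n0=1e-16):
    """Shannon capacity B*log2(1+SNR) for a hypothetical detector model.

    Assumptions: modulation bandwidth shrinks as 1/area (RC-limited),
    captured signal power grows linearly with area, and noise power
    grows with the receiver bandwidth (thermal-noise-limited).
    """
    bandwidth = k_bw / area_mm2            # Hz, smaller detector -> higher bandwidth
    signal = p_sig * area_mm2              # W, larger detector -> more captured light
    noise = n0 * bandwidth                 # W, noise scales with bandwidth
    return bandwidth * np.log2(1.0 + signal / noise)

areas = np.linspace(0.01, 5.0, 1000)       # candidate areas in mm^2
rates = np.array([capacity(a) for a in areas])
best = areas[np.argmax(rates)]
print(f"capacity-maximizing area ~ {best:.3f} mm^2, rate ~ {rates.max()/1e9:.2f} Gbit/s")
```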

Split Federated Learning for 6G Enabled-Networks: Requirements, Challenges and Future Directions

  • paper_url: http://arxiv.org/abs/2309.09086
  • repo_url: None
  • paper_authors: Houda Hafi, Bouziane Brik, Pantelis A. Frangoudis, Adlen Ksentini
  • for: This paper is written to explore the potential of Split Federated Learning (SFL) in 6G wireless networks and its applications in various use cases.
  • methods: The paper uses a comprehensive study of SFL techniques and their deployment over 6G wireless networks, including an overview of three emerging collaborative learning paradigms and their comparison with existing approaches.
  • results: The paper highlights the need for SFL in 6G networks and its potential benefits in improving data privacy and reducing communication overhead, and identifies key technical challenges and future research directions in this area.
    Abstract Sixth-generation (6G) networks anticipate intelligently supporting a wide range of smart services and innovative applications. Such a context urges a heavy usage of Machine Learning (ML) techniques, particularly Deep Learning (DL), to foster innovation and ease the deployment of intelligent network functions/operations, which are able to fulfill the various requirements of the envisioned 6G services. Specifically, collaborative ML/DL consists of deploying a set of distributed agents that collaboratively train learning models without sharing their data, thus improving data privacy and reducing the time/communication overhead. This work provides a comprehensive study on how collaborative learning can be effectively deployed over 6G wireless networks. In particular, our study focuses on Split Federated Learning (SFL), a technique recently emerged promising better performance compared with existing collaborative learning approaches. We first provide an overview of three emerging collaborative learning paradigms, including federated learning, split learning, and split federated learning, as well as of 6G networks along with their main vision and timeline of key developments. We then highlight the need for split federated learning towards the upcoming 6G networks in every aspect, including 6G technologies (e.g., intelligent physical layer, intelligent edge computing, zero-touch network management, intelligent resource management) and 6G use cases (e.g., smart grid 2.0, Industry 5.0, connected and autonomous systems). Furthermore, we review existing datasets along with frameworks that can help in implementing SFL for 6G networks. We finally identify key technical challenges, open issues, and future research directions related to SFL-enabled 6G networks.
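    For readers unfamiliar with the mechanics of SFL, the minimal numpy sketch below illustrates the split training loop under simplifying assumptions (a linear client layer, a linear server layer, squared-error loss, synthetic data): each client sends only smashed activations to the server, receives gradients back, and the client-side weights are federated-averaged after every round. It is a toy illustration, not the paper's framework.

```python
# Minimal numpy sketch of Split Federated Learning (SFL) under simplifying
# assumptions: a linear "client" layer and a linear "server" layer trained with
# squared-error loss. Shapes, learning rate, and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_clients, d_in, d_cut, d_out, lr = 3, 8, 4, 1, 0.05

W_server = rng.normal(size=(d_cut, d_out)) * 0.1
W_clients = [rng.normal(size=(d_in, d_cut)) * 0.1 for _ in range(n_clients)]
data = [(rng.normal(size=(32, d_in)), rng.normal(size=(32, d_out))) for _ in range(n_clients)]

for rnd in range(20):                                  # communication rounds
    for k, (X, y) in enumerate(data):                  # each client in turn
        a = X @ W_clients[k]                           # client forward: smashed data sent up
        y_hat = a @ W_server                           # server completes the forward pass
        g_out = 2.0 * (y_hat - y) / len(X)             # dL/dy_hat for MSE loss
        g_Ws = a.T @ g_out                             # server-side gradient
        g_a = g_out @ W_server.T                       # gradient returned to the client
        W_server -= lr * g_Ws
        W_clients[k] -= lr * (X.T @ g_a)               # client-side update; raw data never leaves
    W_avg = sum(W_clients) / n_clients                 # FedAvg over client-side weights
    W_clients = [W_avg.copy() for _ in range(n_clients)]
```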

Blind Deconvolution of Sparse Graph Signals in the Presence of Perturbations

  • paper_url: http://arxiv.org/abs/2309.09063
  • repo_url: None
  • paper_authors: Victor M. Tenorio, Samuel Rey, Antonio G. Marques
  • for: Blind deconvolution of graph signals, recovering both the inputs (sources) and the filter that models the graph diffusion process.
  • methods: Proposes an optimization-based estimator that solves the blind identification problem in the vertex domain, estimates the inverse of the generating filter, and explicitly accounts for perturbations of the observed graph.
  • results: Preliminary numerical experiments demonstrate the effectiveness and potential of the proposed algorithm.
    Abstract Blind deconvolution over graphs involves using (observed) output graph signals to obtain both the inputs (sources) as well as the filter that drives (models) the graph diffusion process. This is an ill-posed problem that requires additional assumptions, such as the sources being sparse, to be solvable. This paper addresses the blind deconvolution problem in the presence of imperfect graph information, where the observed graph is a perturbed version of the (unknown) true graph. While not having perfect knowledge of the graph is arguably more the norm than the exception, the body of literature on this topic is relatively small. This is partly due to the fact that translating the uncertainty about the graph topology to standard graph signal processing tools (e.g. eigenvectors or polynomials of the graph) is a challenging endeavor. To address this limitation, we propose an optimization-based estimator that solves the blind identification in the vertex domain, aims at estimating the inverse of the generating filter, and accounts explicitly for additive graph perturbations. Preliminary numerical experiments showcase the effectiveness and potential of the proposed algorithm.
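    The following sketch conveys the flavor of vertex-domain blind deconvolution under strong simplifications: alternating least squares for the filter coefficients and soft-thresholded least squares for the sparse sources on a small random graph. The graph model, filter order, and thresholds are assumptions; this is not the estimator proposed in the paper, which additionally models graph perturbations.

```python
# Illustrative sketch (not the paper's estimator): alternating least squares with
# soft-thresholding for blind deconvolution of a sparse graph signal.
import numpy as np

rng = np.random.default_rng(1)
N, K, sparsity = 20, 3, 3                       # nodes, filter order, nonzero sources

A = (rng.random((N, N)) < 0.2).astype(float)    # random adjacency used as graph shift
S = np.triu(A, 1); S = S + S.T                  # symmetrize, no self loops
h_true = rng.normal(size=K)
H_true = sum(h_true[k] * np.linalg.matrix_power(S, k) for k in range(K))
x_true = np.zeros(N); x_true[rng.choice(N, sparsity, replace=False)] = rng.normal(size=sparsity)
y = H_true @ x_true                             # observed diffused signal

def filter_matrix(h):
    return sum(h[k] * np.linalg.matrix_power(S, k) for k in range(K))

# Alternating minimization: fix x -> solve for h (least squares);
# fix h -> solve for x with a soft threshold to promote sparsity.
x = rng.normal(size=N) * 0.1
for it in range(50):
    B = np.stack([np.linalg.matrix_power(S, k) @ x for k in range(K)], axis=1)  # y ~ B h
    h, *_ = np.linalg.lstsq(B, y, rcond=None)
    H = filter_matrix(h)
    x_ls, *_ = np.linalg.lstsq(H, y, rcond=None)
    x = np.sign(x_ls) * np.maximum(np.abs(x_ls) - 0.05, 0.0)                    # soft threshold

print("estimated support:", np.nonzero(np.abs(x) > 1e-3)[0], "true:", np.nonzero(x_true)[0])
```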

A Low-Latency FFT-IFFT Cascade Architecture

  • paper_url: http://arxiv.org/abs/2309.09035
  • repo_url: None
  • paper_authors: Keshab K. Parhi
  • for: Design of a partly-parallel cascaded FFT-IFFT architecture that requires no intermediate buffer.
  • methods: Uses folding to obtain partly-parallel FFT and IFFT architectures. While many cascades can be designed from different folding sets, for a given folded FFT architecture there is a unique folding set for the IFFT that eliminates the intermediate buffer; it is obtained by processing the FFT output as soon as possible (ASAP) in the folded IFFT.
  • results: Eliminating the intermediate buffer reduces latency and saves area, and the approach extends to interleaved processing of multi-channel time series. Compared with a design using identical folding sets, the proposed cascade saves about N/2 memory elements and N/4 clock cycles of latency; for the 2-interleaved FFT-IFFT cascade, the savings are N/2 memory elements and N/2 clock cycles, respectively.
    Abstract This paper addresses the design of a partly-parallel cascaded FFT-IFFT architecture that does not require any intermediate buffer. Folding can be used to design partly-parallel architectures for FFT and IFFT. While many cascaded FFT-IFFT architectures can be designed using various folding sets for the FFT and the IFFT, for a specified folded FFT architecture, there exists a unique folding set to design the IFFT architecture that does not require an intermediate buffer. Such a folding set is designed by processing the output of the FFT as soon as possible (ASAP) in the folded IFFT. Elimination of the intermediate buffer reduces latency and saves area. The proposed approach is also extended to interleaved processing of multi-channel time-series. The proposed FFT-IFFT cascade architecture saves about N/2 memory elements and N/4 clock cycles of latency compared to a design with identical folding sets. For the 2-interleaved FFT-IFFT cascade, the memory and latency savings are, respectively, N/2 units and N/2 clock cycles, compared to a design with identical folding sets.
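    Since the contribution is a folded hardware architecture, software cannot reproduce the buffer and latency savings directly; the snippet below is only a functional reference for what a cascaded FFT, per-bin processing, IFFT pipeline computes on two interleaved channels, with a toy frequency-domain operation chosen purely for illustration.

```python
# Functional reference only (the paper concerns a folded hardware architecture,
# not software): what a cascaded FFT -> per-bin processing -> IFFT computes for
# two interleaved time-series channels. The per-bin gain is an assumed toy operation.
import numpy as np

N = 16
x_interleaved = np.arange(2 * N, dtype=float)            # ch0/ch1 samples interleaved in time
channels = [x_interleaved[0::2], x_interleaved[1::2]]

per_bin_gain = np.ones(N); per_bin_gain[N // 2:] = 0.0    # toy frequency-domain operation

for c, x in enumerate(channels):
    X = np.fft.fft(x)                                     # forward transform
    y = np.fft.ifft(per_bin_gain * X)                     # inverse transform in the same cascade
    print(f"channel {c}: output power = {np.sum(np.abs(y) ** 2):.2f}")
```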

Localization with Noisy Android Raw GNSS Measurements

  • paper_url: http://arxiv.org/abs/2309.08936
  • repo_url: None
  • paper_authors: Xu Weng, Keck Voon Ling
  • for: Using Android raw Global Navigation Satellite System (GNSS) measurements for demanding localization tasks traditionally performed by specialized GNSS receivers.
  • methods: Applies Moving Horizon Estimation (MHE), the Extended Kalman Filter (EKF), and the Rauch-Tung-Striebel (RTS) smoother to suppress measurement noise.
  • results: Experiments show that the RTS smoother achieves the best localization performance, reducing horizontal positioning error by 76.4% and 46.5% in static and dynamic scenarios, respectively, compared with the baseline weighted least squares (WLS) method.
    Abstract Android raw Global Navigation Satellite System (GNSS) measurements are expected to bring power to take on demanding localization tasks that are traditionally performed by specialized GNSS receivers. The hardware constraints, however, make Android raw GNSS measurements much noisier than geodetic-quality ones. This study elucidates the principles of localization using Android raw GNSS measurements and leverages Moving Horizon Estimation (MHE), Extended Kalman Filter (EKF), and Rauch-Tung-Striebel (RTS) smoother for noise suppression. The experiment results showcase that RTS smoother achieves the best localization performance and yields a remarkable reduction of 76.4% and 46.5% in horizontal positioning error during static and dynamic scenarios compared to the baseline weighted least squares (WLS) method.
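    The sketch below illustrates the forward-filter-plus-RTS-smoother idea on a toy one-dimensional constant-velocity track with assumed noise covariances; it is not the paper's GNSS measurement model, but it shows why the backward RTS pass typically outperforms the forward filter alone.

```python
# Minimal sketch of forward Kalman filtering followed by Rauch-Tung-Striebel (RTS)
# smoothing on a toy 1-D constant-velocity track. Noise covariances are assumptions.
import numpy as np

rng = np.random.default_rng(2)
dt, T = 1.0, 50
F = np.array([[1.0, dt], [0.0, 1.0]])      # state: [position, velocity]
H = np.array([[1.0, 0.0]])                 # we only observe (noisy) position
Q = np.diag([0.01, 0.01])                  # process noise (assumed)
R = np.array([[25.0]])                     # measurement noise (assumed)

truth = np.zeros((T, 2)); truth[0] = [0.0, 1.0]
for t in range(1, T):
    truth[t] = F @ truth[t - 1]
z = truth[:, 0] + rng.normal(scale=np.sqrt(R[0, 0]), size=T)

# Forward pass: the EKF reduces to a linear Kalman filter for this toy model.
x_f, P_f, x_p, P_p = [], [], [], []
x, P = np.array([z[0], 0.0]), np.eye(2) * 10.0
for t in range(T):
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x = x_pred + K @ (z[t] - H @ x_pred)
    P = (np.eye(2) - K @ H) @ P_pred
    x_p.append(x_pred); P_p.append(P_pred); x_f.append(x); P_f.append(P)

# Backward pass: the RTS smoother refines each estimate using future measurements.
x_s, P_s = x_f[-1].copy(), P_f[-1].copy()
smoothed = [x_s]
for t in range(T - 2, -1, -1):
    C = P_f[t] @ F.T @ np.linalg.inv(P_p[t + 1])
    x_s = x_f[t] + C @ (x_s - x_p[t + 1])
    P_s = P_f[t] + C @ (P_s - P_p[t + 1]) @ C.T
    smoothed.append(x_s)
smoothed = np.array(smoothed[::-1])
print("filter RMSE  :", np.sqrt(np.mean((np.array(x_f)[:, 0] - truth[:, 0]) ** 2)))
print("smoother RMSE:", np.sqrt(np.mean((smoothed[:, 0] - truth[:, 0]) ** 2)))
```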

Scalable Multiuser Immersive Communications with Multi-numerology and Mini-slot

  • paper_url: http://arxiv.org/abs/2309.08906
  • repo_url: None
  • paper_authors: Ming Hu, Jiazhi Peng, Lifeng Wang, Kai-Kit Wong
  • for: Studying multiuser immersive communication networks in which different user equipment may demand different extended reality (XR) services.
  • methods: Proposes a scalable time-frequency resource allocation method based on multi-numerology and mini-slot, using a flexible resource block configuration and deep reinforcement learning to select the discrete parameters.
  • results: The method efficiently improves the total quality of experience (QoE) of multiuser immersive communication networks while satisfying individual users' QoE constraints.
    Abstract This paper studies multiuser immersive communications networks in which different user equipment may demand various extended reality (XR) services. In such heterogeneous networks, time-frequency resource allocation needs to be more adaptive since XR services are usually multi-modal and latency-sensitive. To this end, we develop a scalable time-frequency resource allocation method based on multi-numerology and mini-slot. To appropriately determine the discrete parameters of multi-numerology and mini-slot for multiuser immersive communications, the proposed method first presents a novel flexible time-frequency resource block configuration, then it leverages deep reinforcement learning to maximize the total quality-of-experience (QoE) under different users' QoE constraints. The results confirm the efficiency and scalability of the proposed time-frequency resource allocation method.
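    As background for the multi-numerology and mini-slot parameters being optimized, the short snippet below tabulates the standard 5G NR relationships (subcarrier spacing 15*2^mu kHz, slot duration 1/2^mu ms, 14 OFDM symbols per slot) for commonly cited mini-slot lengths; the specific configurations chosen by the paper's learning agent are not reproduced here.

```python
# Background sketch of the 5G NR quantities behind "multi-numerology and mini-slot":
# subcarrier spacing scales as 15*2^mu kHz and slot duration shrinks as 1/2^mu ms,
# while a mini-slot occupies only a few OFDM symbols of a 14-symbol slot.
# The candidate mini-slot lengths below follow the common {2, 4, 7}-symbol choices.
SYMBOLS_PER_SLOT = 14

for mu in range(0, 5):                      # numerology index mu = 0..4
    scs_khz = 15 * 2 ** mu                  # subcarrier spacing
    slot_ms = 1.0 / 2 ** mu                 # slot duration in milliseconds
    for mini_symbols in (2, 4, 7):
        mini_ms = slot_ms * mini_symbols / SYMBOLS_PER_SLOT
        print(f"mu={mu}: SCS={scs_khz:>3} kHz, slot={slot_ms:.4f} ms, "
              f"{mini_symbols}-symbol mini-slot ~ {mini_ms * 1000:.0f} us")
```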

CDDM: Channel Denoising Diffusion Models for Wireless Semantic Communications

  • paper_url: http://arxiv.org/abs/2309.08895
  • repo_url: None
  • paper_authors: Tong Wu, Zhiyong Chen, Dazhi He, Liang Qian, Yin Xu, Meixia Tao, Wenjun Zhang
  • for: Proposes a new physical-layer module, the channel denoising diffusion model (CDDM), for removing channel noise in semantic communication systems.
  • methods: Uses diffusion models (DM) with a forward diffusion process specially designed for the channel models, together with corresponding training and sampling algorithms.
  • results: Experiments show that CDDM reduces the conditional entropy of the received signal and effectively lowers the MSE with a small number of sampling steps; the joint CDDM and JSCC system outperforms the JSCC system alone and the traditional JPEG2000 with LDPC coding approach for image transmission.
    Abstract Diffusion models (DM) can gradually learn to remove noise, which have been widely used in artificial intelligence generated content (AIGC) in recent years. The property of DM for eliminating noise leads us to wonder whether DM can be applied to wireless communications to help the receiver mitigate the channel noise. To address this, we propose channel denoising diffusion models (CDDM) for semantic communications over wireless channels in this paper. CDDM can be applied as a new physical layer module after the channel equalization to learn the distribution of the channel input signal, and then utilizes this learned knowledge to remove the channel noise. We derive corresponding training and sampling algorithms of CDDM according to the forward diffusion process specially designed to adapt the channel models and theoretically prove that the well-trained CDDM can effectively reduce the conditional entropy of the received signal under small sampling steps. Moreover, we apply CDDM to a semantic communications system based on joint source-channel coding (JSCC) for image transmission. Extensive experimental results demonstrate that CDDM can further reduce the mean square error (MSE) after minimum mean square error (MMSE) equalizer, and the joint CDDM and JSCC system achieves better performance than the JSCC system and the traditional JPEG2000 with low-density parity-check (LDPC) code approach.
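    To make the connection between channel noise and diffusion concrete, the toy sketch below matches an assumed post-equalization noise level to a forward-diffusion timestep and denoises with the closed-form optimal noise predictor available for a unit-variance Gaussian source; CDDM instead learns this predictor and applies it over multiple reverse sampling steps, so this is a simplified stand-in rather than the paper's algorithm.

```python
# Sketch of the diffusion view CDDM builds on: channel noise on the received signal
# is matched to a forward-diffusion timestep and then removed by a noise predictor.
# The schedule, source model, and noise level are assumptions, not the paper's design.
import numpy as np

rng = np.random.default_rng(3)
T = 200
betas = np.linspace(1e-4, 0.02, T)            # assumed DDPM-style variance schedule
alpha_bar = np.cumprod(1.0 - betas)

x0 = rng.normal(size=10_000)                  # toy channel-input signal, unit power
sigma_ch = 1.0                                # equivalent channel-noise std after equalization

# Pick the timestep whose signal-to-noise ratio matches the channel:
# x_t = sqrt(abar_t) x0 + sqrt(1-abar_t) eps  has noise/signal variance (1-abar)/abar.
t = int(np.argmin(np.abs((1 - alpha_bar) / alpha_bar - sigma_ch ** 2)))
eps = rng.normal(size=x0.shape)
x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

# Denoising: estimate the noise and invert the forward step. For a unit-variance
# Gaussian source the MMSE noise predictor is eps_hat = sqrt(1-abar_t) * x_t;
# CDDM learns such a predictor and applies it over many small reverse steps.
eps_hat = np.sqrt(1 - alpha_bar[t]) * x_t
x0_hat = (x_t - np.sqrt(1 - alpha_bar[t]) * eps_hat) / np.sqrt(alpha_bar[t])

mse_raw = np.mean((x_t / np.sqrt(alpha_bar[t]) - x0) ** 2)   # undo scaling only
mse_dn = np.mean((x0_hat - x0) ** 2)
print(f"matched timestep t = {t}, MSE raw = {mse_raw:.3f}, MSE denoised = {mse_dn:.3f}")
```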

Demo: Intelligent Radar Detection in CBRS Band in the Colosseum Wireless Network Emulator

  • paper_url: http://arxiv.org/abs/2309.08861
  • repo_url: None
  • paper_authors: Davide Villa, Daniel Uvaydov, Leonardo Bonati, Pedram Johari, Josep Miquel Jornet, Tommaso Melodia
  • for: Studying the coexistence of commercial radar waveforms with a cellular network in the CBRS band.
  • methods: Uses Colosseum, the world's largest wireless network emulator with hardware in the loop, to build a high-fidelity spectrum-sharing scenario and to collect IQ samples for training a machine learning agent that runs at the base station.
  • results: Experiments show an average radar detection accuracy of 88% with an average detection time of 137 ms.
    Abstract The ever-growing number of wireless communication devices and technologies demands spectrum-sharing techniques. Effective coexistence management is crucial to avoid harmful interference, especially with critical systems like nautical and aerial radars in which incumbent radios operate mission-critical communication links. In this demo, we showcase a framework that leverages Colosseum, the world's largest wireless network emulator with hardware-in-the-loop, as a playground to study commercial radar waveforms coexisting with a cellular network in CBRS band in complex environments. We create an ad-hoc high-fidelity spectrum-sharing scenario for this purpose. We deploy a cellular network to collect IQ samples with the aim of training an ML agent that runs at the base station. The agent has the goal of detecting incumbent radar transmissions and vacating the cellular bandwidth to avoid interfering with the radar operations. Our experiment results show an average detection accuracy of 88%, with an average detection time of 137 ms.
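    As a classical point of comparison for the demo's learned detector, the toy sketch below flags incumbent transmissions in IQ windows by matched filtering against an assumed linear-FM chirp; the waveform parameters, SNR, and threshold are illustrative and unrelated to the Colosseum scenario.

```python
# Toy sketch of incumbent-radar detection from IQ samples via matched filtering
# against a known linear-FM chirp, a classical stand-in for the demo's trained
# ML agent. The waveform parameters, SNR, and threshold are assumptions.
import numpy as np

rng = np.random.default_rng(4)
fs, n = 1e6, 4096                                         # sample rate and window length (assumed)
t_ax = np.arange(n) / fs
chirp = np.exp(1j * np.pi * (2e5 / (n / fs)) * t_ax ** 2) # LFM chirp sweeping ~200 kHz

def window(has_radar, snr_db=-5.0):
    noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    if not has_radar:
        return noise
    amp = 10 ** (snr_db / 20)
    return noise + amp * chirp

def detect(iq, threshold=8.0):
    # Correlate with the known chirp; a strong peak relative to the noise floor
    # indicates an incumbent transmission in the window.
    mf = np.abs(np.correlate(iq, chirp, mode="same"))
    return mf.max() / np.median(mf) > threshold

trials = [(detect(window(label)), label) for label in rng.integers(0, 2, 200).astype(bool)]
acc = np.mean([pred == label for pred, label in trials])
print(f"toy detection accuracy over 200 windows: {acc:.2%}")
```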