results: By adopting multi-agent deep reinforcement learning and convex optimization algorithms, decentralized optimization of task offloading, power control, and computing resource allocation is achieved in UCMEC. Experimental results show that, compared with traditional cellular-based MEC, UCMEC can improve the uplink transmission rate by up to 343.56% and reduce the long-term average total delay by up to 45.57%.
Abstract
In traditional cellular-based mobile edge computing (MEC), users at the edge of a cell are prone to severe inter-cell interference and signal attenuation, leading to low throughput and even transmission interruptions. This edge effect severely obstructs the offloading of tasks to MEC servers. To address this issue, we propose user-centric mobile edge computing (UCMEC), a novel MEC architecture integrating user-centric transmission, which can ensure high-throughput and reliable communication for task offloading. We then formulate an optimization problem that jointly considers task offloading, power control, and computing resource allocation in UCMEC, aiming at the optimal performance in terms of long-term average total delay. To solve this intractable problem, we propose two decentralized joint optimization schemes based on multi-agent deep reinforcement learning (MADRL) and convex optimization, which consider both cooperation and non-cooperation among network nodes. Simulation results demonstrate that the proposed schemes in UCMEC can improve the uplink transmission rate by up to 343.56% and reduce the long-term average total delay by up to 45.57% compared to traditional cellular-based MEC.
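As a rough illustration of how such a decentralized scheme can be structured, the sketch below pairs independent per-user learners (a stateless tabular stand-in for the MADRL agents) with an equal compute split standing in for the convex resource-allocation subproblem. The environment model, action discretization, and every constant are illustrative assumptions, not the paper's system model.

```python
# Minimal sketch: decentralized task offloading + power control with
# independent learners and a placeholder compute-allocation step.
# All system parameters below are illustrative assumptions, not the
# paper's actual model or hyperparameters.
import numpy as np

rng = np.random.default_rng(0)
N_USERS, N_POWER, N_OFFLOAD = 4, 4, 2          # discretized action space
P_LEVELS = np.linspace(0.1, 1.0, N_POWER)      # W, assumed power levels
F_EDGE, F_LOCAL = 20e9, 1e9                    # edge/local CPU cycles per second
TASK_BITS, TASK_CYCLES, BW, N0 = 1e6, 1e9, 1e7, 1e-13

def total_delay(offload, power, gain, f_share):
    """Local computing delay if the task is kept, else uplink + edge delay."""
    if offload == 0:
        return TASK_CYCLES / F_LOCAL
    rate = BW * np.log2(1.0 + power * gain / (N0 * BW))   # Shannon rate
    return TASK_BITS / rate + TASK_CYCLES / (F_EDGE * f_share)

# Independent tabular Q-learning per user (stateless bandit variant).
Q = np.zeros((N_USERS, N_OFFLOAD * N_POWER))
for episode in range(5000):
    gains = rng.exponential(1e-7, N_USERS)     # assumed Rayleigh-like fading
    acts = [rng.integers(Q.shape[1]) if rng.random() < 0.1
            else int(np.argmax(Q[u])) for u in range(N_USERS)]
    offl = [a // N_POWER for a in acts]
    # Equal split stands in for the convex compute-allocation subproblem.
    n_off = max(1, sum(offl))
    for u, a in enumerate(acts):
        d = total_delay(offl[u], P_LEVELS[a % N_POWER], gains[u], 1.0 / n_off)
        Q[u, a] += 0.05 * (-d - Q[u, a])       # reward = negative total delay
print("learned greedy actions:", np.argmax(Q, axis=1))
```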
Novel KLD-based Resource Allocation for Integrated Sensing and Communication
results: The results show that the KLD metric can effectively optimise ISAC networks. Two optimisation methods are proposed, one using a genetic algorithm and the other using RIPM; both provide higher performance than uniform power and antenna allocation.
Abstract
In this paper, we introduce a novel resource allocation approach for integrated sensing and communication (ISAC) using the Kullback-Leibler divergence (KLD) metric. Specifically, we consider a base station with limited power and antenna resources serving a number of communication users while detecting multiple targets simultaneously. First, we analyse the KLD for two possible antenna deployments, the separated and shared deployments, then use the results to optimise the resources of the base station by minimising the average KLD for the network while satisfying a minimum predefined KLD requirement for each user equipment (UE) and target. To this end, the optimisation is formulated as a mixed integer nonlinear programming (MINLP) problem and solved using two approaches. In the first approach, we employ a genetic algorithm, which offers remarkable performance but demands substantial computational resources; in the second approach, we propose a rounding-based interior-point method (RIPM) that provides a more computationally efficient alternative at a negligible performance loss. The results demonstrate that the KLD metric can be an effective means of optimising ISAC networks, and that both optimisation solutions offer superior performance compared to uniform power and antenna allocation.
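The following toy sketch mirrors the shape of that formulation under an assumed Gaussian mean-shift model, in which each node's KLD reduces to the closed form g_i p_i / 2: minimise the average KLD subject to a per-node minimum-KLD floor and a total power budget. The integer antenna-assignment part of the MINLP (and hence the rounding step of RIPM) is omitted, and `gains`, `KLD_MIN`, and `P_TOT` are made-up values.

```python
# Toy KLD-constrained power allocation under an assumed Gaussian model.
import numpy as np
from scipy.optimize import minimize, LinearConstraint

def kld_gauss(mu0, s0, mu1, s1):
    """KL( N(mu0, s0^2) || N(mu1, s1^2) ), standard closed form."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

gains = np.array([1.0, 0.6, 0.3])   # assumed per-UE/target channel gains
KLD_MIN, P_TOT = 0.8, 10.0          # assumed KLD floor and power budget

# With unit-variance Gaussians and mean shift sqrt(g*p), the KLD is g*p/2
# (check: kld_gauss(np.sqrt(g*p), 1, 0, 1) == g*p/2), so each node's
# minimum-KLD constraint is linear in its power.
avg_kld = lambda p: np.mean(gains * p / 2.0)
cons = [LinearConstraint(np.diag(gains / 2.0), KLD_MIN, np.inf),  # per-node floor
        LinearConstraint(np.ones(3), 0.0, P_TOT)]                 # power budget
res = minimize(avg_kld, x0=np.full(3, P_TOT / 3), bounds=[(0, P_TOT)] * 3,
               constraints=cons, method="trust-constr")
print("power per node:", res.x.round(3))  # -> just enough to hit each KLD floor
```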
Self-Critical Alternate Learning based Semantic Broadcast Communication
results: Simulation results show that SemanticBC-SCAL achieves better performance at low SNRs, adapts to different BC channels, and goes beyond the conventional JSCC framework, providing a reliable and efficient multi-user broadcast communication system.
Abstract
Semantic communication (SemCom) has been deemed a promising communication paradigm for breaking through the bottleneck of traditional communications. Nonetheless, most existing works focus on point-to-point communication scenarios, and their extension to multi-user scenarios is not straightforward because directly scaling the joint source-channel coding (JSCC) framework to a multi-user communication system is cost-inefficient. Meanwhile, previous methods optimize the system via differentiable bit-level supervision, easily leading to a "semantic gap". Therefore, we delve into multi-user broadcast communication (BC) based on the universal transformer (UT) and propose a reinforcement learning (RL) based self-critical alternate learning (SCAL) algorithm, named SemanticBC-SCAL, which adapts to the different BC channels from one transmitter (TX) to multiple receivers (RXs) for the sentence generation task. In particular, to enable stable optimization via a non-differentiable semantic metric, we regard sentence similarity as a reward and formulate this learning process as an RL problem. Considering the huge decision space, we adopt lightweight but efficient self-critical supervision to guide the learning process. Meanwhile, an alternate learning mechanism is developed to provide cost-effective learning, in which the encoder and decoders are updated asynchronously with different iterations. Notably, the incorporation of RL makes SemanticBC-SCAL compliant with any user-defined semantic similarity metric, while alternate learning simultaneously addresses the channel non-differentiability issue. Besides, the convergence of SemanticBC-SCAL is theoretically established. Extensive simulations verify the effectiveness and superiority of our approach, especially at low SNRs.
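A minimal sketch of the self-critical update the abstract describes: the greedy decode serves as the baseline, so the advantage is the sentence-similarity gap between a sampled rollout and the greedy one, and no learned critic is needed. The model interface, token-overlap similarity, and tensor shapes below are illustrative assumptions, not the SemanticBC-SCAL implementation.

```python
# Minimal sketch of a self-critical (SCST-style) update with a
# non-differentiable sentence-similarity reward.
import torch
from torch.distributions import Categorical

def self_critical_loss(logits, reference, similarity):
    """logits: (T, V) decoder outputs; reference: (T,) target token ids."""
    dist = Categorical(logits=logits)
    sampled = dist.sample()                      # exploration rollout
    greedy = logits.argmax(dim=-1)               # test-time decode as baseline
    # Reward gap vs. the greedy baseline replaces a learned critic.
    advantage = similarity(sampled, reference) - similarity(greedy, reference)
    return -(advantage * dist.log_prob(sampled).sum())

# Toy usage with token overlap as a stand-in semantic similarity metric.
sim = lambda a, b: (a == b).float().mean()
logits = torch.randn(8, 100, requires_grad=True)
loss = self_critical_loss(logits, torch.randint(100, (8,)), sim)
loss.backward()
```

Because the reward enters only as a scalar weight on the log-probabilities, any user-defined similarity metric can be plugged in without needing to be differentiable, which is what makes the approach metric-agnostic.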
Integrating Communication, Sensing and Computing in Satellite Internet of Things: Challenges and Opportunities
results: The article summarizes state-of-the-art solutions in this area and discusses the main challenges of integrating communication, sensing, and computing functions in satellite IoT systems, which call for further research.
Abstract
Satellite Internet of Things (IoT) uses satellites as the access points for IoT devices to achieve the global coverage of future IoT systems, and is expected to support burgeoning IoT applications spanning communication, sensing, and computing. However, the complex and dynamic satellite environment and limited network resources raise new challenges in the design of satellite IoT systems. In this article, we focus on the joint design of communication, sensing, and computing to improve the performance of satellite IoT, which is quite different from the case of terrestrial IoT systems. We describe how the integration of the three functions can enhance system capabilities, and summarize state-of-the-art solutions. Furthermore, we discuss the main challenges of integrating communication, sensing, and computing in satellite IoT that remain to be solved with pressing interest.
Joint Beam Scheduling and Power Optimization for Beam Hopping LEO Satellite Systems
results: Compared with other methods, the proposed algorithm has lower time complexity and a fast convergence rate, while achieving better performance in both throughput and fairness.
Abstract
Low earth orbit (LEO) satellite communications can provide ubiquitous and reliable services, making them an essential part of the Internet of Everything network. Beam hopping (BH) is an emerging technology for effectively addressing the low resource utilization caused by the non-uniform spatio-temporal distribution of traffic demands. However, how to allocate multi-dimensional resources in a timely and efficient way in highly dynamic LEO satellite systems remains a challenge. This paper proposes a joint beam scheduling and power optimization beam hopping (JBSPO-BH) algorithm that accounts for differences in the geographic distribution of sink nodes. The JBSPO-BH algorithm decouples the original problem into two sub-problems. The beam scheduling problem is modelled as a potential game, and the Nash equilibrium (NE) point is obtained as the beam scheduling strategy. Moreover, the penalty function interior point method is applied to optimize the power allocation. Simulation results show that the JBSPO-BH algorithm has low time complexity and fast convergence, and achieves better performance in both throughput and fairness. Compared with greedy-based BH, greedy-based BH with power optimization, round-robin BH, Max-SINR BH and a satellite resource allocation algorithm, the throughput of the proposed algorithm is improved by 44.99%, 20.79%, 156.06%, 15.39% and 8.17%, respectively.
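The beam-scheduling subproblem can be pictured with best-response dynamics, which in a finite potential game terminate at a pure Nash equilibrium. The congestion-style utility and all constants below are illustrative assumptions, and the penalty-function interior-point power step is omitted.

```python
# Minimal sketch of best-response dynamics for beam scheduling as a
# potential game: each beam greedily picks the cell maximising its own
# utility until no beam wants to deviate (a pure Nash equilibrium).
import numpy as np

rng = np.random.default_rng(1)
N_BEAMS, N_CELLS = 4, 10
demand = rng.uniform(1.0, 5.0, N_CELLS)          # assumed traffic demand per cell
sched = rng.integers(N_CELLS, size=N_BEAMS)      # initial beam -> cell mapping

def utility(beam, cell, sched):
    # Served demand is shared among co-scheduled beams, with an
    # interference-style penalty when beams collide on one cell; payoffs
    # depending only on (cell, congestion) make this a potential game.
    co = sum(1 for b, c in enumerate(sched) if c == cell and b != beam)
    return demand[cell] / (1 + co) - 0.5 * co

changed = True
while changed:                                   # converges: finite potential game
    changed = False
    for b in range(N_BEAMS):
        best = max(range(N_CELLS), key=lambda c: utility(b, c, sched))
        if utility(b, best, sched) > utility(b, sched[b], sched) + 1e-9:
            sched[b] = best
            changed = True
print("NE beam schedule:", sched)
```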
Stochastic Resource Allocation via Dual Tail Waterfilling
paper_authors: Gokberk Yaylali, Dionysios S. Kalogerias
for: This paper aims to optimize the resource allocation in wireless systems by addressing the challenges of channel fading.
methods: The paper uses a risk-aware formulation of the classical stochastic resource allocation problem, leveraging the Conditional Value-at-Risk (CV@R) as a measure of risk. The optimal risk-aware resource allocation policy and the corresponding user rate functions are derived using closed-form expressions.
results: The proposed risk-aware resource allocation policy achieves more rapid and assured convergence of the dual variables compared to the primal-dual tail waterfilling algorithm. The effectiveness of the proposed scheme is confirmed through detailed numerical simulations.
Abstract
Optimal resource allocation in wireless systems still stands as a rather challenging task due to the inherent statistical characteristics of channel fading. On the one hand, minimax/outage-optimal policies are often overconservative and analytically intractable, despite advertising maximally reliable system performance. On the other hand, ergodic-optimal resource allocation policies are often susceptible to the statistical dispersion of heavy-tailed fading channels, leading to relatively frequent drastic performance drops. We investigate a new risk-aware formulation of the classical stochastic resource allocation problem for point-to-point power-constrained communication networks over fading channels with no cross-interference, by leveraging the Conditional Value-at-Risk (CV@R) as a coherent measure of risk. We rigorously derive closed-form expressions for the CV@R-optimal risk-aware resource allocation policy, as well as the optimal associated quantiles of the corresponding user rate functions by capitalizing on the underlying fading distribution, parameterized by dual variables. We then develop a purely dual tail waterfilling scheme, achieving significantly more rapid and assured convergence of dual variables, as compared with the primal-dual tail waterfilling algorithm, recently proposed in the literature. The effectiveness of the proposed scheme is also readily confirmed via detailed numerical simulations.
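For orientation, the sketch below shows the classical dual-domain waterfilling core that such schemes build on: bisect the dual variable (the water level) until the power budget is met. The CV@R tail quantiles and risk-aware rate functions of the paper are omitted, and the channel gains and budget are made-up values.

```python
# Minimal sketch of dual-domain waterfilling: bisect the dual variable
# so the power budget is tight, using the classic p_i = max(0, 1/lam - 1/h_i).
import numpy as np

h = np.array([2.0, 1.0, 0.5, 0.1])     # assumed channel gains
P_TOT = 4.0                            # assumed total power budget

def water_power(lam):
    return np.maximum(0.0, 1.0 / lam - 1.0 / h)

lo, hi = 1e-6, 1e3                     # bracket for the dual variable
for _ in range(100):                   # bisection on the budget constraint
    lam = 0.5 * (lo + hi)
    if water_power(lam).sum() > P_TOT:
        lo = lam                       # too much power -> raise the price
    else:
        hi = lam
p = water_power(lam)
print("powers:", p.round(3), "sum:", p.sum().round(3))
```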