eess.SP - 2023-09-28

Contrast detection is enhanced by deterministic, high-frequency transcranial alternating current stimulation with triangle and sine waveform

  • paper_url: http://arxiv.org/abs/2310.03763
  • repo_url: None
  • paper_authors: Weronika Potok, Onno van der Groen, Sahana Sivachelvam, Marc Bächinger, Flavio Fröhlich, Laszlo B. Kish, Nicole Wenderoth
  • for: This paper investigates stochastic resonance (SR) in the nervous system. SR is a phenomenon in which adding random noise to a nonlinear system enhances signal transmission.
  • methods: Transcranial random noise stimulation (tRNS) and high-frequency transcranial alternating current stimulation (tACS) were applied over visual cortex while participants performed a visual contrast detection task.
  • results: Both tACS and tRNS lowered the visual contrast detection threshold at optimal stimulation intensities, and the two were equally effective, indicating that resonance-like enhancement can be induced by deterministic signals rather than only by random noise.
    Abstract Stochastic Resonance (SR) describes a phenomenon where an additive noise (stochastic carrier-wave) enhances the signal transmission in a nonlinear system. In the nervous system, nonlinear properties are present from the level of single ion channels all the way to perception and appear to support the emergence of SR. For example, SR has been repeatedly demonstrated for visual detection tasks, also by adding noise directly to cortical areas via transcranial random noise stimulation (tRNS). When dealing with nonlinear physical systems, it has been suggested that resonance can be induced not only by adding stochastic signals (i.e., noise) but also by adding a large class of signals that are not stochastic in nature which cause "deterministic amplitude resonance" (DAR). Here we mathematically show that high-frequency, deterministic, periodic signals can yield resonance-like effects with linear transfer and infinite signal-to-noise ratio at the output. We tested this prediction empirically and investigated whether non-random, high-frequency, transcranial alternating current stimulation applied to visual cortex could induce resonance-like effects and enhance performance of a visual detection task. We demonstrated in 28 participants that applying 80 Hz triangular-waves or sine-waves with tACS reduced visual contrast detection threshold for optimal brain stimulation intensities. The influence of tACS on contrast sensitivity was equally effective to tRNS-induced modulation, demonstrating that both tACS and tRNS can reduce contrast detection thresholds. Our findings suggest that a resonance-like mechanism can also emerge when deterministic electrical waveforms are applied via tACS.
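    A toy numerical illustration (not from the paper; the hard-threshold detector and the chosen amplitudes are assumptions) of the resonance-like effect described above: a weak 2 Hz stimulus sits below a hard detection threshold, and an 80 Hz sine or triangle carrier of moderate amplitude makes the threshold crossings track the stimulus, while too small a carrier yields no crossings and too large a carrier washes the stimulus-driven modulation out, mirroring the optimal-intensity behaviour reported for tACS.

```python
import numpy as np
from scipy.signal import sawtooth

fs = 10_000                                    # sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
stimulus = 0.4 * np.sin(2 * np.pi * 2 * t)     # weak 2 Hz "stimulus", below threshold
threshold = 1.0
win = int(fs / 80)                             # smoothing window: one carrier period

def readout(carrier_amp, waveform):
    """Peak-to-peak modulation depth of the smoothed threshold-detector output."""
    if waveform == "sine":
        carrier = carrier_amp * np.sin(2 * np.pi * 80 * t)
    else:                                      # 80 Hz triangular wave
        carrier = carrier_amp * sawtooth(2 * np.pi * 80 * t, width=0.5)
    crossings = (stimulus + carrier > threshold).astype(float)
    smoothed = np.convolve(crossings, np.ones(win) / win, mode="same")[win:-win]
    return smoothed.max() - smoothed.min()

for amp in (0.2, 0.8, 1.5, 4.0):
    print(f"carrier amplitude {amp:.1f}: "
          f"sine depth = {readout(amp, 'sine'):.2f}, "
          f"triangle depth = {readout(amp, 'triangle'):.2f}")
```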

T1/T2 relaxation temporal modelling from accelerated acquisitions using a Latent Transformer

  • paper_url: http://arxiv.org/abs/2309.16853
  • repo_url: None
  • paper_authors: Fanwen Wang, Michael Tanzer, Mengyun Qiao, Wenjia Bai, Daniel Rueckert, Guang Yang, Sonia Nielles-Vallespin
  • for: This paper aims to accelerate quantitative cardiac magnetic resonance T1 and T2 mapping so that it can be adopted more widely in clinical practice.
  • methods: A deep learning method with a Latent Transformer module models the relationships between parameterised time frames, improving reconstruction from undersampled data.
  • results: By explicitly incorporating time dynamics, the model recovers T1 and T2 maps with higher fidelity and fewer artefacts, demonstrating the importance of temporal modelling in quantitative MRI.
    Abstract Quantitative cardiac magnetic resonance T1 and T2 mapping enable myocardial tissue characterisation but the lengthy scan times restrict their widespread clinical application. We propose a deep learning method that incorporates a time dependency Latent Transformer module to model relationships between parameterised time frames for improved reconstruction from undersampled data. The module, implemented as a multi-resolution sequence-to-sequence transformer, is integrated into an encoder-decoder architecture to leverage the inherent temporal correlations in relaxation processes. The presented results for accelerated T1 and T2 mapping show the model recovers maps with higher fidelity by explicit incorporation of time dynamics. This work demonstrates the importance of temporal modelling for artifact-free reconstruction in quantitative MRI.
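    A hedged sketch of the kind of module the abstract describes (layer sizes, tensor shapes, and the single-resolution attention are assumptions; the paper's Latent Transformer is a multi-resolution sequence-to-sequence transformer): self-attention is applied along the time-frame axis of the latent features inside a small encoder-decoder, so each spatial location can exchange information across the relaxation time frames.

```python
import torch
import torch.nn as nn

class LatentTemporalTransformer(nn.Module):
    """Self-attention over the time-frame axis of a latent feature map."""
    def __init__(self, channels: int, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=n_heads, dim_feedforward=2 * channels,
            batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, frames, channels, H, W) latent features per time frame
        b, f, c, h, w = z.shape
        # treat every spatial location as an independent sequence over frames
        seq = z.permute(0, 3, 4, 1, 2).reshape(b * h * w, f, c)
        seq = self.temporal(seq)
        return seq.reshape(b, h, w, f, c).permute(0, 3, 4, 1, 2)

class Recon(nn.Module):
    """Minimal encoder -> latent temporal transformer -> decoder."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(channels, channels, 3, stride=2, padding=1))
        self.lat = LatentTemporalTransformer(channels)
        self.dec = nn.Sequential(nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1),
                                 nn.ReLU(), nn.Conv2d(channels, 2, 3, padding=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, 2, H, W) undersampled frames (real/imag channels)
        b, f, c, h, w = x.shape
        z = self.enc(x.reshape(b * f, c, h, w))
        z = z.reshape(b, f, *z.shape[1:])
        z = self.lat(z)                          # mix information across time frames
        return self.dec(z.reshape(b * f, *z.shape[2:])).reshape(b, f, c, h, w)

frames = torch.randn(1, 9, 2, 64, 64)            # e.g. 9 T1-mapping time frames
print(Recon()(frames).shape)                     # torch.Size([1, 9, 2, 64, 64])
```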

Business Model Canvas for Micro Operators in 5G Coopetitive Ecosystem

  • paper_url: http://arxiv.org/abs/2309.16845
  • repo_url: None
  • paper_authors: Javane Rostampoor, Roghayeh Joda, Mohammad Dindoost
  • for: This paper aims to provide a business model framework for 5G micro operators that helps new 5G businesses offer value to customers.
  • methods: The study applies the Business Model Canvas (BMC) concept to analyse the business model of micro operators in a 5G coopetitive ecosystem.
  • results: The resulting BMC framework describes how micro operators can create value with local 5G services in the ultra-dense networks introduced to improve coverage and capacity.
    Abstract In order to address the need for more capacity and coverage in the 5th generation (5G) of wireless networks, ultra-dense wireless networks are introduced which mainly consist of indoor small cells. This new architecture has paved the way for the advent of a new concept called Micro Operator. A micro operator is an entity that provides connections and local 5G services to the customers and relies on local frequency resources. We discuss business models of micro operators in a 5G coopetitive environment and develop a framework to indicate the business model canvas (BMC) of this new concept. Providing BMC for new businesses is a strategic approach to offer value to customers. In this research study, BMC and its elements are introduced and explained for 5G micro operators.

Wi-Fi 8: Embracing the Millimeter-Wave Era

  • paper_url: http://arxiv.org/abs/2309.16813
  • repo_url: None
  • paper_authors: Xiaoqian Liu, Tingwei Chen, Yuhan Dong, Zhi Mao, Ming Gan, Xun Yang, Jianmin Lu
  • for: This paper looks ahead to Wi-Fi 8, with a focus on the adoption of millimeter-wave technology.
  • methods: The paper examines millimeter-wave operation and other potentially feasible features, and uses simulations to provide a comprehensive perspective on the future of Wi-Fi 8.
  • results: Simulation results show that millimeter-wave operation can deliver significant performance gains even in the presence of hardware impairments.
    Abstract With the increasing demands in communication, Wi-Fi technology is advancing towards its next generation. Building on the foundation of Wi-Fi 7, millimeter-wave technology is anticipated to converge with Wi-Fi 8 in the near future. In this paper, we look into the millimeter-wave technology and other potential feasible features, providing a comprehensive perspective on the future of Wi-Fi 8. Our simulation results demonstrate that significant performance gains can be achieved, even in the presence of hardware impairments.
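    Back-of-the-envelope arithmetic only (not the paper's simulation; the EVM floor, SNRs, and bandwidths are assumed numbers): folding hardware impairments into the link estimate as an error-vector-magnitude noise floor still leaves a large throughput advantage for a wide millimeter-wave channel over a sub-7 GHz channel.

```python
import numpy as np

def throughput_gbps(bandwidth_hz, snr_db, evm_db):
    snr = 10 ** (snr_db / 10)
    evm2 = 10 ** (evm_db / 10)           # distortion power relative to the signal
    snr_eff = 1.0 / (1.0 / snr + evm2)   # impairment-limited effective SNR
    return bandwidth_hz * np.log2(1 + snr_eff) / 1e9

print("sub-7 GHz, 160 MHz :", round(throughput_gbps(160e6, 30, -25), 2), "Gb/s")
print("mmWave, 2.16 GHz   :", round(throughput_gbps(2.16e9, 15, -25), 2), "Gb/s")
```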

  • paper_url: http://arxiv.org/abs/2309.16628
  • repo_url: None
  • paper_authors: Charles E. Thornton, Evan Allen, Evar Jones, Daniel Jakubisin, Fred Templin, Lingjia Liu
  • for: This paper investigates how 5G-and-beyond sidelink (SL) communication can support multi-hop tactical networks.
  • methods: The paper first gives a technical and historical overview of 3GPP SL standardization activities and then considers applications to current problems in tactical networking. It reviews multi-hop routing techniques expected to be useful for SL-enabled multi-hop tactical networks and examines open-source tools for network emulation.
  • results: The paper discusses research directions for 5G SL-enabled tactical communications, including the integration of RF sensing and positioning as well as emerging machine learning tools such as federated and decentralized learning for resource allocation and routing. It concludes by summarizing recent developments in the 5G SL literature and providing guidelines for future research.
    Abstract This work investigates the potential of 5G and beyond sidelink (SL) communication to support multi-hop tactical networks. We first provide a technical and historical overview of 3GPP SL standardization activities, and then consider applications to current problems of interest in tactical networking. We consider a number of multi-hop routing techniques which are expected to be of interest for SL-enabled multi-hop tactical networking and examine open-source tools useful for network emulation. Finally, we discuss relevant research directions which may be of interest for 5G SL-enabled tactical communications, namely the integration of RF sensing and positioning, as well as emerging machine learning tools such as federated and decentralized learning, which may be of great interest for resource allocation and routing problems that arise in tactical applications. We conclude by summarizing recent developments in the 5G SL literature and provide guidelines for future research.

HyperLISTA-ABT: An Ultra-light Unfolded Network for Accurate Multi-component Differential Tomographic SAR Inversion

  • paper_url: http://arxiv.org/abs/2309.16468
  • repo_url: None
  • paper_authors: Kun Qian, Yuanyuan Wang, Peter Jung, Yilei Shi, Xiao Xiang Zhu
  • for: Improve the accuracy and efficiency of deep-learning-based four-dimensional (4D) differential tomographic SAR inversion.
  • methods: The paper proposes HyperLISTA-ABT, an efficient unfolded algorithm whose network weights are determined analytically from a minimum-coherence criterion, leaving only three hyperparameters, and which replaces global thresholding with an adaptive blockwise thresholding scheme.
  • results: Simulations and real-data experiments show that HyperLISTA-ABT reconstructs high-quality 4D point clouds over large areas with affordable computational resources and in a short time, with no significant performance loss compared to state-of-the-art methods.
    Abstract Deep neural networks based on unrolled iterative algorithms have achieved remarkable success in sparse reconstruction applications, such as synthetic aperture radar (SAR) tomographic inversion (TomoSAR). However, the currently available deep learning-based TomoSAR algorithms are limited to three-dimensional (3D) reconstruction. The extension of deep learning-based algorithms to four-dimensional (4D) imaging, i.e., differential TomoSAR (D-TomoSAR) applications, is impeded mainly due to the high-dimensional weight matrices required by the network designed for D-TomoSAR inversion, which typically contain millions of freely trainable parameters. Learning such huge number of weights requires an enormous number of training samples, resulting in a large memory burden and excessive time consumption. To tackle this issue, we propose an efficient and accurate algorithm called HyperLISTA-ABT. The weights in HyperLISTA-ABT are determined in an analytical way according to a minimum coherence criterion, trimming the model down to an ultra-light one with only three hyperparameters. Additionally, HyperLISTA-ABT improves the global thresholding by utilizing an adaptive blockwise thresholding scheme, which applies block-coordinate techniques and conducts thresholding in local blocks, so that weak expressions and local features can be retained in the shrinkage step layer by layer. Simulations were performed and demonstrated the effectiveness of our approach, showing that HyperLISTA-ABT achieves superior computational efficiency and with no significant performance degradation compared to state-of-the-art methods. Real data experiments showed that a high-quality 4D point cloud could be reconstructed over a large area by the proposed HyperLISTA-ABT with affordable computational resources and in a fast time.
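    A hedged sketch of the blockwise-thresholding idea (not the authors' implementation; the dictionary, block size, and the kappa·max|block| threshold rule are illustrative assumptions, and the analytically derived weights of HyperLISTA are omitted): an ISTA-style unfolded iteration in which each layer soft-thresholds block by block, so a weak scatterer near a strong one can survive the shrinkage.

```python
import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def blockwise_ista(y, A, n_layers=200, block=8, kappa=0.02):
    """Recover sparse gamma from y = A @ gamma + noise with per-block thresholds."""
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    gamma = np.zeros(n)
    for _ in range(n_layers):
        r = gamma + A.T @ (y - A @ gamma) / L     # gradient step
        for start in range(0, n, block):          # blockwise adaptive thresholding
            blk = r[start:start + block]
            tau = kappa * np.max(np.abs(blk))     # threshold scaled to the local peak
            gamma[start:start + block] = soft(blk, tau)
    return gamma

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)) / np.sqrt(64)
gamma_true = np.zeros(128)
gamma_true[[10, 70, 71]] = [1.0, 0.5, -0.3]       # one strong and two weaker nearby scatterers
y = A @ gamma_true + 0.01 * rng.standard_normal(64)

est = blockwise_ista(y, A)
print("estimate on the true support :", np.round(est[[10, 70, 71]], 2))
print("largest estimate off support :", round(float(np.max(np.abs(np.delete(est, [10, 70, 71])))), 3))
```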

Feed-forward and recurrent inhibition for compressing and classifying high dynamic range biosignals in spiking neural network architectures

  • paper_url: http://arxiv.org/abs/2309.16425
  • repo_url: None
  • paper_authors: Rachel Sava, Elisa Donati, Giacomo Indiveri
  • for: This paper aims to address the challenge of compressing high-dynamic range biosignals in spiking neural network (SNN) architectures.
  • methods: The authors propose a biologically-inspired strategy that utilizes three adaptation mechanisms found in the brain: spike-frequency adaptation, feed-forward inhibitory connections, and Excitatory-Inhibitory (E-I) balance.
  • results: The authors validate the approach in silico using a simple network applied to a gesture classification task from surface EMG recordings.
    Abstract Neuromorphic processors that implement Spiking Neural Networks (SNNs) using mixed-signal analog/digital circuits represent a promising technology for closed-loop real-time processing of biosignals. As in biology, to minimize power consumption, the silicon neurons' circuits are configured to fire with a limited dynamic range and with maximum firing rates restricted to a few tens or hundreds of Herz. However, biosignals can have a very large dynamic range, so encoding them into spikes without saturating the neuron outputs represents an open challenge. In this work, we present a biologically-inspired strategy for compressing this high-dynamic range in SNN architectures, using three adaptation mechanisms ubiquitous in the brain: spike-frequency adaptation at the single neuron level, feed-forward inhibitory connections from neurons belonging to the input layer, and Excitatory-Inhibitory (E-I) balance via recurrent inhibition among neurons in the output layer. We apply this strategy to input biosignals encoded using both an asynchronous delta modulation method and an energy-based pulse-frequency modulation method. We validate this approach in silico, simulating a simple network applied to a gesture classification task from surface EMG recordings.
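    A minimal sketch of the first of the three mechanisms, spike-frequency adaptation (parameters are assumptions, and the feed-forward inhibition and recurrent E-I balance are omitted; this is not the paper's analog circuit model): a spike-triggered adaptation current subtracts recent activity from the drive, so inputs spanning well over an order of magnitude land within the tens-to-hundreds of Hz band a silicon neuron can express, instead of scaling up to thousands of Hz.

```python
import numpy as np

def lif_adaptive_rate(i_in, t_sim=3.0, dt=1e-4,
                      tau_m=0.02, tau_a=1.0, v_th=1.0, b=30.0):
    """Adapted firing rate (measured over the last second) of a LIF neuron
    with a spike-triggered adaptation current."""
    v, a = 0.0, 0.0
    late_spikes = 0
    for k in range(int(t_sim / dt)):
        v += dt * (-v / tau_m + i_in - a)   # leaky membrane driven by input minus adaptation
        a += dt * (-a / tau_a)              # adaptation current decays slowly
        if v >= v_th:
            v = 0.0
            a += b                          # each spike strengthens adaptation
            if k * dt > t_sim - 1.0:
                late_spikes += 1
    return late_spikes / 1.0

for i_in in (200, 600, 2000, 6000):         # ~1.5 decades of input amplitude
    print(f"input {i_in:5d} -> adapted rate {lif_adaptive_rate(i_in):6.1f} Hz")
```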

A Universal Framework for Holographic MIMO Sensing

  • paper_url: http://arxiv.org/abs/2309.16389
  • repo_url: None
  • paper_authors: Charles Vanwynsberghe, Jiguang He, Mérouane Debbah
  • for: This paper addresses the identification of the sensing space of continuous antennas with arbitrary shapes.
  • methods: It introduces a universal framework that determines the antenna sensing space regardless of shape, based on the spatial and spectral concentration of the sampled field, formulated as a generic Slepian concentration eigenvalue problem.
  • results: Results show that the approach precisely estimates the degrees of freedom of well-known geometries and extends flexibly to realistic conformal antennas.
    Abstract This paper addresses the sensing space identification of arbitrarily shaped continuous antennas. In the context of holographic multiple-input multiple-output (MIMO), a.k.a. large intelligent surfaces, these antennas offer benefits such as super-directivity and near-field operability. The sensing space reveals two key aspects: (a) its dimension specifies the maximally achievable spatial degrees of freedom (DoFs), and (b) the finite basis spanning this space accurately describes the sampled field. Earlier studies focus on specific geometries, bringing forth the need for extendable analysis to real-world conformal antennas. Thus, we introduce a universal framework to determine the antenna sensing space, regardless of its shape. The findings underscore both spatial and spectral concentration of sampled fields to define a generic eigenvalue problem of Slepian concentration. Results show that this approach precisely estimates the DoFs of well-known geometries, and verify its flexible extension to conformal antennas.
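    A classical special case for intuition (a 1-D line aperture, not the paper's general framework for arbitrary shapes): for fields band-limited to |k| ≤ k0 = 2π/λ, the Slepian concentration operator has kernel sin(k0(x−x'))/(π(x−x')), and counting its eigenvalues close to 1 recovers the familiar ≈ 2L/λ spatial degrees of freedom.

```python
import numpy as np

wavelength = 0.1                                   # metres (roughly 3 GHz)
L = 0.5                                            # aperture length in metres
k0 = 2 * np.pi / wavelength

n = 400                                            # discretisation points along the aperture
x = np.linspace(0, L, n)
dx = x[1] - x[0]
diff = x[:, None] - x[None, :]
kernel = np.sinc(k0 * diff / np.pi) * k0 / np.pi   # sin(k0 d)/(pi d), including d = 0
eigvals = np.linalg.eigvalsh(kernel * dx)          # discretised concentration operator

dof = int(np.sum(eigvals > 0.5))                   # count the well-concentrated modes
print(f"estimated spatial DoFs: {dof}, 2L/lambda = {2 * L / wavelength:.0f}")
```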

Convex Estimation of Sparse-Smooth Power Spectral Densities from Mixtures of Realizations with Application to Weather Radar

  • paper_url: http://arxiv.org/abs/2309.16215
  • repo_url: None
  • paper_authors: Hiroki Kuroda, Daichi Kitahara, Eiichi Yoshikawa, Hiroshi Kikuchi, Tomoo Ushio
  • for: Estimating sparse and smooth power spectral densities (PSDs) of complex-valued random processes from mixtures of realizations.
  • methods: A convex optimization model jointly estimates the complex-valued frequency components and the nonnegative PSDs, which are regularized to be sparse and sparse-smooth, respectively.
  • results: The method improves estimation accuracy compared to existing sparse estimation models, demonstrated on phased array weather radar data.
    Abstract In this paper, we propose a convex optimization-based estimation of sparse and smooth power spectral densities (PSDs) of complex-valued random processes from mixtures of realizations. While the PSDs are related to the magnitude of the frequency components of the realizations, it has been a major challenge to exploit the smoothness of the PSDs because penalizing the difference of the magnitude of the frequency components results in a nonconvex optimization problem that is difficult to solve. To address this challenge, we design the proposed model that jointly estimates the complex-valued frequency components and the nonnegative PSDs, which are respectively regularized to be sparse and sparse-smooth. By penalizing the difference of the nonnegative variable that estimates the PSDs, the proposed model can enhance the smoothness of the PSDs via convex optimization. Numerical experiments on the phased array weather radar, an advanced weather radar system, demonstrate that the proposed model achieves superior estimation accuracy compared to existing sparse estimation models, regardless of whether they are combined with a smoothing technique as a post-processing step or not.
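    A simplified convex sketch using CVXPY (not the paper's full joint model, which also estimates the complex-valued frequency components; the penalty weights and the synthetic spectrum are assumptions): a nonnegative PSD estimate is regularized to be sparse and piecewise smooth by penalizing its l1 norm and the l1 norm of its spectral differences.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n = 128
true_psd = np.zeros(n)
true_psd[40:50] = 4.0 * np.hanning(10)             # a smooth spectral lobe
true_psd[90] = 3.0                                 # an isolated line component

# crude data: average a few exponentially distributed periodogram realizations
y = np.mean([(true_psd + 0.2) * rng.exponential(1.0, n) for _ in range(8)], axis=0)

p = cp.Variable(n, nonneg=True)
lam_sparse, lam_smooth = 0.5, 0.5
objective = (cp.sum_squares(y - p)
             + lam_sparse * cp.norm1(p)            # sparsity of the PSD
             + lam_smooth * cp.norm1(cp.diff(p)))  # smoothness via small spectral differences
cp.Problem(cp.Minimize(objective)).solve()

est = p.value
print("estimate at the lobe centre and the line:", np.round(est[[45, 90]], 2))
print("median estimate on the empty bins       :", round(float(np.median(est[true_psd == 0])), 3))
```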

Hybrid Digital-Wave Domain Channel Estimator for Stacked Intelligent Metasurface Enabled Multi-User MISO Systems

  • paper_url: http://arxiv.org/abs/2309.16204
  • repo_url: None
  • paper_authors: Qurrat-Ul-Ain Nadeem, Jiancheng An, Anas Chaaban
  • for: This paper addresses channel estimation (CE) for stacked intelligent metasurface (SIM)-enabled multi-user MISO communication systems.
  • methods: It proposes a novel hybrid digital-wave domain channel estimator in which the received training symbols are first processed in the wave domain within the SIM layers and then in the digital domain; the wave-domain part, parametrized by the meta-atom phase shifts, is optimized by gradient descent to minimize the MSE, with the digital part updated optimally within each iteration.
  • results: For a system with 4 RF chains and a 6-layer SIM with 64 meta-atoms per layer, the estimator achieves an MSE close to that of fully digital CE in a massive MIMO system with 64 RF chains, and the required training overhead can be reduced by exploiting the potential low rank of the channel correlation matrices.
    Abstract Stacked intelligent metasurface (SIM) is an emerging programmable metasurface architecture that can implement signal processing directly in the electromagnetic wave domain, thereby enabling efficient implementation of ultra-massive multiple-input multiple-output (MIMO) transceivers with a limited number of radio frequency (RF) chains. Channel estimation (CE) is challenging for SIM-enabled communication systems due to the multi-layer architecture of SIM, and because we need to estimate large dimensional channels between the SIM and users with a limited number of RF chains. To efficiently solve this problem, we develop a novel hybrid digital-wave domain channel estimator, in which the received training symbols are first processed in the wave domain within the SIM layers, and then processed in the digital domain. The wave domain channel estimator, parametrized by the phase shifts applied by the meta-atoms in all layers, is optimized to minimize the mean squared error (MSE) using a gradient descent algorithm, within which the digital part is optimally updated. For an SIM-enabled multi-user system equipped with 4 RF chains and a 6-layer SIM with 64 meta-atoms each, the proposed estimator yields an MSE that is very close to that achieved by fully digital CE in a massive MIMO system employing 64 RF chains. This high CE accuracy is achieved at the cost of a training overhead that can be reduced by exploiting the potential low rank of channel correlation matrices.
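    A toy sketch under strong simplifications (a single SIM layer, known channel statistics, and a finite-difference gradient; not the authors' algorithm): the wave-domain configuration is a set of per-slot meta-atom phase shifts, the digital stage is the closed-form LMMSE estimator, and the phases are tuned by gradient descent on the resulting channel-estimation MSE.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T, sigma2 = 16, 2, 8, 0.1          # meta-atoms, RF chains, training slots, noise power
G = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * N)

def observation_matrix(theta):
    """Stack the per-slot wave-domain responses W_t = G * diag(exp(j theta_t))."""
    return np.vstack([G @ np.diag(np.exp(1j * theta[t])) for t in range(T)])

def lmmse_mse(theta):
    """MSE of the LMMSE estimate of h ~ CN(0, I): tr((I + W^H W / sigma2)^-1)."""
    W = observation_matrix(theta)
    return np.real(np.trace(np.linalg.inv(np.eye(N) + W.conj().T @ W / sigma2)))

theta = rng.uniform(0, 2 * np.pi, (T, N))
step, eps = 2.0, 1e-5
for it in range(60):                      # gradient descent with a finite-difference gradient
    grad = np.zeros_like(theta)
    for idx in np.ndindex(theta.shape):
        d = np.zeros_like(theta)
        d[idx] = eps
        grad[idx] = (lmmse_mse(theta + d) - lmmse_mse(theta - d)) / (2 * eps)
    theta -= step * grad
    if it % 20 == 0:
        print(f"iter {it:3d}: channel-estimation MSE = {lmmse_mse(theta):.3f}")
print(f"final    : channel-estimation MSE = {lmmse_mse(theta):.3f}")
```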

Adaptive Real-Time Numerical Differentiation with Variable-Rate Forgetting and Exponential Resetting

  • paper_url: http://arxiv.org/abs/2309.16159
  • repo_url: None
  • paper_authors: Shashank Verma, Brian Lai, Dennis S. Bernstein
  • for: This paper addresses real-time numerical differentiation when the characteristics of the sensor noise change over time in an unknown way.
  • methods: Adaptive real-time numerical differentiation based on adaptive input and state estimation (AISE) is extended with variable-rate forgetting and exponential resetting.
  • results: The extension allows AISE to respond more rapidly to changing noise characteristics while keeping the covariance matrix used in recursive least squares bounded.
    Abstract Digital PID control requires a differencing operation to implement the D gain. In order to suppress the effects of noisy data, the traditional approach is to filter the data, where the frequency response of the filter is adjusted manually based on the characteristics of the sensor noise. The present paper considers the case where the characteristics of the sensor noise change over time in an unknown way. This problem is addressed by applying adaptive real-time numerical differentiation based on adaptive input and state estimation (AISE). The contribution of this paper is to extend AISE to include variable-rate forgetting with exponential resetting, which allows AISE to more rapidly respond to changing noise characteristics while enforcing the boundedness of the covariance matrix used in recursive least squares.
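    A generic sketch of the ingredients named in the abstract (the forgetting and resetting rules below are illustrative choices, not the paper's exact AISE update): a recursive-least-squares fit of a local line whose slope serves as the derivative estimate, with a forgetting factor that drops when the prediction error grows and a covariance blend toward a fixed matrix that keeps the covariance bounded.

```python
import numpy as np

rng = np.random.default_rng(0)

def rls_slope(t, y, lam_min=0.95, beta=0.01):
    """Track the local slope of y(t) with RLS, variable forgetting, and covariance resetting."""
    theta = np.zeros(2)                       # local model y ~ a + b * t
    P0 = 10.0 * np.eye(2)
    P = P0.copy()
    slopes = []
    for tk, yk in zip(t, y):
        phi = np.array([1.0, tk])
        e = yk - phi @ theta                  # prediction error
        lam = 1.0 - min(1.0 - lam_min, abs(e))  # forget faster when the error grows
        k = P @ phi / (lam + phi @ P @ phi)   # RLS gain
        theta = theta + k * e
        P = (P - np.outer(k, phi @ P)) / lam  # forgetting-factor covariance update
        P = (1.0 - beta) * P + beta * P0      # exponential blend toward P0 keeps P bounded
        slopes.append(theta[1])
    return np.array(slopes)

# Position signal whose slope (the quantity to differentiate) flips at t = 2 s,
# while the sensor noise level also changes.
t = np.arange(400) * 0.01
position = np.where(t < 2.0, t, 2.0 - 2.0 * (t - 2.0))
noise = np.where(t < 2.0, 0.01, 0.05) * rng.standard_normal(t.size)
slope_est = rls_slope(t, position + noise)

print("estimated derivative near t = 1.9 s:", round(slope_est[190], 2))
print("estimated derivative near t = 3.9 s:", round(slope_est[390], 2))
```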