23 results for "Chopra, Nikhil"
Search Results
2. On Convergence of the Iteratively Preconditioned Gradient-Descent (IPG) Observer
- Author
- Chakrabarti, Kushal and Chopra, Nikhil
- Subjects
- Mathematics - Optimization and Control, Electrical Engineering and Systems Science - Systems and Control
- Abstract
This paper considers the observer design problem for discrete-time nonlinear dynamical systems with sampled measurement data. The recently proposed Iteratively Preconditioned Gradient-Descent (IPG) observer, a Newton-type observer, was earlier shown empirically to be more robust against measurement noise than prominent nonlinear observers, a property that other Newton-type observers lack. However, no theoretical guarantees on the convergence of the IPG observer were provided. This paper presents a rigorous convergence analysis of the IPG observer for a class of nonlinear systems in deterministic settings, proving its local linear convergence to the actual trajectory. Our assumptions are standard in the existing literature on Newton-type observers, and the analysis further confirms the relation of the IPG observer to the Newton observer, which was only hypothesized earlier., Comment: 7 pages
- Published
- 2024
3. Reinforcement Learning Driven Cooperative Ball Balance in Rigidly Coupled Drones
- Author
- Barawkar, Shraddha and Chopra, Nikhil
- Subjects
- Computer Science - Robotics
- Abstract
The multi-drone cooperative transport (CT) problem has been widely studied in the literature. However, limited work exists on the control of such systems in the presence of time-varying uncertainties, such as a time-varying center of gravity (CG). This paper presents a leader-follower approach for the control of a multi-drone CT system with a time-varying CG. The leader uses a traditional Proportional-Integral-Derivative (PID) controller, while the follower uses a deep reinforcement learning (RL) controller relying only on local information and minimal leader information. Extensive simulation results show the effectiveness of the proposed method over a previously developed adaptive controller, including variations in the mass of the transported objects and in CG speeds. Preliminary experimental work also demonstrates ball balance (depicting a moving CG) on a stick/rod lifted cooperatively by two Crazyflie drones.
- Published
- 2024
4. Quantum Circuit Optimization through Iteratively Pre-Conditioned Gradient Descent
- Author
- Srinivasan, Dhruv, Chakrabarti, Kushal, Chopra, Nikhil, and Dutt, Avik
- Subjects
- Quantum Physics
- Abstract
For typical quantum subroutines in the gate-based model of quantum computing, explicit decompositions of circuits in terms of single-qubit and two-qubit entangling gates may exist. However, they often lead to large-depth circuits that are challenging for noisy intermediate-scale quantum (NISQ) hardware. Additionally, exact decompositions might only exist for some modular quantum circuits. Therefore, it is essential to find gate combinations that approximate these circuits to high fidelity with potentially low depth, for example, using gradient-based optimization. Traditional optimizers often suffer from slow convergence, requiring many iterations, and perform poorly in the presence of noise. Here we present iteratively preconditioned gradient descent (IPG) for optimizing quantum circuits and demonstrate performance speedups for state preparation and implementation of quantum algorithmic subroutines. IPG is a noise-resilient, higher-order algorithm that has shown promising gains in convergence speed for classical optimizations, converging locally at a linear rate for convex problems and superlinearly when the solution is unique. Specifically, we show an improvement in fidelity by a factor of $10^4$ for preparing a 4-qubit W state and a maximally entangled 5-qubit GHZ state compared to other commonly used classical optimizers tuning the same ansatz. We also show gains for optimizing a unitary for a quantum Fourier transform using IPG, and report results of running such optimized circuits on IonQ's quantum processing unit (QPU). Such faster convergence, with promise of noise resilience, could provide advantages for quantum algorithms on NISQ hardware, especially since the cost of running each iteration on a quantum computer is substantially higher than the classical optimizer step., Comment: Part of this work was accepted and presented at IEEE QCE23 in the Quantum Applications track
- Published
- 2023
5. UIVNAV: Underwater Information-driven Vision-based Navigation via Imitation Learning
- Author
- Lin, Xiaomin, Karapetyan, Nare, Joshi, Kaustubh, Liu, Tianchen, Chopra, Nikhil, Yu, Miao, Tokekar, Pratap, and Aloimonos, Yiannis
- Subjects
- Computer Science - Robotics
- Abstract
Autonomous navigation in the underwater environment is challenging due to limited visibility, dynamic changes, and the lack of a cost-efficient, accurate localization system. We introduce UIVNav, a novel end-to-end underwater navigation solution designed to drive robots over Objects of Interest (OOI) while avoiding obstacles, without relying on localization. UIVNav uses imitation learning and is inspired by the navigation strategies of human divers, who do not rely on localization. UIVNav consists of two phases: (1) generating an intermediate representation (IR), and (2) training the navigation policy based on human-labeled IR. By training the navigation policy on the IR instead of raw data, the second phase is domain-invariant: the navigation policy does not need to be retrained if the domain or the OOI changes. We show this by deploying the same navigation policy for surveying two different OOIs, oyster and rock reefs, in two different domains, simulation and a real pool. Comparisons with complete coverage and random walk methods show that our method is more efficient in gathering information about OOIs while also avoiding obstacles. The results show that UIVNav chooses to visit the areas more densely covered by oysters or rocks with no prior information about the environment or localization. Moreover, a robot using UIVNav surveys on average 36% more oysters than the complete coverage method when traveling the same distances. We also demonstrate the feasibility of real-time deployment of UIVNav in pool experiments with a BlueROV underwater robot surveying a bed of oyster shells.
- Published
- 2023
6. Iteratively Preconditioned Gradient-Descent Approach for Moving Horizon Estimation Problems
- Author
- Liu, Tianchen, Chakrabarti, Kushal, and Chopra, Nikhil
- Subjects
- Mathematics - Optimization and Control, Computer Science - Robotics, Electrical Engineering and Systems Science - Systems and Control
- Abstract
Moving horizon estimation (MHE) is a widely studied state estimation approach with several practical applications. In the MHE problem, the state estimates are obtained by solving an approximated nonlinear optimization problem. However, this optimization step is known to be computationally complex. Given this limitation, this paper investigates the idea of iteratively preconditioned gradient-descent (IPG) for solving the MHE problem, aiming at improved performance over existing solution techniques. To our knowledge, this paper is the first to use preconditioning to reduce the computational cost and accelerate the crucial optimization step of MHE. A convergence guarantee of the proposed iterative approach for a class of MHE problems is presented. Additionally, sufficient conditions for the MHE problem to be convex are derived. Finally, the proposed method is implemented on a unicycle localization example. The simulation results demonstrate that the proposed approach achieves better accuracy with reduced computational costs.
- Published
- 2023
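The optimization step that the MHE paper above accelerates can be conveyed by a deliberately simple sketch: a scalar linear system observed over a horizon, with the horizon-initial state recovered by plain gradient descent on the residual cost. The system, horizon length, and step size below are illustrative assumptions; the paper's IPG acceleration itself is not reproduced here.

```python
import numpy as np

# Toy horizon-estimation problem: scalar system x+ = a*x, measurements
# y_k = x_k + noise over a horizon of N samples. The horizon-initial
# state x0 is recovered by minimizing the sum of squared residuals.
rng = np.random.default_rng(2)
a, N, x_true = 0.9, 20, 2.0
ys = x_true * a ** np.arange(N) + 0.01 * rng.standard_normal(N)

phi = a ** np.arange(N)                  # predicted trajectory is phi * x0
x0 = 0.0
lr = 0.25 / (phi @ phi)                  # safe step for this quadratic cost
for _ in range(500):
    grad = 2.0 * phi @ (phi * x0 - ys)   # gradient of ||phi*x0 - ys||^2
    x0 = x0 - lr * grad

assert abs(x0 - x_true) < 0.05           # estimate close to the true state
```

In a real MHE problem the decision variable is a state trajectory with dynamics constraints and the cost is nonlinear, which is what makes accelerating this inner loop worthwhile.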
7. Cartographer_glass: 2D Graph SLAM Framework using LiDAR for Glass Environments
- Author
- Weerakoon, Lasitha, Herr, Gurtajbir Singh, Blunt, Jasmine, Yu, Miao, and Chopra, Nikhil
- Subjects
- Computer Science - Robotics
- Abstract
In this work, we study algorithms for detecting and including glass objects in an optimization-based Simultaneous Localization and Mapping (SLAM) algorithm. When LiDAR data is the primary exteroceptive sensory input, glass objects are not correctly registered. This occurs because the incident light primarily passes through glass objects or reflects away from the source, resulting in inaccurate range measurements for glass surfaces. Consequently, localization and mapping performance is impacted, rendering navigation in such environments unreliable. Optimization-based SLAM solutions, also referred to as Graph SLAM, are widely regarded as the state of the art. In this paper, we utilize a simple and computationally inexpensive glass detection scheme and present a methodology to incorporate the identified objects into the occupancy grid maintained by such an algorithm (Google Cartographer). We develop both local (submap-level) and global algorithms for achieving this objective and compare the maps produced by our method with those produced by an existing algorithm that utilizes particle-filter-based SLAM.
- Published
- 2022
8. A Control Theoretic Framework for Adaptive Gradient Optimizers in Machine Learning
- Author
- Chakrabarti, Kushal and Chopra, Nikhil
- Subjects
- Computer Science - Machine Learning, Electrical Engineering and Systems Science - Systems and Control, Mathematics - Optimization and Control, Statistics - Machine Learning
- Abstract
Adaptive gradient methods have become popular for optimizing deep neural networks; recent examples include AdaGrad and Adam. Although Adam usually converges faster, variants of Adam, for instance the AdaBelief algorithm, have been proposed to address its poor generalization relative to the classical stochastic gradient method. This paper develops a generic framework for adaptive gradient methods that solve non-convex optimization problems. We first model the adaptive gradient methods in a state-space framework, which allows us to present simpler convergence proofs for adaptive optimizers such as AdaGrad, Adam, and AdaBelief. We then utilize the transfer function paradigm from classical control theory to propose a new variant of Adam, coined AdamSSM. We add an appropriate pole-zero pair to the transfer function from the squared gradients to the second-moment estimate. We prove the convergence of the proposed AdamSSM algorithm. Applications to benchmark machine learning tasks, image classification using CNN architectures and language modeling using an LSTM architecture, demonstrate that the AdamSSM algorithm narrows the gap between better generalization accuracy and faster convergence relative to recent adaptive gradient methods.
- Published
- 2022
9. Co-Design of Lipschitz Nonlinear Systems
- Author
- Chanekar, Prasad Vilas and Chopra, Nikhil
- Subjects
- Electrical Engineering and Systems Science - Systems and Control
- Abstract
Empirical experience has shown that a simultaneous (rather than the conventional sequential) plant and controller design procedure leads to improved performance and savings in plant resources. Such a simultaneous synthesis procedure is called "co-design". In this letter, we study the co-design problem for a class of Lipschitz nonlinear dynamical systems with a quadratic control objective and a state-feedback controller. We propose a novel time-independent reformulation of the co-design optimization problem whose constraints ensure stability of the system. We also present a gradient-based co-design solution procedure, involving a system coordinate transformation, whose output is a provably stable solution for the original system. We show the efficacy of the solution procedure through the co-design of a single-link robot.
- Published
- 2022
10. On Accelerating Distributed Convex Optimizations
- Author
- Chakrabarti, Kushal, Gupta, Nirupam, and Chopra, Nikhil
- Subjects
- Mathematics - Optimization and Control, Computer Science - Machine Learning, Electrical Engineering and Systems Science - Systems and Control, Statistics - Machine Learning
- Abstract
This paper studies a distributed multi-agent convex optimization problem. In this problem, the system comprises multiple agents, each with a set of local data points and an associated local cost function. The agents are connected to a server, and there is no inter-agent communication. The agents' goal is to learn a parameter vector that optimizes the aggregate of their local costs without revealing their local data points. In principle, the agents can solve this problem by collaborating with the server using the traditional distributed gradient-descent method. However, when the aggregate cost is ill-conditioned, the gradient-descent method (i) requires a large number of iterations to converge, and (ii) is highly unstable against process noise. We propose an iterative pre-conditioning technique to mitigate the deleterious effects of the cost function's conditioning on the convergence rate of distributed gradient-descent. Unlike conventional pre-conditioning techniques, the pre-conditioner matrix in our proposed technique is updated iteratively to facilitate implementation on the distributed network. In the distributed setting, we provably show that the proposed algorithm converges linearly, with an improved rate of convergence over the traditional and adaptive gradient-descent methods. Additionally, for the special case when the minimizer of the aggregate cost is unique, our algorithm converges superlinearly. We demonstrate our algorithm's superior performance compared to prominent distributed algorithms on real logistic regression problems and on emulating neural network training via a noisy quadratic model, signifying the proposed algorithm's efficiency for distributively solving non-convex optimization problems. Moreover, we empirically show that the proposed algorithm results in faster training without compromising generalization performance.
- Published
- 2021
11. Generalized AdaGrad (G-AdaGrad) and Adam: A State-Space Perspective
- Author
- Chakrabarti, Kushal and Chopra, Nikhil
- Subjects
- Computer Science - Machine Learning, Electrical Engineering and Systems Science - Systems and Control, Statistics - Machine Learning
- Abstract
Accelerated gradient-based methods are being extensively used for solving non-convex machine learning problems, especially when the data points are abundant or the available data is distributed across several agents. Two of the prominent accelerated gradient algorithms are AdaGrad and Adam. AdaGrad is the simplest accelerated gradient method, which is particularly effective for sparse data. Adam has been shown to perform favorably in deep learning problems compared to other methods. In this paper, we propose a new fast optimizer, Generalized AdaGrad (G-AdaGrad), for accelerating the solution of potentially non-convex machine learning problems. Specifically, we adopt a state-space perspective for analyzing the convergence of gradient acceleration algorithms, namely G-AdaGrad and Adam, in machine learning. Our proposed state-space models are governed by ordinary differential equations. We present simple convergence proofs of these two algorithms in deterministic settings with minimal assumptions. Our analysis also provides intuition behind improving upon AdaGrad's convergence rate. We provide empirical results on the MNIST dataset to reinforce our claims on the convergence and performance of G-AdaGrad and Adam., Comment: Updates: The parameter condition of Adam in Theorem 2 has been relaxed and the proof has been updated accordingly. Experimental results on logistic regression model have been included. Conference: Accepted for presentation in the 2021 60th IEEE Conference on Decision and Control (CDC)
- Published
- 2021
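For readers unfamiliar with the optimizers analyzed above, the textbook Adam recursions look as follows. This is the standard Kingma-Ba update, not the paper's state-space/ODE analysis; the toy problem and hyperparameters are illustrative assumptions.

```python
import numpy as np

def adam_step(x, m, v, grad, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One textbook Adam update; the paper recasts recursions like these
    as a state-space model to give simple convergence proofs."""
    m = b1 * m + (1 - b1) * grad           # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias corrections
    v_hat = v / (1 - b2 ** t)
    return x - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy deterministic problem: minimize f(x) = ||x - 3||^2
x = np.zeros(2)
m = np.zeros(2)
v = np.zeros(2)
for t in range(1, 1001):
    grad = 2.0 * (x - 3.0)
    x, m, v = adam_step(x, m, v, grad, t)

assert np.allclose(x, 3.0, atol=0.05)      # iterates settle near the minimizer
```

G-AdaGrad, per the abstract, generalizes the AdaGrad special case (no first-moment smoothing, accumulated rather than exponentially averaged second moments).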
12. Robustness of Iteratively Pre-Conditioned Gradient-Descent Method: The Case of Distributed Linear Regression Problem
- Author
- Chakrabarti, Kushal, Gupta, Nirupam, and Chopra, Nikhil
- Subjects
- Mathematics - Optimization and Control, Computer Science - Machine Learning, Electrical Engineering and Systems Science - Systems and Control, Statistics - Machine Learning
- Abstract
This paper considers the problem of multi-agent distributed linear regression in the presence of system noises. In this problem, the system comprises multiple agents wherein each agent locally observes a set of data points, and the agents' goal is to compute a linear model that best fits the collective data points observed by all the agents. We consider a server-based distributed architecture where the agents interact with a common server to solve the problem; however, the server cannot access the agents' data points. We consider a practical scenario wherein the system either has observation noise, i.e., the data points observed by the agents are corrupted, or has process noise, i.e., the computations performed by the server and the agents are corrupted. In noise-free systems, the recently proposed distributed linear regression algorithm, named the Iteratively Pre-conditioned Gradient-descent (IPG) method, has been claimed to converge faster than related methods. In this paper, we study the robustness of the IPG method against both observation noise and process noise. We empirically show that the robustness of the IPG method compares favorably with state-of-the-art algorithms., Comment: in IEEE Control Systems Letters. Related articles: arXiv:2003.07180v2 [math.OC], arXiv:2008.02856v1 [math.OC], and arXiv:2011.07595v2 [math.OC]
- Published
- 2021
13. Adaptive Tracking Control of Soft Robots using Integrated Sensing Skin and Recurrent Neural Networks
- Author
- Weerakoon, Lasitha, Ye, Zepeng, Bama, Rahul Subramonian, Smela, Elisabeth, Yu, Miao, and Chopra, Nikhil
- Subjects
- Computer Science - Robotics
- Abstract
In this paper, we study integrated estimation and control of soft robots. A significant challenge in deploying closed-loop controllers is reliable proprioception via integrated sensing in soft robots. Despite considerable advances in fabrication, modelling, and model-based control of soft robots, integrated sensing and estimation is still in its infancy. To that end, this paper introduces a new method of estimating the degree of curvature of a soft robot using a stretchable sensing skin. The skin is a spray-coated piezoresistive sensing layer on a latex membrane. The mapping from the strain signal to the degree of curvature is estimated by using a recurrent neural network. We investigate uni-directional bending as well as bi-directional bending of a single-segment soft robot. Moreover, an adaptive controller is developed to track the degree of curvature of the soft robot in the presence of dynamic uncertainties. Subsequently, using the integrated soft sensing skin, we experimentally demonstrate successful curvature tracking control of the soft robot., Comment: Preprint submitted to ICRA 2021: International Conference on Robotics and Automation
- Published
- 2020
14. Accelerating Distributed SGD for Linear Regression using Iterative Pre-Conditioning
- Author
- Chakrabarti, Kushal, Gupta, Nirupam, and Chopra, Nikhil
- Subjects
- Mathematics - Optimization and Control, Computer Science - Machine Learning, Electrical Engineering and Systems Science - Systems and Control, Statistics - Machine Learning
- Abstract
This paper considers the multi-agent distributed linear least-squares problem. The system comprises multiple agents, each agent with a locally observed set of data points, and a common server with whom the agents can interact. The agents' goal is to compute a linear model that best fits the collective data points observed by all the agents. In the server-based distributed settings, the server cannot access the data points held by the agents. The recently proposed Iteratively Pre-conditioned Gradient-descent (IPG) method has been shown to converge faster than other existing distributed algorithms that solve this problem. In the IPG algorithm, the server and the agents perform numerous iterative computations, each of which relies on the entire batch of data points observed by the agents for updating the current estimate of the solution. Here, we extend the idea of iterative pre-conditioning to the stochastic setting, where the server updates the estimate and the iterative pre-conditioning matrix based on a single randomly selected data point at every iteration. We show that our proposed Iteratively Pre-conditioned Stochastic Gradient-descent (IPSG) method converges linearly in expectation to a neighborhood of the solution. Importantly, we empirically show that the proposed IPSG method's convergence rate compares favorably to prominent stochastic algorithms for solving the linear least-squares problem in server-based networks., Comment: Changes in the replacement: Application to distributed state estimation problem has been added in Appendix B. Related articles: arXiv:2003.07180v2 [math.OC] and arXiv:2008.02856v1 [math.OC]
- Published
- 2020
15. Iterative Pre-Conditioning for Expediting the Gradient-Descent Method: The Distributed Linear Least-Squares Problem
- Author
- Chakrabarti, Kushal, Gupta, Nirupam, and Chopra, Nikhil
- Subjects
- Mathematics - Optimization and Control, Computer Science - Machine Learning, Electrical Engineering and Systems Science - Systems and Control, Statistics - Machine Learning
- Abstract
This paper considers the multi-agent linear least-squares problem in a server-agent network. In this problem, the system comprises multiple agents, each having a set of local data points, that are connected to a server. The goal for the agents is to compute a linear mathematical model that optimally fits the collective data points held by all the agents, without sharing their individual local data points. This goal can be achieved, in principle, using the server-agent variant of the traditional iterative gradient-descent method. The gradient-descent method converges linearly to a solution, and its rate of convergence is lower bounded by the conditioning of the agents' collective data points. If the data points are ill-conditioned, the gradient-descent method may require a large number of iterations to converge. We propose an iterative pre-conditioning technique that mitigates the deleterious effect of the conditioning of data points on the rate of convergence of the gradient-descent method. We rigorously show that the resulting pre-conditioned gradient-descent method, with the proposed iterative pre-conditioning, achieves superlinear convergence when the least-squares problem has a unique solution. In general, the convergence is linear with an improved rate of convergence compared to the traditional gradient-descent method and the state-of-the-art accelerated gradient-descent methods. We further illustrate the improved rate of convergence of our proposed algorithm through experiments on different real-world least-squares problems in both noise-free and noisy computation environments., Comment: Update: figures for the rest of the datasets have been added
- Published
- 2020
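The core iterative pre-conditioning idea above can be sketched in a few lines for a centralized least-squares problem. This is a single-machine illustration under assumed problem data and step sizes, not the paper's server-agent protocol (which may also include regularization terms): the pre-conditioner K is refined alongside the estimate, steering gradient descent toward Newton-like steps without ever inverting the Hessian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ill-conditioned least-squares problem: min_x 0.5*||Ax - b||^2
A = rng.standard_normal((100, 5)) @ np.diag([1.0, 1.0, 1.0, 1.0, 100.0])
b = rng.standard_normal(100)
H = A.T @ A                                   # Hessian of the quadratic cost
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)

x = np.zeros(5)
K = np.zeros((5, 5))                          # pre-conditioner, built iteratively
alpha = 1.0 / (np.linalg.norm(H, 2) + 1.0)    # step for the K-update
for _ in range(2000):
    # Pre-conditioner update: drives K toward H^{-1} without inverting H
    K = K - alpha * (H @ K - np.eye(5))
    # Pre-conditioned gradient step (approaches a Newton step as K improves)
    grad = A.T @ (A @ x - b)
    x = x - K @ grad

assert np.linalg.norm(x - x_star) < 1e-6
```

The point of the sketch is the contrast with plain gradient descent, whose step size, and hence rate, is throttled by the same ill-conditioning that the K-update progressively cancels.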
16. Preserving Statistical Privacy in Distributed Optimization
- Author
- Gupta, Nirupam, Gade, Shripad, Chopra, Nikhil, and Vaidya, Nitin H.
- Subjects
- Computer Science - Cryptography and Security, Computer Science - Distributed, Parallel, and Cluster Computing
- Abstract
We present a distributed optimization protocol that preserves statistical privacy of agents' local cost functions against a passive adversary that corrupts some agents in the network. The protocol is a composition of a distributed "zero-sum" obfuscation protocol that obfuscates the agents' local cost functions, and a standard non-private distributed optimization method. We show that our protocol protects the statistical privacy of the agents' local cost functions against a passive adversary that corrupts up to $t$ arbitrary agents as long as the communication network has $(t+1)$-vertex connectivity. The "zero-sum" obfuscation protocol preserves the sum of the agents' local cost functions and therefore ensures accuracy of the computed solution., Comment: The updated version has simpler proofs. The paper has been peer-reviewed, and accepted for the IEEE Control Systems Letters (L-CSS 2021)
- Published
- 2020
17. Iterative Pre-Conditioning to Expedite the Gradient-Descent Method
- Author
- Chakrabarti, Kushal, Gupta, Nirupam, and Chopra, Nikhil
- Subjects
- Mathematics - Optimization and Control, Computer Science - Machine Learning, Electrical Engineering and Systems Science - Systems and Control, Statistics - Machine Learning
- Abstract
This paper considers the problem of multi-agent distributed optimization. In this problem, there are multiple agents in the system, and each agent only knows its local cost function. The objective for the agents is to collectively compute a common minimum of the aggregate of all their local cost functions. In principle, this problem is solvable using a distributed variant of the traditional gradient-descent method, which is an iterative method. However, the speed of convergence of the traditional gradient-descent method is highly influenced by the conditioning of the optimization problem being solved. Specifically, the method requires a large number of iterations to converge to a solution if the optimization problem is ill-conditioned. In this paper, we propose an iterative pre-conditioning approach that can significantly attenuate the influence of the problem's conditioning on the convergence speed of the gradient-descent method. The proposed pre-conditioning approach can be easily implemented in distributed systems and has minimal computation and communication overhead. For now, we only consider a specific distributed optimization problem wherein the individual local cost functions of the agents are quadratic. Besides the theoretical guarantees, the improved convergence speed of our approach is demonstrated through experiments on a real dataset., Comment: Accepted for the proceedings of the 2020 American Control Conference
- Published
- 2020
18. Privacy of Agents' Costs in Peer-to-Peer Distributed Optimization
- Author
- Gupta, Nirupam and Chopra, Nikhil
- Subjects
- Computer Science - Systems and Control
- Abstract
In this paper, we propose a protocol that preserves (statistical) privacy of agents' costs in peer-to-peer distributed optimization against a passive adversary that corrupts a certain number of agents in the network. The proposed protocol guarantees privacy of the affine parts of the honest agents' costs (agents that are not corrupted by the adversary) if the corrupted agents do not form a vertex cut of the underlying communication topology. Therefore, if the (passive) adversary corrupts at most t arbitrary agents in the network, the proposed protocol can preserve the privacy of the affine parts of the remaining honest agents' costs provided the communication topology has (t+1)-connectivity. The proposed privacy protocol is a composition of a privacy mechanism (we propose) with any (non-private) distributed optimization algorithm., Comment: arXiv admin note: text overlap with arXiv:1809.01794, arXiv:1903.09315
- Published
- 2019
19. Statistical Privacy in Distributed Average Consensus on Bounded Real Inputs
- Author
- Gupta, Nirupam, Katz, Jonathan, and Chopra, Nikhil
- Subjects
- Computer Science - Cryptography and Security, Computer Science - Information Theory, Computer Science - Systems and Control
- Abstract
This paper proposes a privacy protocol for distributed average consensus algorithms on bounded real-valued inputs that guarantees statistical privacy of honest agents' inputs against colluding (passive adversarial) agents, if the set of colluding agents is not a vertex cut in the underlying communication network. This implies that privacy of agents' inputs is preserved against $t$ arbitrary colluding agents if the connectivity of the communication network is at least $(t+1)$. A similar privacy protocol has been proposed for the case of bounded integral inputs in our previous paper (Gupta et al., 2018). However, many applications of distributed consensus concerning distributed control or state estimation deal with real-valued inputs. Thus, in this paper we propose an extension of that privacy protocol for bounded real-valued agents' inputs, where the bounds are known a priori to all the agents., Comment: Accepted for 2019 American Control Conference. arXiv admin note: substantial text overlap with arXiv:1809.01794
- Published
- 2019
20. Information-Theoretic Privacy in Distributed Average Consensus
- Author
- Gupta, Nirupam, Katz, Jonathan, and Chopra, Nikhil
- Subjects
- Electrical Engineering and Systems Science - Systems and Control
- Abstract
We present a distributed average consensus protocol that preserves the privacy of agents' inputs. Unlike the differential privacy mechanisms, the presented protocol does not affect the accuracy of the output. It is shown that the protocol preserves the information-theoretic privacy of the agents' inputs against colluding passive adversarial (or honest-but-curious) agents in the network, if the adversarial agents do not constitute a vertex cut in the underlying communication network. This implies that we can guarantee information-theoretic privacy of all the honest agents' inputs against $t$ arbitrary colluding passive adversarial agents if the network is $(t+1)$-connected. The protocol is constructed by composing a distributed privacy mechanism that we propose with any (non-private) distributed average consensus algorithm., Comment: Related to the prior work (1) Gupta, Nirupam, Jonathan Kat, and Nikhil Chopra. "Statistical Privacy in Distributed Average Consensus on Bounded Real Inputs." 2019 IEEE American Control Conference, and (2) Gupta, Nirupam, Jonathan Katz, and Nikhil Chopra. "Privacy in distributed average consensus." IFAC-PapersOnLine 2017. Comprises 7 pages (two-column format), and 3 figures
- Published
- 2018
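The composition described above, a privacy mechanism followed by any non-private consensus algorithm, can be sketched as follows. The network, consensus weights, and mask distribution are illustrative assumptions, not the paper's construction: neighbors exchange correlated random masks that cancel in the network-wide sum, so ordinary average consensus on the masked values still recovers the true average.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ring network of n agents; private inputs whose average must be recovered
n = 5
inputs = np.array([3.0, -1.0, 4.0, 0.5, 2.5])
edges = [(i, (i + 1) % n) for i in range(n)]

# Zero-sum masking: each edge (i, j) shares a random value that i adds
# and j subtracts, so the network-wide sum (hence the average) is unchanged
masked = inputs.copy()
for i, j in edges:
    r = rng.normal(scale=10.0)
    masked[i] += r
    masked[j] -= r

assert np.isclose(masked.sum(), inputs.sum())

# Standard (non-private) average consensus on the masked values
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0 / 3.0
np.fill_diagonal(W, 1.0 - W.sum(axis=1))      # doubly stochastic weights
x = masked.copy()
for _ in range(200):
    x = W @ x                                 # repeated neighbor averaging

assert np.allclose(x, inputs.mean(), atol=1e-6)
```

An observer who sees a masked value learns little about the underlying input, while the connectivity condition in the abstract ensures colluding agents cannot isolate an honest agent's masks.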
21. Passivity-Based Distributed Optimization with Communication Delays Using PI Consensus Algorithm
- Author
- Hatanaka, Takeshi, Chopra, Nikhil, Ishizaki, Takayuki, and Li, Na
- Subjects
- Computer Science - Systems and Control
- Abstract
In this paper, we address a class of distributed optimization problems in the presence of inter-agent communication delays, based on passivity. We first focus on unconstrained distributed optimization and provide a passivity-based perspective on distributed optimization algorithms. This perspective allows us to handle communication delays using the scattering transformation. Moreover, we extend the results to constrained distributed optimization, where it is shown that the problem is solved by adding just one more feedback loop of a passive system to the solution of the unconstrained one. We also show that delays can be incorporated in the same way as in the unconstrained problems. Finally, the algorithm is applied to a visual human localization problem using a pedestrian detection algorithm., Comment: 13 pages, 10 figures, submitted to IEEE Transactions on Automatic Control
- Published
- 2016
22. Decentralized Event-Triggering for Control of Nonlinear Systems
- Author
- Tallapragada, Pavankumar and Chopra, Nikhil
- Subjects
- Computer Science - Systems and Control, Mathematics - Optimization and Control
- Abstract
This paper considers nonlinear systems with full state feedback, a central controller and distributed sensors not co-located with the central controller. We present a methodology for designing decentralized asynchronous event-triggers, which utilize only locally available information, for determining the time instants of transmission from the sensors to the central controller. The proposed design guarantees a positive lower bound for the inter-transmission times of each sensor, while ensuring asymptotic stability of the origin of the system with an arbitrary, but a priori fixed, compact region of attraction. In the special case of Linear Time Invariant (LTI) systems, global asymptotic stability is guaranteed and scale invariance of inter-transmission times is preserved. A modified design method is also proposed for nonlinear systems, with the addition of event-triggered communication from the controller to the sensors, that promises to significantly increase the average sensor inter-transmission times compared to the case where the controller does not transmit data to the sensors. The proposed designs are illustrated through simulations of a linear and a nonlinear example.
- Published
- 2013
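The flavor of such event-triggers can be conveyed by a minimal centralized example, far simpler than the decentralized design in the paper above: a scalar LTI plant with assumed gains and a Tabuada-style relative-error trigger, where the sensor transmits only when its sampling error exceeds a fraction of the state norm, preserving stability with far fewer transmissions than periodic sampling.

```python
import numpy as np

# Scalar LTI plant dx/dt = a*x + b*u, stabilizing feedback u = -k*x
a, b, k = 1.0, 1.0, 3.0          # closed loop: dx/dt = (a - b*k)*x = -2x
dt, sigma = 1e-3, 0.1            # simulation step and trigger threshold

x, x_hat = 1.0, 1.0              # x_hat: state last sent to the controller
transmissions = 0
for _ in range(10000):           # simulate 10 s
    # Event trigger: send the state only when the sampling error
    # grows past a fraction of the current state norm
    if abs(x - x_hat) >= sigma * abs(x):
        x_hat = x
        transmissions += 1
    u = -k * x_hat               # controller uses the last received sample
    x += dt * (a * x + b * u)    # Euler integration of the plant

assert abs(x) < 1e-3             # state converges despite sparse updates
assert 0 < transmissions < 5000  # far fewer transmissions than time steps
```

The relative (state-proportional) threshold is what yields the scale invariance of inter-transmission times mentioned in the abstract for the LTI case.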
23. On Event Triggered Tracking for Nonlinear Systems
- Author
- Tallapragada, Pavankumar and Chopra, Nikhil
- Subjects
- Computer Science - Systems and Control, Mathematics - Optimization and Control
- Abstract
In this paper we study an event-based control algorithm for trajectory tracking in nonlinear systems. The desired trajectory is modelled as the solution of a reference system with an exogenous input, and it is assumed that the desired trajectory and the exogenous input to the reference system are uniformly bounded. Given a continuous-time control law that guarantees global uniform asymptotic tracking of the desired trajectory, our algorithm provides an event-based controller that not only guarantees uniform ultimate boundedness of the tracking error, but also ensures non-accumulation of inter-execution times. In the case that the derivative of the exogenous input to the reference system is also uniformly bounded, an arbitrarily small ultimate bound can be designed. If the exogenous input to the reference system is piecewise continuous and not differentiable everywhere, then the achievable ultimate bound is constrained and the result is local, though with a known region of attraction. The main ideas in the paper are illustrated through simulations of trajectory tracking by a nonlinear system., Comment: 8 pages, 3 figures. Includes proofs for all results
- Published
- 2013
Discovery Service for Jio Institute Digital Library