Dr. Andrea Carron


Senior Lecturer,
Institute for Dynamical Systems and Control (IDSC),
ETH Zurich
Sonneggstrasse 3
8092 Zurich, Switzerland
Phone: +41 44 632 04 85
E-mail: carrona [@] ethz [DOT] ch

Projects

Collaborative Robotic Systems (Principal Investigator)

The objective of this project is to unify distributed decision-making and robot control by studying algorithms that make optimal decisions while accounting for constraints such as the robots' laws of motion, actuation limits, and sensing capabilities.

Learning-based Control for Robotic Arms

High-precision trajectory tracking is fundamental in robotic manipulation. While industrial robots address it through stiff mechanical designs and high-performance hardware, compliant and cost-effective robots require advanced control to achieve accurate position tracking. In this project, we propose a model-based control approach that uses data gathered during operation to improve the model of the robotic arm and thereby the tracking performance. The proposed scheme is based on an inverse-dynamics feedback linearization and a data-driven error model, which are integrated into a model predictive control formulation. In particular, we show how offset-free tracking can be achieved by augmenting a nominal model with both a Gaussian process, which makes use of offline data, and an additive disturbance model suitable for efficient online estimation of the residual disturbance via an extended Kalman filter. The performance of the proposed offset-free GPMPC scheme is demonstrated on a compliant six-degree-of-freedom robotic arm, showing significant performance improvements compared to other robot control algorithms.
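
As a rough illustration of the offset-free mechanism only (leaving out the Gaussian process and the constrained MPC optimization), the sketch below augments a placeholder linear model with a constant input disturbance, estimates it with a fixed-gain observer, and recomputes the steady-state target so that the tracking error vanishes. All matrices and gains are illustrative and are not the arm model.

```python
# Minimal sketch of offset-free tracking via disturbance augmentation:
# a constant additive disturbance d is estimated jointly with the state on an
# augmented model, and the steady-state target (xs, us) is recomputed so that
# the tracking error vanishes. Placeholder system, not the robotic arm.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double integrator (placeholder)
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])               # we track the first state
Bd = np.array([[0.0], [0.1]])            # additive disturbance channel (placeholder)

nx, nu, nd = 2, 1, 1
Aa = np.block([[A, Bd], [np.zeros((nd, nx)), np.eye(nd)]])   # augmented [x; d] model
Ba = np.vstack([B, np.zeros((nd, nu))])
Ca = np.hstack([C, np.zeros((1, nd))])

L = np.array([0.5, 1.0, 0.8])            # observer gain (placeholder, stabilizing here)
K = np.array([[10.0, 5.0]])              # state-feedback gain (placeholder, stabilizing here)

def target(d_hat, r):
    """Steady-state (xs, us) such that C xs = r under the estimated disturbance."""
    M = np.block([[A - np.eye(nx), B], [C, np.zeros((1, nu))]])
    rhs = np.concatenate([-Bd @ d_hat, [r]])
    sol = np.linalg.solve(M, rhs)
    return sol[:nx], sol[nx:]

x = np.zeros(nx)
xa_hat = np.zeros(nx + nd)
d_true = np.array([0.3])                 # unmodeled constant disturbance
r = 1.0                                  # reference for the tracked output
for k in range(400):
    d_hat = xa_hat[nx:]
    xs, us = target(d_hat, r)
    u = us - K @ (xa_hat[:nx] - xs)      # certainty-equivalence feedback around the target
    y = C @ x
    innov = (y - Ca @ xa_hat).item()
    xa_hat = Aa @ xa_hat + Ba @ u + L * innov   # observer update on the augmented model
    x = A @ x + B @ u + Bd @ d_true             # plant
print("steady-state output:", (C @ x).item())   # converges to r despite d_true
```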


Safe Learning for Distributed Systems

Learning in interacting dynamical systems can lead to instabilities and violations of critical safety constraints, which limits its applicability to networks of constrained systems. This work introduces two safety frameworks that can be combined with any learning method to ensure constraint satisfaction in a network of uncertain systems that are coupled in their dynamics and in their state constraints. The proposed techniques use a safe set to modify control inputs that may compromise system safety, while accepting safe inputs from the learning procedure. Two different safe sets for distributed systems are proposed by extending recent results on structured invariant sets. The sets differ in how they are dynamically allocated to local sets and provide different trade-offs between the required communication and the achievable set size. The proposed algorithms are proven to keep the system in the safe set at all times, and their effectiveness and behavior are illustrated in a numerical example.
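
A minimal sketch of the underlying safety-filter idea for a single system is given below, assuming a placeholder ellipsoidal safe set and backup feedback gain; the project's contribution is the distributed construction of such sets for coupled systems, which is not reproduced here.

```python
# Safety-filter sketch: a learning-based input is applied only if the successor
# state stays inside a known invariant safe set; otherwise a safe backup law is
# used. Single-system illustration of the mechanism with placeholder data.
import numpy as np

A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.02], [0.2]])
K = np.array([[1.2, 1.4]])               # backup feedback gain (placeholder, stabilizing)
P = np.array([[2.0, 0.4], [0.4, 1.0]])   # safe set {x : x^T P x <= 1}, invariant under the backup

def in_safe_set(x):
    return x @ P @ x <= 1.0

def safety_filter(x, u_learn):
    """Accept the learning input if it keeps the state in the safe set, else fall back."""
    x_next = A @ x + B @ np.atleast_1d(u_learn)
    if in_safe_set(x_next):
        return np.atleast_1d(u_learn)
    return -K @ x                        # safe backup input keeps the state in the set

rng = np.random.default_rng(0)
x = np.array([0.4, -0.2])                # start inside the safe set
for k in range(50):
    u_learn = rng.uniform(-2, 2)         # stand-in for an arbitrary learning policy
    u = safety_filter(x, u_learn)
    x = A @ x + B @ u
    assert in_safe_set(x), "safety violated"
print("final state:", x)
```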


Probabilistic Invariant Sets for Linear Systems

Dynamical systems with stochastic uncertainties are ubiquitous in control, with linear systems subject to additive Gaussian disturbances being a prominent example. The concept of probabilistic invariance was introduced to extend the widely applied concept of invariance to this class of problems; computational methods for synthesizing such sets, however, are limited. In this work we present a relationship between probabilistic and robust invariant sets for linear systems, which enables the use of well-studied robust design methods. We derive conditions under which a robust invariant set, designed with a confidence region of the disturbance, results in a probabilistic invariant set, and we show that these conditions hold for common box and ellipsoidal confidence regions, generalizing and improving existing results on probabilistic invariant set computation. We finally exemplify the synthesis of an ellipsoidal probabilistic invariant set. Two numerical examples demonstrate the approach and the advantages gained from exploiting robust computations for probabilistic invariant sets.
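
The scalar sketch below illustrates the robust-to-probabilistic argument numerically: an interval that is robustly invariant for all disturbances in a 95% confidence region of a Gaussian disturbance is probabilistically invariant at level 0.95. The system and values are illustrative.

```python
# Scalar system x+ = a*x + w, w ~ N(0, sigma^2): a set that is robustly
# invariant for all w in a p-confidence region is probabilistically invariant
# with level p. Monte Carlo check from the worst-case boundary point.
import numpy as np

a, sigma, p = 0.8, 0.1, 0.95
z = 1.959964                      # standard-normal quantile: P(|w| <= z*sigma) = 0.95
c = z * sigma                     # confidence region W_p = [-c, c]
r = c / (1.0 - abs(a))            # robust invariance: |a|*r + c <= r  =>  r = c/(1-|a|)

rng = np.random.default_rng(1)
w = rng.normal(0.0, sigma, size=200_000)
x_next = a * r + w                # successor from the boundary point x = r
print("empirical P(x+ in set):", np.mean(np.abs(x_next) <= r))   # ~0.975 >= 0.95
```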

Scalable Model Predictive Control of Autonomous Mobility on Demand Systems

Technological advances in self-driving vehicles will soon enable the implementation of large-scale mobility-on-demand systems with autonomous agents. The efficient management of the vehicle fleet remains a key challenge, in particular for enabling a demand-aligned distribution of available vehicles, commonly referred to as rebalancing. In this work we present a discrete-time model of an autonomous mobility-on-demand system, in which unit-capacity self-driving vehicles serve transportation requests consisting of a (time, origin, destination) tuple on a directed graph. Time delays in the discrete-time model are approximated as first-order lag elements, yielding a sparse model suitable for model predictive control. We demonstrate the well-posedness of the model and characterize its equilibrium points. Furthermore, we show the stabilizability of the model and propose a scalable model predictive control scheme whose complexity scales linearly with the size of the city. We verify the performance of the scheme in a multi-agent transport simulation and demonstrate that service levels outperform those of existing rebalancing schemes at identical fleet sizes.
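
The snippet below sketches only the first-order-lag idea on a toy two-station network with hand-picked numbers: vehicles en route are lumped into a pool that releases a fraction 1/T per step, which keeps the fleet model linear and sparse.

```python
# First-order-lag approximation of travel delays: vehicles en route from i to j
# sit in a pool that empties at rate 1/T, so arrivals are smoothed instead of
# being delayed exactly T steps. Two stations, fixed rebalancing flows.
import numpy as np

T = 5.0                               # travel time (in time steps) between the two stations
idle = np.array([20.0, 0.0])          # idle vehicles at stations 0 and 1
pool = np.array([0.0, 0.0])           # pool[0]: en route 0->1, pool[1]: en route 1->0
u = np.array([2.0, 0.0])              # rebalancing departures per step (0->1, 1->0)

for k in range(60):
    arrivals = pool / T               # a fraction 1/T of each pool arrives every step
    departures = np.minimum(u, idle)  # cannot send more vehicles than are idle
    pool = pool - arrivals + departures
    idle = idle - departures + arrivals[::-1]   # arrivals come from the opposite pool
print("idle:", idle, "en route:", pool)
```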


Gaussian Process Regression via Kalman Filtering

In this project, we study the problem of efficient non-parametric estimation for non-linear time-space dynamic Gaussian processes (GPs). We propose a systematic and explicit procedure to address this problem by pairing GP regression with Kalman filtering. Under a specific separability assumption on the modeling kernel and periodic sampling on a (possibly non-uniform) space grid, we show how to build an exact finite-dimensional discrete-time state-space representation of the modeled process. The major finding is that the state at instant k of the associated Kalman filter is a sufficient statistic for computing the minimum-variance prediction of the process at instant k over any arbitrary finite subset of the space.
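
A purely temporal special case of this correspondence, assuming an exponential (Ornstein-Uhlenbeck) kernel on a uniform time grid: the GP posterior mean can then be computed recursively by a scalar Kalman filter, and the sketch below checks it against batch GP regression at the final sample.

```python
# A zero-mean GP with kernel k(t,t') = s2*exp(-|t-t'|/ell) sampled on a uniform
# grid is a linear Gauss-Markov process, so its posterior can be computed by a
# Kalman filter instead of inverting the full kernel matrix. We compare the
# filter with batch GP regression at the last time step, where they coincide.
import numpy as np

s2, ell, r2, dt, n = 1.0, 2.0, 0.1, 0.5, 40    # kernel variance, lengthscale, noise, grid
rng = np.random.default_rng(2)
t = np.arange(n) * dt
y = np.sin(t) + rng.normal(0, np.sqrt(r2), n)  # synthetic measurements

# batch GP posterior mean at the training points
K = s2 * np.exp(-np.abs(t[:, None] - t[None, :]) / ell)
mean_batch = K @ np.linalg.solve(K + r2 * np.eye(n), y)

# equivalent state-space model + Kalman filter
a = np.exp(-dt / ell)           # state transition of the exact discretization
q = s2 * (1.0 - a**2)           # process noise keeping the stationary variance s2
m, P = 0.0, s2                  # prior at the first sample
means = []
for k in range(n):
    if k > 0:                   # time update
        m, P = a * m, a * a * P + q
    S = P + r2                  # measurement update
    gain = P / S
    m, P = m + gain * (y[k] - m), (1.0 - gain) * P
    means.append(m)

print("batch GP mean at t[-1]:     ", mean_batch[-1])
print("Kalman filter mean at t[-1]:", means[-1])   # identical up to rounding
```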


Coverage Control under Unknown Sensory Functions

We consider a scenario where the aim of a group of agents is to perform the optimal coverage of a region according to a sensory function; in particular, centroidal Voronoi partitions have to be computed. The difficulty of the task is that the sensory function is unknown and has to be reconstructed online from noisy measurements, so estimation and coverage need to be performed at the same time. We cast the problem in a Bayesian regression framework, where the sensory function is modeled as a Gaussian random field. We then design control inputs that balance coverage and estimation, and we discuss the convergence properties of the algorithm.
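
The sketch below shows a Lloyd-type coverage step on a discretized unit square with a known, hand-picked sensory density; in the project the density is unknown and is replaced by a Gaussian process estimate updated from measurements, so coverage and estimation have to be traded off explicitly.

```python
# Lloyd-type coverage on a discretized square: each agent moves to the centroid
# of its Voronoi cell weighted by the sensory density phi. Here phi is given;
# the project instead estimates it online.
import numpy as np

g = np.linspace(0, 1, 50)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel()])
phi = np.exp(-10 * ((pts[:, 0] - 0.7) ** 2 + (pts[:, 1] - 0.6) ** 2))  # density peak at (0.7, 0.6)

agents = np.array([[0.1, 0.1], [0.2, 0.8], [0.9, 0.2]])
for it in range(30):
    # assign every grid point to its nearest agent (discrete Voronoi partition)
    d = np.linalg.norm(pts[:, None, :] - agents[None, :, :], axis=2)
    owner = np.argmin(d, axis=1)
    # move each agent to the phi-weighted centroid of its cell
    for i in range(len(agents)):
        cell = owner == i
        w = phi[cell]
        if w.sum() > 0:
            agents[i] = (pts[cell] * w[:, None]).sum(axis=0) / w.sum()
print(np.round(agents, 3))   # agents concentrate around the density peak
```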


Multi-agent Hitting Time

This work provides generalized notions and analysis methods for the hitting time of random walks on graphs. The hitting time, also known as the Kemeny constant or the mean first passage time, of a random walk is widely studied; however, only limited work is available for the multiple random walker scenario. In this work we provide a novel method for calculating the hitting time for a single random walker as well as the first analytic expression for calculating the hitting time for multiple random walkers, which we denote as the group hitting time. We also provide a closed form solution for calculating the hitting time between specified nodes for both the single and multiple random walker cases. Our results allow for the multiple random walks to be different and, moreover, for the random walks to operate on different subgraphs. Finally, using sequential quadratic programming, we show that the combination of transition matrices that generate the minimal group hitting time for various graph topologies is often different.
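
For reference, the classical single-walker computation that the project generalizes reduces to a small linear solve; the graph, transition matrix, and target node below are illustrative, and the multi-walker group hitting time requires the analysis developed in the project.

```python
# Mean hitting times to a target node solve the linear system
#   h_target = 0,   h_i = 1 + sum_j P_ij * h_j   for i != target,
# i.e. (I - Q) h = 1 with Q the transition matrix restricted to non-target nodes.
# Example: simple random walk on a 4-node cycle.
import numpy as np

P = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
target = 0
idx = [i for i in range(len(P)) if i != target]
Q = P[np.ix_(idx, idx)]                       # transitions among non-target nodes
h = np.linalg.solve(np.eye(len(idx)) - Q, np.ones(len(idx)))
print(dict(zip(idx, h)))                      # expected steps to reach node 0: {1: 3, 2: 4, 3: 3}
```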

Relative Measurements Consensus

In this project, we address the problem of optimally estimating the position of each agent in a network from noisy relative vector measurements with respect to its neighbors. Although the problem can be cast as a standard least-squares problem, the main challenge is to devise scalable algorithms that allow each agent to estimate its own position using only local communication and bounded complexity, independently of the network size and topology. We propose a consensus-based algorithm that uses local memory variables, allows asynchronous implementation, has guaranteed exponential convergence to the optimal solution under mild deterministic and randomized communication protocols, and requires minimal packet transmission. In the randomized scenario we then study the rate of convergence in expectation of the estimation error and argue that it can be used to obtain upper and lower bounds on the rate of convergence in mean square. In particular, we show that for regular graphs the convergence rate in expectation is reduced by a factor of N, the number of nodes, which is the same asymptotic degradation observed in memoryless asynchronous consensus algorithms. Additionally, we show that the asynchronous implementation is also robust to delays and communication failures.
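
A toy version of the underlying least-squares structure is sketched below, assuming scalar positions on a line graph with one anchor agent and synchronous Jacobi-style updates; the project's algorithm additionally handles asynchronous updates, packet loss, and delays, which are not modeled here.

```python
# Distributed localization from noisy relative measurements z_ij ≈ x_j - x_i:
# each non-anchor agent repeatedly averages its neighbors' estimates corrected
# by the measurements (a Jacobi-type iteration on the least-squares optimality
# conditions). Agent 0 is an anchor fixing the global translation.
import numpy as np

rng = np.random.default_rng(3)
true_pos = np.array([0.0, 1.0, 2.5, 4.0])                 # positions on a line
edges = [(0, 1), (1, 2), (2, 3)]                          # line graph
z = {(i, j): true_pos[j] - true_pos[i] + rng.normal(0, 0.05) for i, j in edges}
z.update({(j, i): -z[(i, j)] for i, j in edges})          # reverse measurements
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

x = np.zeros(4)                                           # initial estimates
for it in range(200):
    x_new = x.copy()
    for i in range(1, 4):                                 # agent 0 keeps its value (anchor)
        x_new[i] = np.mean([x[j] - z[(i, j)] for j in nbrs[i]])
    x = x_new
print("estimates:", np.round(x, 3), " true:", true_pos)
```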

ARCADE

The Autonomous Rendezvous, Control And Docking Experiment (ARCADE) is a technology demonstrator aiming to prove automatic attitude determination and control, rendezvous, and docking capabilities for small-scale spacecraft and aircraft. The development of such capabilities could be fundamental to creating, in the near future, fleets of cooperative, autonomous unmanned aerial vehicles for mapping, surveillance, inspection and remote observation of hazardous environments; small-class satellites could also benefit from docking systems to extend and reconfigure their mission profiles. ARCADE is designed to test these technologies during a stratospheric flight on board the BEXUS-17 balloon, demonstrating them in a harsh environment subject to gusty winds and large pressure and temperature variations.


Cooperative and Competitive Receding Horizon Control Algorithms

We consider the problem of controlling two dynamically decoupled agents that can cooperate or compete. The agents are modelled as linear discrete-time systems and collect each other's state information without delays. Control actions are computed in a receding horizon framework, where each agent's controller is obtained by minimizing a linear quadratic cost function that depends on both agents' states. Cooperation or competition is specified through the state-tracking objectives of each agent; we do not consider state constraints. The simplicity of our framework allows us to provide the following results analytically: 1) when agents compete, their states converge to an equilibrium trajectory where the steady-state tracking error is finite; 2) limit cycles cannot occur. Numerical simulations and experiments done with a LEGO Mindstorms multi-agent platform match our analytical results.
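
As a toy illustration of the competitive case (not the experimental setup of the project), consider two scalar integrator agents with one-step horizons and hand-picked weights: the greedy receding-horizon inputs have a closed form, and the inter-agent gap settles to a finite value, consistent with the finite steady-state tracking error result.

```python
# Two scalar integrator agents x_i+ = x_i + u_i: agent 1 tracks agent 2, while
# agent 2 tracks agent 1 plus an offset d (it tries to keep a gap). With a
# one-step horizon and cost (x_i+ - r_i)^2 + rho*u_i^2, the optimal input is
# u_i = (r_i - x_i)/(1 + rho), and the gap converges to the finite value d/2.
rho, d = 1.0, 2.0
x1, x2 = 0.0, 5.0
for k in range(100):
    u1 = (x2 - x1) / (1 + rho)        # agent 1: track agent 2
    u2 = (x1 + d - x2) / (1 + rho)    # agent 2: track agent 1 + offset d
    x1, x2 = x1 + u1, x2 + u2
print("steady-state gap:", x2 - x1)   # -> d/2 = 1.0
```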
