Magnus Egerstedt

Executive Director
Institute for Robotics and Intelligent Machines
Professor and Julian T. Hightower Chair in Systems and Control
School of Electrical and Computer Engineering
Georgia Institute of Technology

Phone +1 404 894-3484
Fax +1 404 894-4641 (School of ECE)
Office TSRB 436B
School of Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, GA 30332, USA

Sponsored Projects
Autonomy as a Service (2017 - 2020)
Composition of Coordinated Behaviors for Autonomous Vehicle Teams (2017 - 2018)
An Open, Remotely Accessible Testbed for Heterogeneous, Autonomous Teams (2017 - 2020)
Broadband Communication for Heterogeneous Underwater Networks (2016 - 2018)
Secure, Open-Access Multi-Robot Systems (2015 - 2018)
The Robotarium: A Remote-Access Testbed for Robotics Research (2015 - 2019)
Robot Ecologies: Biologically Inspired Heterogeneous Teams (2015 - 2018)

Mutually Stabilized Correction in Physical Demonstration (2013 - 2016)
Motion Coordination and Adaptation Using Deception and Human Interactions (2013 - 2016)
Hybrid Control Tools for Power Management in Cyber-Physical Systems (2012 - 2016)
Cyber-Physical Security in Remote-Access Autonomous Systems (2015)
RoboTools: A Multi-Robot Scripting Language (2015)
Minimal Representation and Reasoning for Networked Autonomous Agents (2012 - 2015)
Heterogeneous Unmanned Networked Teams (2008 - 2014)
Distributed Cyber-Physical Architectures for Green Electricity Networks (2012 - 2014)
Social Swarming Educational Robots (2014)
Low-Cost Educational Robots (2013 - 2014)
Motion Grammar Laboratory (2011 - 2013)
A Unified Algorithmic Framework for the DARPA Robotics Challenge (2012 - 2013)
Pilot Decision Support for Controlling Multiple UAVs (2007 - 2013)
An Educational Infrastructure for Teams of Simulated and Real Mobile Robots (2012 - 2013)
Human-Swarm Interactions for Multi-Robot Teams (2011 - 2012)
Mid-Level Planning and Control for Articulated Locomoting Systems (2011 - 2012)
Puppet Choreography and Automated Marionettes (2008 - 2012)
Abstraction-Based Motion Programs for Complex Systems (2008 - 2011)
ZORRO: The Robotic Fencing System (2009 - 2010)
Reconfigurable Sensor Networks for Fault-Tolerant In-Situ Sampling (2006 - 2009)
Optimal, Multi-Modal Control of Complex Systems (2005 - 2009)
Decentralized Algorithms for Locally Interacting Mobile Robots (2005 - 2008)
Learning Perception, Controllers and Visual Feature Graphs for Ground Robots (2004 - 2008)
Linguistic Control of Mobile Robots (2003 - 2008)
Sting Racing: DARPA Urban Grand Challenge (2006 - 2007)
What are the X's Doing? (2006 - 2007)
Communications in Embedded Control Systems (2002 - 2005)
Hybrid, Data-Driven Control in Electronics Manufacturing (2002 - 2004)


Autonomy as a Service
Sponsor: National Science Foundation
Duration: Aug. 2017 - July 2020

Project description: This proposal investigates the concept of “Autonomy as a Service”, i.e., the ability of a group of autonomous robots, deployed in an environment over a sustained period of time, to be recruited and commanded to perform a wide variety of tasks in response to user specifications, while ensuring that the basic integrity of the system is preserved. We investigate what this concept entails scientifically in the context of sustained and persistent monitoring tasks. To achieve this, three main research themes are pursued: (i) From Specification to Execution: The users must be able to recruit and task the robots with new missions, which calls for formally correct ways of going from high-level specifications to coordinated control programs for the robots to execute. These specifications must moreover be composable in the sense that they can be directly combined with other specifications to support increasingly complex missions. We will pursue this through hierarchical compositions of Signal Temporal Logic formulae for scripting the missions. (ii) Resilient Autonomy: When delivering a system that can be commanded to perform tasks over long periods of time, the first concern must be to preserve the integrity of the system itself, i.e., basic functionality must be ensured even as the robot team is recruited to perform a particular set of tasks. By encoding this consideration as forward invariance properties (if the system starts safe, it stays safe) through constraints obtained from control barrier certificates, the resilience requirements can be guaranteed. (iii) Trajectory-Based Learning from Massive Data Sets: The agent team must be able to assess the performance of whatever it is monitoring. This, in turn, implies that models (as Signal Temporal Logic formulae) must be effectively learned from massive data sets that are appropriately compressed and represented.
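The forward-invariance idea behind control barrier certificates can be illustrated with a minimal sketch. Everything below -- the single-integrator dynamics, the obstacle, the gain, and the closed-form constraint projection -- is an illustrative assumption, not the project's actual formulation: a nominal controller is minimally modified so that a barrier function h stays nonnegative along the trajectory.

```python
import numpy as np

# Safe set: stay outside a disk of radius RADIUS around OBSTACLE.
# (Illustrative values, not taken from the project.)
OBSTACLE = np.array([1.0, 0.0])
RADIUS = 0.5
GAMMA = 1.0  # class-K gain in the barrier condition h_dot >= -GAMMA * h

def barrier(x):
    """h(x) >= 0 defines the safe set."""
    return np.dot(x - OBSTACLE, x - OBSTACLE) - RADIUS**2

def safe_input(x, u_nom):
    """Minimally modify u_nom so that grad(h) @ u >= -GAMMA * h(x).
    With a single constraint, the QP reduces to the projection below."""
    grad = 2.0 * (x - OBSTACLE)
    slack = grad @ u_nom + GAMMA * barrier(x)
    if slack >= 0.0:          # nominal input already satisfies the constraint
        return u_nom
    return u_nom - slack * grad / (grad @ grad)

# Simulate: nominal controller drives straight at a goal behind the obstacle.
x = np.array([-1.0, 0.05])
goal = np.array([3.0, 0.0])
dt = 0.01
min_h = barrier(x)
for _ in range(2000):
    u_nom = goal - x
    x = x + dt * safe_input(x, u_nom)
    min_h = min(min_h, barrier(x))
```

The robot slides around the obstacle rather than cutting through it, and the barrier value never goes negative, which is the "if the system starts safe, it stays safe" property in miniature.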

Composition of Coordinated Behaviors for Autonomous Vehicle Teams
Sponsor: DARPA
Duration: July 2017 - June 2018

Project description: By now, teams of unmanned vehicles have been successfully deployed in the air, on the ground, and under water. But managing such a fleet of unmanned vehicles is an unwieldy task. Going from a high-level mission to a sequence of geometrically defined, coordinated motion primitives is not only a labor- and time-intensive exercise, it also requires additional monitoring and oversight to ensure that the team can effectively transition among the behaviors. Ideally, a system operator who is commanding and controlling a vehicle team should be able to use off-the-shelf coordinated behaviors from a number of sources and instead focus on the mission as well as potentially adapt to changing environmental conditions, rather than spending time and focus on ensuring that low-level constraints are being satisfied by the team. This proposal aims at delivering such a separation of concerns, where it is possible to script and deploy coordinated behaviors for heterogeneous teams of unmanned vehicles, while ensuring that both constraint satisfaction and the validity of behavioral transitions are handled autonomously by the vehicles themselves. To this end, a framework for Composition of Correct-by-Construction Collective Behaviors (C3B) will be developed that supports the idea of Modular Autonomy, where individual autonomous, coordinated behaviors can be designed and tested in isolation, and then subsequently be composed together in a provably correct manner.

DURIP: An Open, Remotely Accessible Testbed for Heterogeneous, Autonomous Teams
Sponsor: ONR
Duration: June 2017 - May 2020

Project description: This DURIP project concerns the development of an open, remotely accessible multi-robot research facility where researchers across the country can prototype, test, and validate algorithms for coordinated autonomy on teams of heterogeneous unmanned systems. Such a facility would provide researchers with access to state-of-the-art equipment and would allow for rapid deployment of control and coordination programs on mixed air-ground robotic teams.

Broadband Communication for Heterogeneous Underwater Networks
Sponsor: ONR
Duration: Aug. 2016 - Dec. 2018

Project description: This proposal will take the next giant step toward true broadband underwater RF two-way transmissions. Based on preliminary modeling and lab experimentation, we will demonstrate new enhanced capabilities with modern broadband digital communication techniques coupled with innovative impedance-matched antennas designed specifically for undersea transmission at up to 1 Mbps data rates. Implementing a hybrid system of EM, acoustics, and optics will fill a much-needed capability gap for the US Navy. For complex missions in unknown environments with multiple dynamic targets, employing teams of undersea assets, both dynamic and static, provides a methodology for autonomous optimization. The proposed system is a fully distributed approach for multiple static and dynamic assets performing multiple tasks in the complex undersea environment.

Secure, Open-Access Multi-Robot Systems
Sponsor: National Science Foundation
Duration: Oct. 2015 - Sept. 2018

Project description: This proposal addresses the safety and security issues that arise when giving users remote access to a multi-robot research test-bed, where mobile robots can coordinate their behaviors in a collaborative manner. Through a public interface, users are able to schedule, and subsequently upload, their own code and run their experiments, while being provided with the scientific data produced through the experiment. Such an open-access framework has the potential to significantly lower the barriers to entry in robotics research and education, yet is inherently vulnerable from a safety and security point-of-view. This proposal aims at the development and definition of appropriate cyber-physical security notions, formal verification algorithms, and safety-critical, real-time control code for teams of mobile robots that will ultimately make such a system both useful and safe. On top of the research developments, this proposal contains a Transition to Practice component that will allow the system to become a highly usable, shared test-bed; one that can serve as a model for other open, remote-access test-beds.

The Robotarium: A Remote-Access Testbed for Robotics Research
Sponsor: National Science Foundation
Duration: Sept. 2015 - Aug. 2019

Project description: In the near future, robot-assisted technologies have the potential to improve individuals' quality of life in almost every aspect of society. However, in order to realize this future, access to instruments that enable discovery in the area of robotics must be ensured. In this regard, this project develops the Robotarium, a world-class, shared multi-robot research and education facility, remotely and locally accessible to users across different educational institutions and age groups. Through an online, open, public interface, users will be able to schedule and run their experiments, while being provided with both streaming video of the experiment as well as the scientific data produced through the experiment. The expected outcome is a first-of-its-kind multi-robot research platform that can be utilized by researchers, educators, and students, without incurring the prohibitive costs associated with setting up and maintaining a suitably equipped research facility. Even more important, however, the expected outcome goes beyond access. A research instrument like the Robotarium has the potential to build stronger networks of collaborative research, thus making the whole significantly larger than the sum of its parts. As such, the end result has the potential to show how remote-access research instruments can be structured in other areas beyond robotics.

Robot Ecologies: Biologically Inspired Heterogeneous Teams
Sponsor: Office for Naval Research
Duration: Apr. 2015 - Mar. 2018

Project description: This project constitutes a systematic investigation of biologically inspired principles for prescribing and analyzing capabilities, roles, and control strategies for different individuals in teams of autonomous agents. In particular, the role of heterogeneity is studied from a biologically inspired vantage-point in order to enable teams to respond to environmental changes in robust, effective, safe, and predictable ways. We propose to shift focus from the traditional coordination problem for networked, autonomous agents -- focusing on how control and coordination strategies should be designed for a particular set of objectives and tasks -- to the question of what these objectives and tasks should be. For example, we know how to do formation control, but what formations should we use in the first place? Similarly, we know how to assign roles to different agents and incorporate heterogeneity into teams. However, we have yet to study which heterogeneous roles are beneficial for a given target application. Under the banners of bio-inspired and bio-mimetic multi-agent robotics, we propose to investigate whether there is an argument to be made for having robots with different capabilities and characteristics, such as slow robots and fast robots, working together in real-world settings.


Mutually Stabilized Correction in Physical Demonstration
Sponsor: National Science Foundation
Duration: Oct. 2013 - Sept. 2016

Project description: As cyber-physical systems emerge around us, meaningful and effective interactions with such systems become increasingly important. Moreover, the physical side of CPS implies that these interactions are not solely to be understood in terms of useful interfaces, but also in terms of shared autonomy, where the embedded software and the human together have to interface directly with the system dynamics. This proposal aims at addressing this issue by balancing the ability of a person to direct a cyber-physical system, typically in terms of infeasible system evolutions, against the system's representation of its own capabilities and limitations. The approach we propose utilizes techniques from control theory, hybrid systems theory, system identification, numerical methods for dynamic systems, machine learning, and learning from demonstration. A key aspect of the proposed work involves developing a formal notion of trust between an embedded system and a human operator that evolves over time based on the performance and robustness of the instruction a human operator provides. Application domains range broadly, from rehabilitation assistive machines, to human-robot interactions during manufacturing, to interconnected systems like air traffic control.

Motion Coordination and Adaptation Using Deception and Human Interactions
Sponsor: AFOSR
Duration: Feb. 2013 - Jan. 2016

Project description: This project investigates fundamental new tools and techniques for how to structure the coordination and control strategies in teams of mobile robots. In particular, two general thrust areas are identified, focusing on human-swarm interactions and deception-based motion control strategies. Although interesting in their own rights, the unifying theme behind these two different thrusts is the notion of intent, where the first thrust, which can be thought of as evolving at a higher level of abstraction, focuses on how user intent can be injected into a network of mobile agents in a fundamentally sound manner. The second thrust, in turn, focuses on how the intent can be hidden in order to produce effective, deception-based coordination and pursuit strategies.

Hybrid Control Tools for Power Management in Cyber-Physical Systems
Sponsor: NSF
Duration: Oct. 2012 - Sept. 2016

Project description: This project strikes a balance between performance considerations and power consumption in cyber-physical systems. In particular, by switching among different modes of operation (e.g., low-power/high-power, on/off, or mobile/static), power maintenance and optimization can be cast as a hybrid optimal control problem. As such, the main theoretical contribution in this proposal is a computational, real-time, hybrid optimal control framework. In order to showcase the applicability of the proposed framework, it is connected to three distinct target applications, namely mobile sensor networks, distributed infrastructure management, and multi-core embedded processors.
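A toy sketch can make the mode-switching formulation concrete. The demand profile, cost weights, switching penalty, and the brute-force search below are all illustrative assumptions standing in for the project's actual hybrid optimal control framework: each step picks an "on" or "off" mode, trading energy against tracking performance, with a cost charged on every mode switch.

```python
import itertools

# Illustrative problem data (not from the project).
DEMAND = [0.0, 1.0, 1.0, 0.2, 1.0, 0.0]  # workload to be served per step
POWER = {"on": 1.0, "off": 0.1}          # energy drawn per step in each mode
SERVICE = {"on": 1.0, "off": 0.0}        # work delivered per step in each mode
SWITCH_COST = 0.3                        # penalty for each mode change

def schedule_cost(modes):
    """Total cost of a mode sequence: energy + tracking loss + switching."""
    cost = 0.0
    for k, m in enumerate(modes):
        cost += POWER[m]                        # energy term
        cost += 2.0 * abs(DEMAND[k] - SERVICE[m])  # performance (tracking) term
        if k > 0 and m != modes[k - 1]:
            cost += SWITCH_COST                 # hybrid switching penalty
    return cost

def best_schedule():
    """Exhaustive search over mode sequences stands in for the solver."""
    return min(itertools.product(("on", "off"), repeat=len(DEMAND)),
               key=schedule_cost)

plan = best_schedule()
```

Because switching itself is penalized, the optimal plan stays off through the brief low-demand dip only when the dip is long or deep enough to pay for two extra switches, which is exactly the kind of trade-off a hybrid optimal control formulation captures.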

Cyber-Physical Security in Remote-Access Autonomous Systems
Sponsor: Institute for Robotics and Intelligent Machines at Georgia Tech
Duration: Jan. 2015 - July 2015

Project description: One reason why autonomous robots are expected to have significant impact is their ability to operate in remote or dangerous areas where human presence may be undesirable, or even impossible. As such, the robots must be remotely accessible by human operators in order to facilitate updates to tasks, missions, programs, or configurations. As a result, significant work has been done on teleoperation and on shared-autonomy strategies that can enable this type of remote-access capability. However, as one grants remote access to physical assets, a whole new set of security issues opens up. How, for example, would one ensure that a new control program is safe? This is not a traditional cyber-security issue, nor is it a traditional robotics or autonomy issue. Instead, it resides in the emerging area of cyber-physical security, which is what this autonomy cluster seed proposal is proposing to investigate in the particular context of autonomous robots.

RoboTools: A Multi-Robot Scripting Language
Sponsor: Georgia Research Alliance
Duration: Feb. 2015 - Aug. 2015

Project description: The RoboTools project is developing a collection of software tools for designing, monitoring, and interfacing with large-scale teams of autonomous robots using proprietary human-robot interaction technologies. The primary application of the tools is a single user configuring coordination of multiple robots to achieve a common goal. The primary goal of the RoboTools project is to assess the viability of a start-up venture based on the design of software tools for multi-robot coordination in the agriculture industry via prototype design, development, and testing. As such, the focus is on technology development and the outcome of the project is a minimum viable product.

Minimal Representation and Reasoning for Networked Autonomous Agents
Sponsor: AFOSR
Duration: May 2012 - Apr. 2015

Project description: This project addresses fundamental issues that arise in information representation architectures for autonomous reasoning and learning, decentralized planning, and decision-making in multiagent systems. The overall goal of the project is to develop efficient and adaptive strategies to process, represent, exchange, and act upon relevant information from massive data collections, much of which can be irrelevant, imprecise, and contradictory. This project takes an ambitious approach to handling the collection, representation, and organization of information. Minimalism is at the core of the technical approach: this idea concerns the proper identification of the information needed to achieve a given task with a desired performance level and provable performance guarantees. Minimal representations involve how appropriate models should be selected, how uncertainty should be managed, and how information should be represented, decomposed, and communicated. Set-based approaches play a key role within this framework, both for performing information decomposition and synchronization in distributed filtering, and for representation in meta-reasoning and coordination.

MURI: Heterogeneous Unmanned Networked Teams
Sponsor: ONR
Duration: Aug. 2008 - Dec. 2014

Project description: Future Naval Combat Operations and Systems will entail small expeditionary forces with light combat ships, high altitude long endurance vehicles, tactical UAVs, and unmanned underwater vehicles, which must monitor and protect large and complex areas continuously. These Heterogeneous Unmanned Networked Teams (HUNT) must be able to search for potential threats, identify them, track them, and take appropriate action to neutralize them. Because of the dynamic nature of the battlefield, HUNT teams must rapidly allocate and task different assets to support time-critical intelligence needs, and re-allocate and retask assets in response to the detection of threats or changes in missions. Pushing the state-of-the-art will require a broader perspective in addressing a variety of hard problems. Sophisticated cooperation mechanisms among intelligent biological organisms, including humans, will offer critical insight and solution templates for many hard engineering problems. To meet the HUNT challenge we have assembled an interdisciplinary team of leading researchers who have pioneered work in artificial intelligence, vehicle control and robotics, cognitive psychology and human factors, biology, and political economics. Due to the emphasis on heterogeneous cooperation, we have not only assembled a team of vehicle engineers with expertise in traditionally separated domains (UGVs, UAVs, UUVs), but we have also assembled a Biological Think Tank, consisting of experts in principles of cooperation in traditionally separated biological domains.

Distributed Cyber-Physical Architectures for Green Electricity Networks
Sponsor: ARPA-E
Duration: Jan. 2012 - Dec. 2014

Project description: This project proposes a comprehensive, backward-compatible, incrementally deployable, and scalable control architecture based on distributed autonomous networked control and the emerging concept of electricity "prosumers" -- economically motivated energy ecosystems that can consume, produce, or store electricity. The architecture's distributed intelligence and innovative control design reduces communications requirements and decentralizes control functions. A "flat" electricity industry emerges wherein the actors expose various services related to individual and wide-area objectives, realizing an ultra-reliable internetwork for energy that will enable penetration of high levels of renewable energy and storage, numerous novel value propositions, and energy innovation. This project will advance the development leading to the demonstration of modules integrated in a cohesive framework: autonomous utility and prosumer energy management software, and the industry services computation infrastructure.

Social Swarming Educational Robots
Sponsor: National Science Foundation
Duration: Jan. 2014 - Dec. 2014

Project description: Through this program, recent research on multi-robot systems will be transitioned onto swarming, social, educational, programmable, robotic toys. The resulting product, the RoboSPRITE, will form and evolve relationships with other robots, which in turn manifest themselves through different types of coordinated interaction maneuvers. These maneuvers are moreover programmable, making the RoboSPRITE a potentially useful platform for learning not just programming, but other aspects of STEM subjects as well. Although some of these aspects are already represented in the marketplace at different levels of sophistication, no existing product combines all of them. The envisioned type of robot -- the RoboSPRITE -- has never existed before and the unique selling point is that the robots remember each other, form "friendships" over time, and adaptively change their behaviors and interaction rules based on the history of their interactions. These interaction rules are moreover programmable, turning the toys into vehicles for learning robotics, coding, mathematics, and even biology in that different types of biological herding, hunting, and foraging strategies can be encoded. As such, broader impact can be found in the educational domain, where children will be exposed to STEM concepts in a playful setting.

Low-Cost Educational Robots
Sponsor: Texas Instruments and Mathworks
Duration: Dec. 2013 - June 2014

Project description: There are certainly plenty of robot platforms out there so an obvious question is: Why do we need yet another one? The answer to this question is price-point. To truly be able to reach a large number of students world-wide, the price needs to come down significantly. And, there are a number of ways in which price can come down. The most important of those is to decide what the robot needs to be able to do well, and what is not necessary. The answer to that question hinges on the context in which the robot will be used. This proposal's perspective is controls education, i.e., robotics as a tool for teaching control design. From a controls vantage point, it is important that highly accurate velocity control is available, both in the translational (speed) and rotational (angular velocity) directions. A differential-drive mechanism is perfect to this end, where two wheel speeds are independently controlled. The reason for this choice is threefold: First, as long as the servo motors have a high-enough resolution, high-fidelity motions can easily be produced. Second, the design is straightforward and the spatial dimensions can be kept small. This is actually very important since we are not intending to sell robots. Instead, we will make the design available open-source, with the expectation that robots can be easily assembled by the users themselves from standard components. Third, differential-drive robots have well-understood models, which makes them particularly useful in controls classes (linear as well as nonlinear). As such, we do not intend to skimp on the mobility part.
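The well-understood differential-drive model the description refers to can be sketched in a few lines: two independently controlled wheel speeds map to a forward speed and a turn rate, which are integrated into a pose. The wheel radius, axle length, and wheel-speed values below are illustrative assumptions, not specifications of the actual robot.

```python
import math

# Illustrative robot geometry (assumed, not the real design).
WHEEL_RADIUS = 0.03   # meters
AXLE_LENGTH = 0.10    # distance between the wheels, meters

def wheel_to_body(omega_left, omega_right):
    """Map wheel angular speeds (rad/s) to body speed v and turn rate w."""
    v = WHEEL_RADIUS * (omega_right + omega_left) / 2.0
    w = WHEEL_RADIUS * (omega_right - omega_left) / AXLE_LENGTH
    return v, w

def step(pose, omega_left, omega_right, dt=0.01):
    """Integrate the unicycle kinematics one Euler step."""
    x, y, theta = pose
    v, w = wheel_to_body(omega_left, omega_right)
    return (x + dt * v * math.cos(theta),
            y + dt * v * math.sin(theta),
            theta + dt * w)

# Equal wheel speeds drive straight ahead; opposite speeds turn in place.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = step(pose, 10.0, 10.0)
```

The clean separation between wheel speeds and the (v, w) body velocities is what makes this model so convenient in both linear and nonlinear controls courses.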

Motion Grammar Laboratory
Sponsor: National Science Foundation
Duration: Jan. 2011 - Dec. 2013

Project description: Reliable robot coworkers will be required to guarantee the completion of task-level objectives. Factory managers that assign skilled humans to mount a flexible cover on a car door have complete confidence that the task will be achieved. Skilled humans assemble the parts even when they have never before seen the particular cover, are unsure of the exact mounting points, and must dig the cover out of a pile of parts. This level of under-specification in the task definition and uncertainty in object positions is beyond the capabilities of existing manipulation planners and control algorithms for robots. Instead of simply grasping and displacing the part from one point to another, humans perform a number of motions such as pushing away obstructing objects, trying initial alignments, re-grasping the part, and test-fitting contacts. Not only do these motions appear arbitrary but sometimes humans even give up on one strategy and restart with another. Yet these motions cannot be arbitrary since the space of all displacements is exponentially large relative to the number of environment objects and human poses. Our theory is that human task-level manipulation has significant structure, and the proposed Motion Grammar Laboratory will support our collaborative effort to uncover it.

A Unified Algorithmic Framework for the DARPA Robotics Challenge
Sponsor: DARPA
Duration: Dec. 2012 - Dec. 2013

Project description: Humanoid robots require a degree of robustness beyond that of most electro-mechanical systems if they are to perform useful tasks in typical human-oriented environments. This project will focus on the development of a robust software architecture for the HUBO 2 robot developed by Rainbow Co., a company licensed by the Korean Advanced Institute of Science and Technology (KAIST). HUBO is becoming an increasingly popular platform for humanoid robotics research, and has many features in common with standard humanoid robots. The concept for this software architecture is meant to be extensible to other robot platforms, even those which are non-humanoid. The cornerstone of this software is distribution of critical and non-critical tasks among independent processes while enabling seamless communication between them.

Pilot Decision Support for Controlling Multiple UAVs
Sponsor: Rockwell Collins, Inc.
Duration: Nov. 2007 - May 2013

Project description: Arguably, the biggest challenge facing the successful deployment of unmanned aerial vehicles (UAVs) in unstructured environments is the level of human involvement needed to carry out the mission. In fact, control and coordination of UAVs typically involve a many-to-one mode of operation in that multiple operators are needed in order to control a single UAV. The explicit purpose of this work is to invert this relationship, i.e., to enable a single pilot to control and coordinate multiple unmanned vehicles. This will allow pilots to operate much more effectively, and will moreover enable them to coordinate and exploit capability synergies between different UAVs to accomplish the mission objectives more effectively. In particular, the following areas will be covered: 1. Methods for evaluating the progression toward the completion of the mission. 2. Automatic ranking of the vehicles by their impact on the mission objectives, if controlled by the pilot. 3. Autonomous coordination of the vehicles. Moreover, the performance of the algorithms will be demonstrated against operational scenarios in a 3D simulation environment.

An Educational Infrastructure for Teams of Simulated and Real Mobile Robots
Sponsor: MathWorks
Duration: May 2012 - Apr. 2013

Project description: This proposal aims at overcoming the inherent tension found between theoretical developments in robotics and their practical application to real hardware systems. In particular, this objective will be achieved through the production of a MATLAB-based infrastructure that supports seamless transitions between simulated robots and real robots, to be used in controls and robotics courses at both the undergraduate and the graduate levels at the Georgia Institute of Technology. When teaching control of mobile robots, two contradictory forces are in play. First, there is always a rather daunting gap between the theoretical developments covered in traditional lecture and textbook settings and the transition of those theoretical tools and techniques into executable code; another, equally daunting, gap can be found when transitioning between simulations and actual hardware platforms. Second, despite these gaps, theoretical concepts and tools are stressed because they are not only inherently useful, but also because without them, control of complex physical systems would not be possible.

Human-Swarm Interactions for Multi-Robot Teams
Sponsor: Robotics and Intelligent Machines at Georgia Tech
Duration: Dec. 2011 - Aug. 2012

Project description: Imagine that you are surrounded by a million robot mosquitos and you have a single joystick that you can use for interacting with the swarm. How should this interaction be structured? This question is at the heart of this proposal, and one can directly identify two naive yet ruefully inappropriate interaction models. The first is to point the joystick at individual mosquitos and drag them, one at a time, to some target location/configuration. This would mean that the operator would have to issue one million instructions, which is clearly not the right way of structuring this interaction. The second approach is to select some virtual point, such as the center of mass of the swarm, and then move that point. This is easy for the operator. Unfortunately, each robot mosquito needs to know where it is relative to the center of mass, which in turn implies that each mosquito needs to know the location of one million mosquitos, which is clearly not scalable or even desirable. The alternative is that the location of the virtual point is broadcast to all mosquitos, which again is not scalable. What this highly simplified thought experiment reveals is that we need appropriate abstractions of the swarm, e.g., into subsets of agents or with respect to the dynamics, that allow for effective interactions. These abstractions must be valid in the sense that they can be controlled, i.e., the operator can use the abstractions to achieve the desired performance, and they must provide the operator with sufficient information about the inner workings of the swarm. Moreover, we need to understand the appropriate communication modalities that humans can use with the swarm (gesture/speech/joystick), and how the swarm can use its decentralized sensors to best perceive input from the human operator.
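One standard way around the broadcast problem in the thought experiment is for each agent to estimate the swarm's center of mass using only neighbor information, via distributed average consensus. The sketch below is an illustrative assumption, not the proposal's method: the ring communication topology, agent count, and step size are all made up, but the mechanism -- every agent repeatedly averages with its neighbors and all estimates converge to the true centroid -- is the scalable abstraction the text calls for.

```python
import numpy as np

# Illustrative setup: N agents at random positions on a ring communication
# graph, each initially "estimating" the centroid as its own position.
rng = np.random.default_rng(0)
N = 20
positions = rng.uniform(-1.0, 1.0, size=(N, 2))
estimates = positions.copy()
EPSILON = 0.4  # consensus step size; must be < 1/2 for a degree-2 ring

def consensus_step(est):
    """One round: each agent moves toward the average of its two neighbors."""
    left = np.roll(est, 1, axis=0)
    right = np.roll(est, -1, axis=0)
    return est + EPSILON * ((left - est) + (right - est))

for _ in range(2000):
    estimates = consensus_step(estimates)

centroid = positions.mean(axis=0)
```

Each agent only ever talks to two neighbors, so the communication load per agent is constant regardless of swarm size, yet every agent ends up with the global centroid -- precisely the kind of valid, controllable abstraction the proposal is after.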

Mid-Level Planning and Control for Articulated Locomoting Systems
Sponsor: DARPA
Duration: Jan. 2011 - Dec. 2012

Project description: Researchers excel at creating demonstrations that work well in controlled laboratory settings. However, the real world has less structure, more uncertainty, and the possibility of dynamically changing conditions. Transitioning demonstrations to this type of environment requires an entirely new set of conceptual and computational techniques and tools. The proposed work will move a class of underactuated mechanical systems, in particular locomoting mechanisms, out of the laboratory and into the real world through new and fundamental contributions to integrated planning and control. The result will be a system that can execute complex motions in cluttered and complicated environments. In fact, the proposed work will constitute fundamental advances in the quest to enable robotic locomotion through environments inhospitable to wheeled vehicles, e.g., confined spaces such as collapsed buildings. These environments inspire other types of mobile mechanisms such as snake robots.

Puppet Choreography and Automated Marionettes
Sponsor: National Science Foundation
Duration: May 2008 - Apr. 2012

Project description: Puppet choreography is a highly developed language for controlling mechanically complex marionettes. It has evolved over centuries into a largely standardized form that allows puppeteers to address issues that arise as a result of the complex systems with which they are working. As such, the standardization of the choreography can be thought of as the puppeteers' response to complexity. The proposed work will focus on understanding how puppeteers address complex tasks in their choreographic descriptions of plays and using that understanding to solve questions of importance to computer science and engineering. These goals will be achieved by creating an automated puppet play, which will use insights about puppet choreography to implement embedded control of mechanically complex marionettes engaged in complex coordination tasks. In order to automate a puppet play, there are three key technical hurdles that must be addressed: real-time embedded motion control; strategically handling the complexity associated with coordination of high degree-of-freedom systems; and computer animation and simulation.

SRS: Abstraction-Based Motion Programs for Complex Mechanical Systems
Sponsor: National Science Foundation
Duration: Aug. 2008 - July 2011

Project description: This research project aims at the development of a systematic approach to abstraction-based motion control of complex, physical systems. In particular, it aims at understanding how high-level motion program languages can be made to form a basis for an effective software system for complex, interconnected mechanical systems. For this, novel research will be conducted along the following directions: (1) Motion Description Languages: We will discuss how to construct adequate motion description languages (MDLs) as well as propose a software engine for parsing and compiling such languages based on optimal control techniques. (2) Abstraction-Based Models of Interconnected Systems: A novel, graph-based representation of mechanical systems will be proposed that allows for a compact representation of mechanical systems for simulation and analysis, making it a key component of the proposed software engine. (3) Motion Primitives from Empirical Data: In order to ensure the expressiveness of the MDLs, it is of paramount importance that the motion primitives are sufficiently rich. As such, we propose a method for automatically obtaining such primitives from example data, e.g. generated by human operators.
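As a rough illustration of the MDL idea (not the project's actual software engine), a motion program can be read as a string of (control law, interrupt) pairs, with each law running until its interrupt condition fires. The scalar dynamics, modes, and thresholds below are invented for the sketch.

```python
# A minimal motion description language (MDL) interpreter: a motion program
# is a list of (control_law, interrupt) pairs; each law drives the system
# until its interrupt fires, then control hands off to the next mode.

def run_mdl(program, x0, dt=0.1, max_steps=10_000):
    """Execute (u, xi) pairs on the scalar system x' = u(x) via Euler steps."""
    x = x0
    for u, xi in program:
        for _ in range(max_steps):
            if xi(x):          # interrupt: hand off to the next mode
                break
            x += dt * u(x)     # step the current control law
    return x

# Motion program: drive forward until x >= 5, then regulate toward x = 4.
program = [
    (lambda x: 1.0,     lambda x: x >= 5.0),
    (lambda x: 4.0 - x, lambda x: abs(x - 4.0) < 1e-3),
]
final = run_mdl(program, x0=0.0)
```

The "parsing and compiling" step mentioned in the description would, in this picture, map symbolic tokens to such (control law, interrupt) pairs, with optimal control used to synthesize the laws themselves.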

ZORRO: The Robotic Fencing System
Sponsor: Robotics and Intelligent Machines at Georgia Tech
Duration: Aug. 2009 - June 2010

Project description: In order to deploy truly safe service robots and flexible automation, robots must be able to interact with humans in effective and meaningful ways. This includes the presence of positive interactions (solving collaborative tasks) and the absence of negative interactions (avoiding human-robot collisions). We propose to examine both of these aspects under a single umbrella, namely human-robot fencing. Although this might sound a bit unusual, it is an application that presents clear challenges for dynamic interactions between robots and humans: (1) Prediction of human intentions and (2) Real-time robot response. These goals will be accomplished with online estimation and hybrid control.

Reconfigurable Sensor Networks for Fault-Tolerant In-Situ Sampling
Sponsor: NASA
Duration: Sep. 2006 - Aug. 2009

Project description: The goal of this project is to develop and validate the core technologies needed to enable reconfigurable sensor networks for fault-tolerant in-situ sampling for Earth science applications. The key technologies, which build on prior work done by the proposers, focus on science-driven sensor network diagnosis and topological reconfiguration of sensor networks. Control of reconfigurable sensor networks is a fundamentally difficult problem in which the system must balance power usage, communication versus control, and the effectiveness of adapting to the environment as well as to changing science requirements. These issues generally arise due to the limited perception, precision, and range constraints on the communication channels that comprise the network. Diagnosis involves identifying and communicating necessary changes in network topology required to achieve science goals and compensate for sensor failure or communication dropouts. Reconfiguration involves physically reconfiguring the network topology based on input from the diagnostic process, in effect establishing a self-adapting sensor network. The novelty of our approach lies in its focus on decentralized, rather than centralized, control, in which interactions between sensor nodes are modeled topologically and manipulated locally to produce desired global behavior. These technologies will be integrated and demonstrated using a network of mobile sensors applied to a representative Earth science investigation.

CSR-EHS: Optimal, Multi-Modal Control of Complex Systems
Sponsor: National Science Foundation
Duration: Aug. 2005 - Aug. 2009

Project description: The technological frontier and performance barrier for control and management of many present-day engineering systems lie in their complexity. This complexity is often due to the high dimensionality and distributed nature of the systems (e.g., manufacturing and transportation systems), the absence of adequate models of the environment in which they operate (autonomous mobile robots), or the vast amounts of data required for their control (airport security systems). Consequently, complexity management has become an essential part of control systems design. An emerging approach to controlling complex systems consists of a decomposition of the control actions into a sequence of modes, each of which is defined for a particular task, operating point, or data source. This results in a hierarchical control structure with an event-driven supervisory control at the higher layer and a time-driven control at the lower layer. Whereas the time-driven feedback laws can be designed by standard control-engineering techniques, the problem of optimally designing the supervisory controllers is by-and-large open due to its inherent high complexity. The central question is how to schedule the various modes in order to optimize the system's performance. Related questions concern the development of real-time algorithms for performance improvement (since optimality may be unrealistic in real time), and the tradeoff between the size of the mode set (complexity) and the system's performance (expressiveness).
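The mode-scheduling question can be made concrete with a toy switching-time problem: fix the mode sequence and optimize the single switch time against a running cost. The dynamics, cost, horizon, and grid search below are purely illustrative and are not the project's algorithms.

```python
# Toy switching-time optimization for a two-mode hybrid system:
# x' = +1 (drive) for t < tau, then x' = -x (decay) afterwards.
# We pick the switch time tau that minimizes the tracking cost
# J(tau) = integral over [0, T] of (x(t) - 1)^2 dt.

def simulate_cost(tau, T=2.0, dt=0.001):
    """Simulate the switched system and accumulate the running cost."""
    x, t, J = 0.0, 0.0, 0.0
    while t < T:
        u = 1.0 if t < tau else -x   # active mode depends on the switch time
        J += (x - 1.0) ** 2 * dt     # running tracking cost
        x += u * dt                  # Euler step
        t += dt
    return J

# Crude grid search over the single switching time on [0, 2].
best_tau = min([k * 0.01 for k in range(201)], key=simulate_cost)
```

With many modes and many switches this brute-force search explodes, which is precisely the complexity barrier the project targets; gradient formulas for the cost with respect to the switching times are one way to scale such computations.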

Decentralized Algorithms for Locally Interacting Mobile Robots
Sponsor: US Army Research Office
Duration: Aug. 2005 - Nov. 2008

Project description: The overall objective of this research is to develop communication and control strategies for teams of multiple mobile robots whose knowledge about the environment is constrained. These constraints are generated both from limited perception capabilities and from precision and range limitations on the inter-robot communication channels. In particular, it will be investigated how changes in the environment can be dealt with in a decentralized yet predictable manner through local rules. This endeavor is driven by a unifying theme in which a number of questions are formulated and solved, ranging from "What global mode of operation is most suited to the current situation?" to "How can this be realized in a systematic manner in the absence of global information?". The effort is mainly focused on the following key areas: 1) Autonomous Formation Selection. 2) Control and Communication Strategies for Local Interactions.

LAGR: Learning Perception, Controllers and Visual Feature Graphs for Ground Robots
Sponsor: DARPA
Duration: Oct. 2004 - Jan. 2008

Project description: Ground robots should learn from their own experience, and learn from human example. Our research will exploit both opportunities for learning using novel algorithms to learn perception and control. As a baseline framework, we will build a reliable, modular system using well understood perception and control components. Novel learning algorithms will be integrated using a "plug-in" approach to facilitate comparative evaluation. Our main innovative ideas include: 1) Statistical classification and grouping of 3D obstacles and terrain from monocular images. 2) Learning and using visual feature graphs for navigation. 3) Learning controllers from example and experience. 4) Learning statistical models of 3D terrain types and obstacle types.

CAREER: Linguistic Control of Mobile Robots
Sponsor: National Science Foundation
Duration: Feb. 2003 - Jan. 2008

Project description: When humans instruct each other how to carry out particular tasks, only a limited number of tokenized (linguistic) instructions are used. In contrast to this, classic control theory specifies a control action to be carried out at each time instant. But in a number of applications, such as semi-autonomous service robots for industrial and domestic use, intelligent appliances, and communication-constrained embedded and/or teleoperated devices, the control procedures have a natural, linguistic flavor. The interpretation here is that linguistic control instructions specify particular modes of operation rather than explicit control values. The overall objective of this research is to develop and disseminate rational methods for understanding how continuous devices, such as mobile robots, should be controlled using computer-generated, linguistic inputs. In particular, it will be studied how these instructions should be defined, selected, and coded in order to minimize the number of bits transmitted from the computer to the robot, while guaranteeing that the system meets its specifications. For this, an information-theoretic approach to control theory will be developed, serving as a useful tool not only for source coding of control signals, but also for describing how symbolic instructions should be interpreted and operated on by the continuous systems. Questions concerning what sensors and actuators to use in a given robotic application can be addressed quite elegantly within this framework as well.

Sting Racing: DARPA Urban Grand Challenge
Sponsor: GT CoC, GT ECE, GTRI, SAIC, Telcordia, Duke, Sun Microsystems, Mobile Intelligence
Duration: Jul. 2006 - Nov. 2007

Project description: DARPA's Urban Challenge tests the ability of competing autonomous robots to drive 60 miles in an urban setting in six hours or less. The vehicles must obey the rules of the road and safely interact with other robot vehicles and other cars driven by people on the course. The competition is scheduled for November 3, 2007. Cash prizes will be awarded to the top three finishers, including $2,000,000 for first place. To win, the driverless robots must have the ability to sense and react to the urban traffic environment, including lane markings, intersections, other vehicles, and unexpected road blockages. The robot vehicles must also have the intelligence to select the best route and to decide which vehicle has the right-of-way in normal traffic.

What are the X's Doing?
Sponsor: Robotics and Intelligent Machines at Georgia Tech
Duration: Aug. 2006 - Jun. 2007

Project description: Given empirical examples, obtained for example from biological data or remotely controlled mechanisms, how can mobile robots learn what behaviors to use and when to use them, in order to faithfully reproduce, as well as generalize, the example behaviors? In fact, a basic assumption underlying much of the research in behavior-based robotics is that the modularization of the navigation system into atomic building blocks, or behaviors, is intrinsically sound. In other words, not only does this modularization allow for conceptual complexity reductions, but it is present also in naturally occurring systems. For instance, there is ample evidence that biological navigation systems are modularized as well, in that more complex animal (and human) behaviors can be composed from atomic behaviors. Through this project, we intend to make this assumption explicit by generating atomic control laws, or behaviors, from biological data. Moreover, conditions for when to trigger transitions from one behavior to another will be obtained in an automatic fashion, with the result of producing high-level hybrid automata, containing both continuous dynamics (different behaviors) and discrete logic (transition rules), from the example data.
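The kind of object this project aims to extract from data, a hybrid automaton pairing continuous behaviors with discrete transition rules, can be sketched as follows. The behaviors ("work"/"charge") and guard thresholds are invented for illustration; in the project, both would be learned from example data rather than hand-coded.

```python
# A hand-coded hybrid automaton: atomic behaviors (continuous dynamics)
# plus guards (discrete transition rules). State is (position, battery).

def step_work(state):                # behavior 1: make progress, drain battery
    x, b = state
    return (x + 0.1, b - 0.05)

def step_charge(state):              # behavior 2: stand still, recharge
    x, b = state
    return (x, min(1.0, b + 0.1))

behaviors = {"work": step_work, "charge": step_charge}
guards = {                           # mode -> rule mapping state to next mode
    "work":   lambda s: "charge" if s[1] < 0.2 else "work",
    "charge": lambda s: "work" if s[1] > 0.9 else "charge",
}

def run(mode, state, steps):
    """Execute the automaton, recording the sequence of visited modes."""
    trace = [mode]
    for _ in range(steps):
        state = behaviors[mode](state)
        nxt = guards[mode](state)
        if nxt != mode:
            trace.append(nxt)
            mode = nxt
    return state, trace

(final_x, final_b), trace = run("work", (0.0, 1.0), 200)
```

The learning problem the description poses is the inverse of this sketch: given only trajectories like the one this automaton generates, recover both the atomic behaviors and the guard conditions.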

Communications in Embedded Control Systems
Sponsor: National Science Foundation
Duration: Aug. 2002 - Jul. 2005

Project description: Information sharing in embedded, decentralized control systems is the general area of research that will be pursued in this project. In particular, it will be investigated how many bits of information need to be transmitted between different embedded software components in order to make the physical system meet its specifications. For instance, autonomous robotic systems normally rely on a variety of heterogeneous sensors, and the question then becomes: which of the sensors do we need? And furthermore, is it possible to compress the data in a systematic way, e.g. using virtual sensors, so that information that is not essential to the current task, or mode of operation, can be discarded? Since the sensory data needed for accomplishing a certain task depends on the control law, the dynamics of the system, and the complexity of the task, the use of information-theoretic tools for embedded, multi-modal control design gives us the means to investigate questions concerning sensor and actuator selection and control mode design in a unified way. The goal of this project is in particular to carry out the following four programs: 1) Model the information-theoretic content of the symbolic, computer-generated inputs used for controlling continuous, mechanical devices. 2) Select embedded sensors and actuators that make the control system meet its specifications in the presence of bandwidth constraints. 3) Develop coding strategies for compressing the sensory information based on what control application is to be carried out. 4) Apply the theory to the control of multiple autonomous, intelligent robots.

Hybrid, Data-Driven Control in Electronics Manufacturing
Sponsor: Georgia Tech Manufacturing Research Center
Duration: Jun. 2002 - May 2004

Project description: Closed-loop control of the CBAR SMT-process is an effort that has been sustained over a number of years. This research defines a natural continuation of that work, and the control research aims at the development of robust, data-driven, adaptive, closed-loop control algorithms for the SMT-process. In particular, by closing the loop over the system, the available measurements can be used for adaptively changing the machine parameters in order to achieve good performance on a board-to-board basis. This is an important objective since the dynamics governing the stencil printer are different depending on the direction of the squeegee, which stresses the fact that the machine parameters have to change in between individual boards. Our current research in this area focuses on the generation of provably convergent iterative algorithms for generating optimal machine parameter sequences in the presence of high noise levels, unknown machine dynamics, and aggressive transient responses. In fact, locally defined closed-loop controllers have been developed, with promising results from both an experimental and a theoretical point of view, that change the squeegee speed and pressure in response to varying solder brick heights.