Robotics Seminar @ Illinois

The Illinois Robotics Group is proud to host the Robotics Seminar @ Illinois Series. These seminars feature a diverse lineup of speakers, reflecting the interdisciplinary nature of the field of robotics.

We host speakers from departments across campus who conduct robotics research at Illinois. Talks are given by both professors and students each week, with occasional demonstrations afterwards in the Intelligent Robotics Lab.

Talks are held virtually through Zoom at 1 pm on Fridays, but may return to the CSL Studio conference room (1232), just west of the Intelligent Robotics Lab facilities, in the future.

Talks are Resuming Remotely this Semester 

Please Feel Free to Recommend Speakers for Future Talks

If you have comments or questions on the IRG Seminar Series, please feel free to contact John M. Hart, jmhart3@illinois.edu, Manager and Laboratory Coordinator of the CSL Shared Robotics Laboratories.


04/29/22 – Guest Talk

Title: Distributed Perception and Learning Between Robots and the Cloud

Speaker: Dr. Sandeep Chinchali, The University of Texas at Austin

Abstract: Augmenting robotic intelligence with cloud connectivity is considered one of the most promising solutions for coping with growing volumes of rich robotic sensory data and increasingly complex perception and decision-making tasks. While the benefits of cloud robotics have long been envisioned, there is still a lack of flexible methods to trade off the benefits of cloud computing against the end-to-end system costs of network delay, cloud storage, human annotation time, and cloud-computing time. To address this need, I will introduce decision-theoretic algorithms that allow robots to significantly transcend their on-board perception capabilities by using cloud computing, but in a low-cost, fault-tolerant manner. The utility of these algorithms will be demonstrated on months of field data and on experiments with state-of-the-art embedded deep learning hardware.

Specifically, for compute-and-power-limited robots, I will present a lightweight model selection algorithm that learns when a robot should exploit low-latency on-board computation, or, when highly uncertain, query a more accurate cloud model. Then, I will present a collaborative learning algorithm that allows a diversity of robots to mine their real-time sensory streams for valuable training examples to send to the cloud for model improvement. I will conclude this talk by describing my group’s research efforts to co-design the representation of rich robotic sensory data with networked inference and control tasks for concise, task-relevant representations.
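The query-the-cloud-when-uncertain idea can be sketched as a simple gating rule: run the cheap on-board model, and offload to the more accurate cloud model only when the on-board prediction is too uncertain. A minimal sketch, assuming an entropy-based uncertainty test and an illustrative threshold (not the speaker's actual algorithm):

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a discrete class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_model(onboard_probs, threshold=0.5):
    """Keep computation local when the on-board model is confident;
    pay the network cost and query the cloud when it is uncertain.
    The threshold is a made-up illustration value."""
    return "cloud" if entropy(onboard_probs) > threshold else "onboard"

# Confident on-board prediction -> stay on-board.
print(select_model([0.95, 0.03, 0.02]))
# Near-uniform prediction -> query the cloud.
print(select_model([0.4, 0.35, 0.25]))
```

In practice the gate would also weigh latency and bandwidth budgets, but the structure of the decision is the same.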

Bio: Sandeep Chinchali is an assistant professor in UT Austin’s ECE department and Robotics Consortium. He completed his PhD in computer science at Stanford, working on distributed perception and learning between robots and the cloud. Previously, he was the first principal data scientist at Uhana, Inc. (acquired by VMware), a Stanford startup working on data-driven optimization of cellular networks. Prior to Stanford, he graduated from Caltech, where he worked on robotics at NASA’s Jet Propulsion Lab (JPL). His paper on cloud robotics was a finalist for best student paper at Robotics: Science and Systems, and his research has been funded by Cisco, NSF, the Office of Naval Research, and Lockheed Martin.

04/22/22 – Guest Talk

Title: Learning to Walk via Rapid Adaptation

Speaker: Ashish Kumar, Ph.D. Student, University of California, Berkeley

Abstract: Legged locomotion is commonly studied and programmed as a discrete set of structured gait patterns, like walk, trot, and gallop. However, studies of children learning to walk (Adolph et al.) show that real-world locomotion is often quite unstructured and more like “bouts of intermittent steps”. We have developed a general approach to walking which is built on learning on varied terrains in simulation and then fast online adaptation (fractions of a second) in the real world. This is made possible by our Rapid Motor Adaptation (RMA) algorithm. RMA consists of two components: a base policy and an adaptation module, both of which can be trained in simulation. We thus learn walking policies that are much more flexible and adaptable. In our setup, gaits emerge as a consequence of minimizing energy consumption at different target speeds, consistent with various animal motor studies.
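The two-component structure described above can be caricatured in a few lines: an adaptation module summarizes recent observations into an estimate of the environment "extrinsics", which the base policy then consumes. A toy sketch, in which a running average over observed wheel/foot slip and a gain-scheduled command stand in for the learned networks of RMA:

```python
from collections import deque

class AdaptationModule:
    """Estimates a latent extrinsics value (e.g. effective traction)
    from a short history of observations. The real RMA module is a
    learned network; a running average is an illustrative stand-in."""
    def __init__(self, horizon=5):
        self.history = deque(maxlen=horizon)

    def update(self, observed_traction):
        self.history.append(observed_traction)
        return sum(self.history) / len(self.history)

class BasePolicy:
    """Toy base policy: scales a nominal command by the extrinsics
    estimate (made-up control law for illustration)."""
    def act(self, target_speed, extrinsics):
        return target_speed / max(extrinsics, 1e-3)

policy, adapter = BasePolicy(), AdaptationModule()
for traction in [1.0, 0.8, 0.6, 0.6, 0.6]:   # terrain becomes slipperier
    z = adapter.update(traction)
command = policy.act(target_speed=1.0, extrinsics=z)
print(round(z, 2), round(command, 2))
```

The key point mirrored here is that both components operate online from observable quantities, so no real-world fine-tuning loop is needed.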

You can see our robot walking here. The project page is here.

04/15/22 – Guest Talk

Title: Trust in Multi-Robot Systems and Achieving Resilient Coordination

Speaker: Dr. Stephanie Gil, Harvard University

Abstract: Our understanding of multi-robot coordination and control has advanced to the point where deploying multi-robot systems in the near future seems a feasible reality. However, many of these algorithms are vulnerable to non-cooperation and/or malicious attacks that limit their practicality in real-world settings. An example is the consensus problem, where classical results hold that agreement cannot be reached when malicious agents make up more than half of the network connectivity; this quickly leads to limitations in the practicality of many multi-robot coordination tasks. However, with the growing prevalence of cyber-physical systems come novel opportunities for detecting attacks by cross-validation with physical channels of information. In this talk, we consider the class of problems where the probability of a particular (i,j) link being trustworthy is available as a random variable. We refer to these as “stochastic observations of trust.” We show that under this model, strong performance guarantees such as convergence for the consensus problem can be recovered, even when the number of malicious agents is greater than half of the network connectivity and consensus would otherwise fail. Moreover, under this model we can reason about the deviation from the nominal (no-attack) consensus value and the rate of achieving consensus. Finally, we make the case for the importance of deriving such stochastic observations of trust for cyber-physical systems, and we demonstrate one such example for the Sybil attack that uses wireless communication channels to arrive at the desired observations of trust. In this way, our results demonstrate the promise of exploiting trust to provide a novel perspective on achieving resilient coordination in multi-robot systems.
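The intuition behind trust-weighted consensus can be sketched in a few lines: each neighbor's reported value is weighted by the trust placed in its link, so a low-trust (possibly malicious) neighbor contributes little to the update. The update rule and numbers below are illustrative only, not the talk's exact algorithm:

```python
def trusted_consensus_step(values, trust):
    """One trust-weighted consensus update: a weighted average of the
    neighbors' reported values, with weights given by per-link trust.
    Illustrative sketch, not the paper's update law."""
    num = sum(t * v for t, v in zip(trust, values))
    den = sum(trust)
    return num / den

values = [1.0, 1.2, 0.9, 100.0]   # agent 3 reports a wildly wrong value
trust  = [1.0, 0.9, 0.9, 0.01]    # its link has very low observed trust
x_new = trusted_consensus_step(values, trust)
print(round(x_new, 3))            # stays near the honest agents' values
```

With uniform weights the outlier would drag the average to about 25.8; the trust weighting keeps it near 1.4.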

Speaker’s Bio: Stephanie is an Assistant Professor in the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University. Her work centers on trust and coordination in multi-robot systems, for which she has received the Office of Naval Research Young Investigator award (2021) and the National Science Foundation CAREER award (2019). She was also selected as a 2020 Sloan Research Fellow for her contributions at the intersection of robotics and communication. She held a Visiting Assistant Professor position at Stanford University during the summer of 2019, and an Assistant Professorship at Arizona State University from 2018 to 2020. She completed her Ph.D. work (2014) on multi-robot coordination and control and her M.S. work (2009) on system identification and model learning. At MIT she collaborated extensively with the wireless communications group NetMIT, which resulted in two recently awarded U.S. patents on adaptive heterogeneous networks for multi-robot systems and accurate indoor positioning using Wi-Fi. She completed her B.S. at Cornell University in 2006.

04/08/22 – Student Talks

Title: Underwater Vehicle Navigation and Pipeline Inspection using Fuzzy Logic

Speaker: I-Chen Sang, University of Illinois at Urbana-Champaign

Abstract: Underwater pipeline inspection is becoming a crucial topic in the offshore subsea inspection industry. ROVs (Remotely Operated Vehicles) can play an important role in fields such as the military, ocean science, aquaculture, shipping, and energy. However, using ROVs for inspection is not cost-effective, and the fixed leak-detection sensors mounted along the pipeline have limited precision. We therefore propose a navigation system using an AUV (Autonomous Underwater Vehicle) to increase the positional resolution of leak detection and lower the inspection cost. In a ROS/Gazebo-based simulation environment, we navigated the AUV with a fuzzy controller that takes as input navigation errors derived from both camera and sonar sensors. When released away from the pipeline, the AUV is able to navigate towards the pipeline and then cruise along it. Additionally, with a chemical concentration sensor mounted on the AUV, it demonstrated the capability to complete a pipeline inspection and report the leak point.

Speaker’s Bio: I am a Ph.D. student in the Department of Industrial and Systems Engineering and have worked in the AUVSL since January 2021. I hold B.Sc. and M.Sc. degrees in physics and had five years of work experience in the defense industry before joining the University of Illinois. My current concentration is in systems design and manufacturing. The focus of my research is on perception algorithm development for autonomous vehicles. I am currently working on ground-vehicle lane detection using adaptive thresholding algorithms.

Title: A CNN Based Vision-Proprioception Fusion Method for Robust UGV Terrain Classification

Speaker: Yu Chen, University of Illinois at Urbana-Champaign

Abstract: The ability of ground vehicles to identify terrain types and characteristics can help provide more accurate localization and information-rich mapping solutions. Previous studies have shown the possibility of classifying terrain types based on proprioceptive sensors that monitor wheel-terrain interactions. However, most methods only work well when very strict motion restrictions are imposed, including driving in a straight path at constant speed, making them difficult to deploy on real-world field robotic missions. To lift this restriction, this paper proposes a fast, compact, and motion-robust proprioception-based terrain classification method. The method uses common on-board UGV sensors and a 1D Convolutional Neural Network (CNN) model. Its accuracy was further improved by fusing it with a vision-based CNN that classifies terrain based on its appearance. Experimental results indicated that the final fusion models were highly robust, achieving over 93% accuracy under various lighting conditions and motion maneuvers.
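The complementary nature of the two classifiers can be illustrated with the simplest possible fusion scheme: a weighted average of the two models' class-probability outputs. The paper fuses the CNNs themselves; averaging their output distributions is a simpler stand-in for illustration, and the class names and numbers are hypothetical:

```python
def fuse_predictions(vision_probs, proprio_probs, w_vision=0.5):
    """Late fusion of two terrain classifiers: weighted average of their
    class-probability outputs, then argmax (illustrative sketch only)."""
    fused = [w_vision * v + (1 - w_vision) * p
             for v, p in zip(vision_probs, proprio_probs)]
    return max(range(len(fused)), key=fused.__getitem__)

classes = ["grass", "gravel", "asphalt"]
vision  = [0.30, 0.40, 0.30]   # poor lighting: vision is unsure
proprio = [0.05, 0.85, 0.10]   # wheel vibration clearly indicates gravel
print(classes[fuse_predictions(vision, proprio)])
```

When lighting degrades the vision stream, the proprioceptive distribution dominates the fused decision, which is the robustness argument the abstract makes.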

Speaker’s Bio: I am a Ph.D. student at the University of Illinois majoring in Mechanical Science and Engineering, working under the guidance of Prof. William Robert Norris and Prof. Elizabeth T. Hsiao-Wecksler. I gained my fair share of knowledge in manufacturing, mechanical design, and structural analysis during my undergraduate days at Michigan State. For my graduate studies, I am focusing my interest and energy on robot perception and dynamic control. Currently, I am working on developing efficient CNN fusion models to help robots gain higher accuracy and robustness when classifying terrain types and detecting obstacles.

 

04/01/22 – Guest Talk

Title: Bridging the Gap Between Safety and Real-Time Performance during Trajectory Optimization: Reachability-based Trajectory Design

Speaker: Ram Vasudevan, University of Michigan

Abstract: Autonomous systems offer the promise of greater safety and access. However, this positive impact will only be achieved if the underlying algorithms that control such systems can be certified to behave robustly. This talk describes a technique called Reachability-based Trajectory Design, which constructs a parameterized representation of the forward reachable set that it then uses, in concert with predictions, to enable real-time, certified collision checking. The approach, which is guaranteed to generate not-at-fault behavior, is demonstrated on a variety of real-world platforms, including ground vehicles, manipulators, and walking robots.
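The underlying safety check (does an over-approximation of the forward reachable set intersect any obstacle?) can be illustrated with one-dimensional interval arithmetic. This is a toy stand-in for the talk's parameterized reachable sets; the dynamics and numbers are hypothetical:

```python
def reachable_interval(x0, v_min, v_max, t):
    """Over-approximate the 1-D forward reachable set of a vehicle whose
    speed is only known to lie in [v_min, v_max] (interval arithmetic)."""
    return (x0 + v_min * t, x0 + v_max * t)

def certified_safe(reach, obstacle):
    """Certified safe iff the reachable interval and the obstacle
    interval are disjoint; overlap means safety cannot be certified."""
    (lo, hi), (olo, ohi) = reach, obstacle
    return hi < olo or lo > ohi

reach = reachable_interval(x0=0.0, v_min=0.8, v_max=1.2, t=2.0)
print(reach, certified_safe(reach, obstacle=(3.0, 4.0)))  # disjoint: safe
print(certified_safe(reach, obstacle=(2.0, 3.0)))         # overlap: reject
```

Because the reachable set is an over-approximation, a "safe" verdict is conservative: the real system can never collide in a plan the check accepts, which is the certification property the talk emphasizes.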

Speaker’s Bio: Ram Vasudevan is an assistant professor in Mechanical Engineering and the Robotics Institute at the University of Michigan. He received a BS in Electrical Engineering and Computer Sciences, an MS in Electrical Engineering, and a PhD in Electrical Engineering, all from the University of California, Berkeley. He is a recipient of the NSF CAREER Award, the ONR Young Investigator Award, and the 1938E Award. His work has received best paper awards at the IEEE Conference on Robotics and Automation, the ASME Dynamic Systems and Control Conference, and the IEEE OCEANS Conference, and has been a finalist for best paper at Robotics: Science and Systems.

03/25/22 – Guest Talk

Title: Toward the Development of Highly Adaptive Legged Robots

Speaker: Quan Nguyen, University of Southern California

Abstract: Deploying legged robots in real-world applications will require fast adaptation to unknown terrain and model uncertainty. Model uncertainty can come from unknown robot dynamics, external disturbances, interaction with other humans or robots, or unknown parameters of contact models or terrain properties. In this talk, I will first present our recent work on adaptive control and adaptive safety-critical control for legged locomotion under substantial model uncertainty. In these results, we focus on the application of legged robots walking on rough terrain while carrying a heavy load. I will then present our work on trajectory optimization that allows legged robots to adapt to a wide variety of challenging terrain. The talk will also discuss the combination of control, trajectory optimization, and reinforcement learning toward achieving long-term adaptation in both control actions and trajectory planning for legged robots.

Speaker’s Bio: Quan Nguyen is an Assistant Professor of Aerospace and Mechanical Engineering at the University of Southern California. Prior to joining USC, he was a Postdoctoral Associate in the Biomimetic Robotics Lab at the Massachusetts Institute of Technology (MIT). He received his Ph.D. from Carnegie Mellon University (CMU) in 2017 with the Best Dissertation Award.

His research interests span control and optimization approaches for highly dynamic robotics, including nonlinear control, trajectory optimization, real-time optimization-based control, and robust and adaptive control. His work on the bipedal robot ATRIAS walking on stepping stones was featured in IEEE Spectrum, TechCrunch, TechXplore, and Digital Trends. His work on the MIT Cheetah 3 robot leaping onto a desk was featured widely in major media channels, including CNN, BBC, NBC, and ABC. Nguyen won Best Presentation of the Session at the 2016 American Control Conference (ACC) and was a Best Systems Paper finalist at the 2017 Robotics: Science and Systems conference (RSS). He is a recipient of the 2020 Charles Lee Powell Foundation Faculty Research Award.

03/11/22 – Guest Talk

Title: Developing and Deploying Situational Awareness in Autonomous Robotic Systems

Speaker: Philip Dames, Temple University

Abstract: Robotic systems must possess sufficient situational awareness in order to successfully operate in complex and dynamic real-world environments, meaning they must be able to perceive objects in their surroundings, comprehend their meaning, and predict the future state of the environment. In this talk, I will first describe how multi-target tracking (MTT) algorithms can provide mobile robots with this awareness, including our recent results that extend classical MTT approaches to include semantic object labels. Next, I will discuss two key applications of MTT to mobile robotics. The first problem is distributed target search and tracking. To solve this, we develop a distributed MTT framework, allowing robots to estimate, in real time, the relative importance of each portion of the environment, and dynamic tessellation schemes, which account for uncertainty in the pose of each robot, provide collision avoidance, and automatically balance task assignment in a heterogeneous team. The second problem is autonomous navigation through crowded, dynamic environments. To solve this, we develop a novel neural network-based control policy that takes as its input the target tracks from an MTT, unlike previous approaches which only rely on raw sensor data. We show that our policy, trained entirely in one simulated environment, generalizes well to new situations, including a real-world robot.
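At the heart of any multi-target tracker is data association: deciding which detection corresponds to which existing track. A minimal greedy nearest-neighbor association with gating, far simpler than the semantic MTT methods in the talk and with made-up coordinates, gives the flavor:

```python
import math

def associate(tracks, detections, gate=1.0):
    """Greedy nearest-neighbour data association: match each detection
    to the closest unmatched track within a gating radius. Detections
    with no track inside the gate are left unmatched (new objects)."""
    pairs, used = {}, set()
    for d_idx, d in enumerate(detections):
        best, best_dist = None, gate
        for t_idx, t in enumerate(tracks):
            if t_idx in used:
                continue
            dist = math.dist(t, d)
            if dist < best_dist:
                best, best_dist = t_idx, dist
        if best is not None:
            pairs[d_idx] = best
            used.add(best)
    return pairs

tracks = [(0.0, 0.0), (5.0, 5.0)]
detections = [(4.8, 5.1), (0.2, -0.1), (9.0, 9.0)]  # last one: new object
print(associate(tracks, detections))
```

Real MTT algorithms replace this greedy rule with probabilistic association and manage track birth and death, but the matching problem sketched here is the common core.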

Speaker’s Bio: Philip Dames is an Assistant Professor of Mechanical Engineering at Temple University, where he directs the Temple Robotics and Artificial Intelligence Lab (TRAIL). Prior to joining Temple, he was a Postdoctoral Researcher in Electrical and Systems Engineering at the University of Pennsylvania. He received his PhD in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania in 2015 and his BS and MS degrees in Mechanical Engineering from Northwestern University in 2010.


03/04/22 – Student Talks

Title: GRILC: Gradient-based Reprogrammable Iterative Learning Control for Autonomous Systems 

Speaker: Kuan-Yu Tseng, University of Illinois at Urbana-Champaign

Abstract: We propose a novel gradient-based reprogrammable iterative learning control (GRILC) framework for autonomous systems. Trajectory-following performance in autonomous systems is often limited by mismatch between the complex actual model and the simplified nominal model used in controller design. To overcome this issue, we develop the GRILC framework with offline optimization, using information from the nominal model and the actual trajectory, followed by online system implementation. In addition, a partial and reprogrammable learning strategy is introduced. The proposed method is applied to an autonomous time-trialing example, and the learned control policies can be stored in a library for future motion planning. Simulation and experimental results illustrate the effectiveness and robustness of the proposed approach.
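The idea of iterating trials against a mismatched nominal model can be shown with a scalar caricature: the update uses the gradient of the squared tracking error under the nominal model (gain 1.0), yet still converges on the "actual" plant (gain 0.8). This is an illustrative sketch of gradient-based ILC in general, not the GRILC algorithm itself; all constants are hypothetical:

```python
def run_trial(u, actual_gain=0.8, reference=1.0):
    """Execute one trial on the 'actual' plant; return tracking error."""
    return reference - actual_gain * u

def ilc_update(u, error, step=0.5, nominal_gain=1.0):
    """Gradient-style iterative learning update of the feedforward
    input, computed with the *nominal* model's gain."""
    return u + step * nominal_gain * error

u, errors = 0.0, []
for _ in range(10):
    e = run_trial(u)
    errors.append(abs(e))
    u = ilc_update(u, e)
print(round(errors[0], 3), round(errors[-1], 4))  # error shrinks each trial
```

Here the error contracts by a factor of 0.6 per trial despite the 20% model mismatch, which is the robustness property ILC schemes rely on.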

Speaker’s Bio: Kuan-Yu Tseng is a third-year Ph.D. student in Mechanical Engineering at UIUC, advised by Prof. Geir Dullerud. He received M.S. and B.S. degrees in Mechanical Engineering from National Taiwan University in 2019 and 2017, respectively. His research interests include control and motion planning in autonomous vehicles and robots.

Title: Pedestrian trajectory prediction meets social robot navigation

Abstract: Multi-pedestrian trajectory prediction is an indispensable element of safe and socially aware robot navigation in crowded spaces. Previous works assume that the positions of all pedestrians are consistently tracked, which leads to biased interaction modeling if the robot has a limited sensor range and can only partially observe the pedestrians. We propose the Gumbel Social Transformer, in which an Edge Gumbel Selector samples a sparse interaction graph of partially detected pedestrians at each time step. A Node Transformer Encoder and a Masked LSTM encode pedestrian features with the sampled sparse graphs to predict trajectories. We demonstrate that our model overcomes potential problems caused by these assumptions, and our approach outperforms related works on trajectory prediction benchmarks.

Then, we redefine the personal zones of walking pedestrians using their future trajectories. To learn socially aware robot navigation policies, the predicted social zones are incorporated into a reinforcement learning framework to prevent the robot from intruding into them. We propose a novel recurrent graph neural network with attention mechanisms to capture the interactions among agents through space and time. We demonstrate that our method enables the robot to achieve good navigation performance and non-invasiveness in challenging crowd navigation scenarios, both in simulation and in the real world.

Speakers’ Bios: Shuijing Liu is a fourth-year PhD student in the Human-Centered Autonomy Lab in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, advised by Professor Katherine Driggs-Campbell. Her research interests include learning-based robotics and human-robot interaction. Her research primarily focuses on autonomous navigation in crowded and interactive environments using reinforcement learning.

Zhe Huang is a third-year PhD student in the Human-Centered Autonomy Lab in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, advised by Professor Katherine Driggs-Campbell. His research focuses on pedestrian trajectory prediction and collaborative manufacturing.

02/25/22 – Guest Talk

Title: Safety and Generalization Guarantees for Learning-Based Control of Robots

Speaker: Professor Matthew Spenko, Illinois Institute of Technology

Abstract: For the past fifteen years, the RoboticsLab@IIT has focused on creating technologies to enable mobility in challenging environments. This talk highlights the lab’s contributions, from its work in gecko-inspired climbing and perching robots to the evaluation of navigation safety in self-driving cars, drone technology for tree science, and the development of amoeba-like soft robots. The last of these, soft robots, will make up the majority of the talk. Soft robots can offer many advantages over traditional rigid robots, including conformability to different object geometries, shape changing, safer physical interaction with humans, the ability to handle delicate objects, and grasping without the need for high-precision control algorithms. Despite these advantages, soft robots often lack high force capacity, scalability, responsive locomotion and object handling, and a self-contained untethered design, all of which have hindered their adoption. To address these issues, we have developed a series of robots composed of several rigid robotic subunits that are flexibly connected to each other and contain a granule-filled interior that enables a jamming transition from soft to rigid. The jamming feature allows the robots to exert relatively large forces on objects in the environment. The modular design resolves any scalability issues, and using decentralized robotic subunits allows the robot to configure itself in a variety of shapes and conform to objects, all while locomoting. The result is a compliant, high-degree-of-freedom system with excellent morphability.

Speaker’s Bio: Matthew Spenko is a professor in the Mechanical, Materials, and Aerospace Engineering Department at the Illinois Institute of Technology. Prof. Spenko earned the B.S. degree cum laude in Mechanical Engineering from Northwestern University in 1999 and the M.S. and Ph.D. degrees in Mechanical Engineering from Massachusetts Institute of Technology in 2001 and 2005 respectively. He was an Intelligence Community Postdoctoral Scholar in the Mechanical Engineering Department’s Center for Design Research at Stanford University from 2005 to 2007. He has been a faculty member at the Illinois Institute of Technology since 2007, received tenure in 2013, and was promoted to full professor in 2019. His research is in the general area of robotics with specific attention to mobility in challenging environments. Prof. Spenko is a senior member of IEEE and an associate editor of Field Robotics. His work has been featured in popular media such as the New York Times, CNET, Engadget, and Discovery-News. Examples of his robots are on permanent display in Chicago’s Museum of Science and Industry.

02/18/22 – Guest Talk

Title: Safety and Generalization Guarantees for Learning-Based Control of Robots

Speaker: Anirudha Majumdar, Assistant Professor, Princeton University

Abstract: The ability of machine learning techniques to process rich sensory inputs such as vision makes them highly appealing for use in robotic systems (e.g., micro aerial vehicles and robotic manipulators). However, the increasing adoption of learning-based components in the robotics perception and control pipeline poses an important challenge: how can we guarantee the safety and performance of such systems? As an example, consider a micro aerial vehicle that learns to navigate using a thousand different obstacle environments or a robotic manipulator that learns to grasp using a million objects in a dataset. How likely are these systems to remain safe and perform well on a novel (i.e., previously unseen) environment or object? How can we learn control policies for robotic systems that provably generalize to environments that our robot has not previously encountered? Unfortunately, existing approaches either do not provide such guarantees or do so only under very restrictive assumptions.

In this talk, I will present our group’s work on developing a principled theoretical and algorithmic framework for learning control policies for robotic systems with formal guarantees on generalization to novel environments. The key technical insight is to leverage and extend powerful techniques from generalization theory in theoretical machine learning. We apply our techniques on problems including vision-based navigation and grasping in order to demonstrate the ability to provide strong generalization guarantees on robotic systems with complicated (e.g., nonlinear/hybrid) dynamics, rich sensory inputs (e.g., RGB-D), and neural network-based control policies.

Speaker’s Bio: Anirudha Majumdar is an Assistant Professor at Princeton University in the Mechanical and Aerospace Engineering (MAE) department, and an Associated Faculty in the Computer Science department. He received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2016, and a B.S.E. in Mechanical Engineering and Mathematics from the University of Pennsylvania in 2011. Subsequently, he was a postdoctoral scholar at Stanford University from 2016 to 2017 at the Autonomous Systems Lab in the Aeronautics and Astronautics department. He is a recipient of the NSF CAREER award, the Google Faculty Research Award (twice), the Amazon Research Award (twice), the Young Faculty Researcher Award from the Toyota Research Institute, the Best Conference Paper Award at the International Conference on Robotics and Automation (ICRA), the Paper of the Year Award from the International Journal of Robotics Research (IJRR), the Alfred Rheinstein Faculty Award (Princeton), and the Excellence in Teaching Award from Princeton’s School of Engineering and Applied Science.

Schedule: The seminar will be held every Friday at 1:00 PM CST starting 2/11.

Location: This semester, we will meet only virtually.

02/11/22 – Guest Talk

Title: Numerical Methods for Things That Move

Speaker: Zac Manchester, Assistant Professor, Carnegie Mellon University

Abstract: Recent advances in motion planning and control have led to dramatic successes like SpaceX’s autonomous rocket landings and Boston Dynamics’ humanoid robot acrobatics. However, the underlying numerical methods used in these applications are typically decades old, not tuned for high performance on planning and control problems, and are often unable to cope with the types of optimization problems that arise naturally in modern robotics applications like legged locomotion and autonomous driving. This talk will introduce new numerical optimization tools built to enable robotic systems that move with the same agility, efficiency, and safety as humans and animals. Some target applications include legged locomotion; autonomous driving; distributed control of satellite swarms; and spacecraft entry, descent, and landing. I will also discuss hardware platforms that we have deployed algorithms on, including quadrupeds, teams of quadrotors, and tiny satellites.

Speaker’s Bio: Zac Manchester is an Assistant Professor of Robotics at Carnegie Mellon University, founder of the KickSat project, and member of the Breakthrough Starshot Advisory Committee. He holds a Ph.D. in aerospace engineering and a B.S. in applied physics from Cornell University. Zac was a postdoc in the Agile Robotics Lab at Harvard University and previously worked at Stanford, NASA Ames Research Center and Analytical Graphics, Inc. He received a NASA Early Career Faculty Award in 2018 and has led three satellite missions. His research interests include motion planning, control, and numerical optimization, particularly with application to robotic locomotion and spacecraft guidance, navigation, and control.


12/10/21 – Guest Talk

Title: Towards a Universal Modeling and Control Framework for Soft Robots

Speaker: Daniel Bruder, Harvard University

Abstract: Soft robots have been an active area of research in the robotics community due to their inherent compliance and ability to safely interact with delicate objects and the environment. Despite their suitability for tasks involving physical human-robot interaction, their real-world applications have been limited by the difficulty of modeling and controlling soft robotic systems. In this talk, I'll describe two modeling approaches aimed at overcoming the limitations of previous methods. The first is a physics-based approach for fluid-driven actuators that offers predictions in terms of tunable geometrical parameters, making it a valuable tool in the design of soft fluid-driven robotic systems. The second is a data-driven approach that leverages Koopman operator theory to construct models that are linear, which enables the use of linear control techniques for nonlinear dynamical systems like soft robots. Using this Koopman-based approach, a pneumatically actuated soft continuum manipulator was able to autonomously perform manipulation tasks such as trajectory following and pick-and-place with a variable payload without undergoing any task-specific training. In the future, these approaches could offer a paradigm for designing and controlling all soft robotic systems, leading to their more widespread adoption in real-world applications.
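As a rough, self-contained illustration of the Koopman idea described above (not the speaker's implementation; the dynamics and choice of polynomial observables here are invented for illustration), one can lift states through nonlinear observables and fit a linear operator in the lifted space by least squares:

```python
import numpy as np

def lift(x):
    """Map a 2D state into a space of observables (monomials up to degree 2).
    This particular choice of observables is purely illustrative."""
    return np.array([1.0, x[0], x[1], x[0]**2, x[0]*x[1], x[1]**2])

def fit_koopman(X, Xnext):
    """Least-squares fit of a matrix K with lift(x_next) ~ K @ lift(x)."""
    Phi = np.array([lift(x) for x in X])          # (N, n_obs)
    Phi_next = np.array([lift(x) for x in Xnext])
    K, *_ = np.linalg.lstsq(Phi, Phi_next, rcond=None)
    return K.T                                    # lifted linear dynamics

# Transitions from a toy nonlinear system: x' = [x1, 0.9 * x0**2]
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
Xnext = np.stack([X[:, 1], 0.9 * X[:, 0]**2], axis=1)
K = fit_koopman(X, Xnext)

# One-step prediction in the lifted space; read the state back out of
# the observables corresponding to x0 and x1.
x = np.array([0.5, -0.2])
pred = K @ lift(x)
print(pred[1:3])   # approximates the true next state [-0.2, 0.225]
```

Because the toy dynamics are exactly expressible in the chosen observables, the linear lifted model reproduces the state transition; the appeal for soft robots is that linear control tools (LQR, linear MPC) can then be applied to the lifted model.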

Bio: Daniel Bruder received a B.S. degree in engineering sciences from Harvard University in 2013, and a Ph.D. degree in mechanical engineering from the University of Michigan in 2020. He is currently a postdoctoral researcher in the Harvard Microrobotics Lab. He is a recipient of the NSF Graduate Research Fellowship and the Richard and Eleanor Towner Prize for Outstanding Ph.D. Research. His research interests include the design, modeling, and control of robotic systems, especially soft robots.

11/19/21 – Guest Talk

Title: Value function-based methods for safety-critical control

Speaker: Jason Jangho Choi, University of California, Berkeley

Abstract: Many safety-critical control methods leverage a value function that captures knowledge about how a safety constraint can be dynamically satisfied. These value functions appear in many different forms across the literature, for example in Hamilton-Jacobi reachability, Control Barrier Functions, and reinforcement learning. Computing these value functions is often expensive; however, once they are computed offline, they can be evaluated quickly in online applications. In the first part of my talk, I will share some recent progress on methods for constructing these value functions. Specifically, I will discuss how different notions of value functions can be merged into a unified concept, and I will introduce a new dynamic programming principle that can efficiently compute reachability value functions for hybrid systems such as walking robots. In the second part, I will discuss the main issue that arises when value functions computed offline are deployed for online safety control, namely model uncertainty, and how we can address this problem effectively with data-driven methods.
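For readers unfamiliar with value function-based safety control, here is a minimal sketch of the Control Barrier Function idea mentioned above, not the speaker's method: assuming single-integrator dynamics and a hand-picked barrier (stay inside a disk), a nominal control input is minimally modified so the barrier condition holds. With one linear constraint the QP has the closed-form half-space projection used below.

```python
import numpy as np

def safety_filter(x, u_nom, r=1.0, alpha=1.0):
    """Minimally modify u_nom so that h(x) = r^2 - ||x||^2 stays nonnegative.
    For single-integrator dynamics x_dot = u, the CBF condition
    dh/dt >= -alpha * h(x) becomes the linear constraint 2 x . u <= alpha * h."""
    h = r**2 - float(x @ x)
    a = 2.0 * x               # constraint: a . u <= b
    b = alpha * h
    if a @ u_nom <= b:        # nominal input is already safe
        return u_nom
    # closest safe input: project u_nom onto the hyperplane a . u = b
    return u_nom - ((a @ u_nom - b) / (a @ a)) * a

# A nominal controller pushes straight toward a goal outside the safe disk
x = np.array([0.9, 0.0])
u_nom = np.array([1.0, 0.0])
u_safe = safety_filter(x, u_nom)
print(u_safe)   # outward motion is slowed so the barrier condition holds
```

The same filter structure applies when the value function comes from offline reachability computations instead of a hand-designed barrier; model uncertainty then enters through the dynamics used in the constraint.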

Bio: Jason Jangho Choi is a Ph.D. student at the University of California, Berkeley, working with Professor Koushil Sreenath and Professor Claire Tomlin. He completed his undergraduate studies in mechanical engineering at Seoul National University. His research interests center on optimal control theory for nonlinear and hybrid systems, data-driven methods for safety control, and their applications to robot mobility.

11/12/21 – Guest Talk

Title: Planning and Learning for Maneuvering Mobile Robots in Complex Environments

Speaker: Lantao Liu, Assistant Professor in the Department of Intelligent Systems Engineering at Indiana University-Bloomington

Abstract: In the first part of the talk, I will discuss our recent progress on continuous-state Markov Decision Processes (MDPs) that can be used to address autonomous navigation and control in unstructured off-road environments. Our solution integrates a diffusion-type approximation of the robot's stochastic transition model with a kernel-type approximation of the robot's state values, so that decisions can be computed efficiently for real-time navigation. Results from unmanned ground vehicles demonstrate its applicability in challenging real-world environments. I will then discuss decision making under time-varying disturbances, the solution to which can navigate unmanned aerial vehicles disturbed by air turbulence or unmanned underwater vehicles disturbed by ocean currents. We exploit the time-varying stochasticity of robot motion and investigate robot state reachability, based on which we design an efficient iterative method that offers a good trade-off between solution optimality and time complexity. Finally, I will present an adaptive sampling (active learning) and informative planning framework for quickly modeling (mapping) unknown environments such as large ocean floors or time-varying air and water pollution. We consider real-world constraints such as multiple mission objectives as well as environmental dynamics. Preliminary results from an unmanned surface vehicle also demonstrate the high efficiency of the method.
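The "kernel-type approximation" of state values mentioned above can be illustrated, very loosely, by kernel ridge regression over sampled state values. This is a generic sketch, not the speaker's algorithm; the value function, kernel width, and sample counts below are invented for illustration.

```python
import numpy as np

def rbf_kernel(A, B, sigma=0.5):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def fit_value(X, v, lam=1e-6):
    """Kernel ridge regression of sampled state values; returns a predictor V."""
    K = rbf_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), v)
    return lambda Xq: rbf_kernel(Xq, X) @ alpha

# Sampled states and an illustrative "true" value function to recover:
# cost-to-go proportional to distance from the origin.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
v = -np.linalg.norm(X, axis=1)
V = fit_value(X, v)

query = np.array([[0.3, 0.4]])
print(V(query))   # close to the true value -0.5
```

The practical point is that once values are known (or iteratively updated) at sampled states, the kernel model gives a smooth, fast-to-evaluate value estimate at arbitrary continuous states, which is what real-time navigation needs.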

Bio: Lantao Liu is an Assistant Professor in the Department of Intelligent Systems Engineering at Indiana University-Bloomington. His main research interests include robotics and artificial intelligence. He has been working on planning, learning, and coordination techniques for autonomous systems (air, ground, sea) involving single or multiple robots, with potential applications in navigation and control, surveillance and security, search and rescue, smart transportation, as well as environmental monitoring. Before joining Indiana University, he was a Research Associate in the Department of Computer Science at the University of Southern California during 2015 – 2017. He also worked as a Postdoctoral Fellow in the Robotics Institute at Carnegie Mellon University during 2013 – 2015. He received a Ph.D. from the Department of Computer Science and Engineering at Texas A&M University in 2013, and a Bachelor’s degree from the Department of Automatic Control at Beijing Institute of Technology in 2007.

11/05/21 – Faculty Talks

Title: Resilience of Autonomous Systems: A Step Beyond Adaptation

Speaker: Melkior Ornik, Assistant Professor in the Department of Aerospace Engineering, UIUC

Abstract: The ability of a system to correctly respond to a sudden adverse event is critical for high-level autonomy in complex, changing, or remote environments. By assuming continuing structural knowledge about the system, classical methods of adaptive or robust control largely attempt to design control laws which enable the system to complete its original task even after an adverse event. However, catastrophic events such as physical system damage may simply render the original task impossible to complete. In that case, design of any control law that attempts to complete the task is doomed to be unsuccessful. Instead, the system should recognize the task as impossible to complete, propose an alternative that is certifiably completable given the current knowledge, and formulate a control law that drives the system to complete this new task. To do so, in this talk I will present the emergent twin frameworks of quantitative resilience and guaranteed reachability. Combining methods of optimal control, online learning, and reachability analysis, these frameworks first compute a set of temporal tasks completable by all systems consistent with the current partial knowledge, possibly within a time budget. These tasks can then be pursued by online learning and adaptation methods. The talk will consider three scenarios: actuator degradation, loss of control authority, and structural change in system dynamics, and will briefly present a number of applications to maritime and aerial vehicles as well as opinion dynamics. Finally, I will identify promising future directions of research, including real-time safety-assured mission planning, resilience of physical infrastructure, and perception-based task assignment.

Bio: Melkior Ornik is an assistant professor in the Department of Aerospace Engineering at the University of Illinois Urbana-Champaign, also affiliated with the Coordinated Science Laboratory, the Department of Electrical and Computer Engineering, and the Discovery Partners Institute. He received his Ph.D. degree from the University of Toronto in 2017. His research focuses on developing theory and algorithms for control, learning and task planning in autonomous systems that operate in uncertain, changing, or adversarial environments, as well as in scenarios where only limited knowledge of the system is available.


 

10/29/21 – Student Talks

Title: Semi-Infinite Programming’s Application in Robotics

Speaker: Mengchao Zhang, UIUC

Abstract: In optimization theory, semi-infinite programming (SIP) refers to an optimization problem with a finite number of variables and an infinite number of constraints, or an infinite number of variables and a finite number of constraints. In this talk, I will introduce our work that uses SIP to solve problems in the field of robotics.

In our work on the semi-infinite program with complementarity constraints (SIPCC), we use SIP to address the fact that contact is an infinite phenomenon involving continuous regions of interaction. Our method enables a gripper to find a feasible pose to hold convex and non-convex objects while ensuring force and torque balance. In our work on non-penetration iterative closest points for single-view multi-object 6D pose estimation, we use SIP to resolve penetration between convex and non-convex objects. By introducing non-penetration constraints into the iterative closest point (ICP) framework, we improve the accuracy of pose estimates produced by deep neural network-based methods. Our method also outperforms the best prior result on the IC-BIN dataset in the Benchmark for 6D Object Pose Estimation.
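A common way to solve SIPs numerically, independent of the specific works above, is the exchange (cutting-plane) method: solve a subproblem with a finite working set of constraints, then add the most violated constraint from the infinite index set and repeat. The toy problem below is illustrative only and is not the SIPCC solver.

```python
import numpy as np

def sip_exchange(g, t_grid, solve_finite, tol=1e-8, iters=50):
    """Exchange method for min f(x) s.t. g(x, t) <= 0 for all t in t_grid:
    solve with a finite working set of t's, then add the most violated t."""
    working = [t_grid[0]]
    for _ in range(iters):
        x = solve_finite(working)
        viol = np.array([g(x, t) for t in t_grid])
        k = int(np.argmax(viol))
        if viol[k] <= tol:          # all (discretized) constraints satisfied
            return x, working
        working.append(t_grid[k])   # exchange step: add the worst violator
    return x, working

# Toy SIP: minimize x subject to sin(t) - x <= 0 for every t in [0, pi].
# With one scalar variable, the finite subproblem has the closed form below.
g = lambda x, t: np.sin(t) - x
solve_finite = lambda ts: max(np.sin(t) for t in ts)
t_grid = np.linspace(0.0, np.pi, 1001)
x_opt, active = sip_exchange(g, t_grid, solve_finite)
print(x_opt)   # close to 1.0, the maximum of sin on [0, pi]
```

In robotics applications such as contact, the index t ranges over points on a contact surface, and the finite subproblem is a nonlinear program rather than a one-line maximization, but the exchange structure is the same.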

Bio: Mengchao Zhang is a Ph.D. student in the IML laboratory at UIUC. His research interests include motion planning, manipulation, perception, and optimization.

Title: Data-driven MPC: Applications and Tools

Speaker: William Edwards, UIUC

Abstract: Many of the most exciting and challenging applications in robotics involve control of novel systems with unknown nonlinear dynamics.  When such systems are infeasible to model analytically or numerically, roboticists often turn to data-driven techniques for modelling and control.  This talk will cover two projects relating to this theme.  First, I will discuss an application of data-driven modelling and control to needle insertion in deep anterior lamellar keratoplasty (DALK), a challenging problem in surgical robotics.  Second, I will introduce a new software library, AutoMPC, created to automate the design of data-driven model predictive controllers and make state-of-the-art algorithms more accessible for a wide range of applications.
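As background on what a data-driven MPC loop looks like, here is a generic random-shooting sketch; it is not AutoMPC's actual API, and the "model" below is a known toy double integrator standing in for a learned one.

```python
import numpy as np

def random_shooting_mpc(f, x0, goal, horizon=10, samples=500):
    """Return the first action of the best random action sequence under a
    model f(x, u) -> next_state (a simple sampling-based MPC)."""
    rng = np.random.default_rng(0)
    U = rng.uniform(-1, 1, size=(samples, horizon, 1))
    best_cost, best_u0 = np.inf, None
    for seq in U:
        x, cost = x0, 0.0
        for u in seq:
            x = f(x, u)
            cost += float((x - goal) @ (x - goal))  # penalize state error
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Toy stand-in for a learned model: a discrete-time double integrator.
dt = 0.1
f = lambda x, u: np.array([x[0] + dt * x[1], x[1] + dt * float(u[0])])

# Receding-horizon loop: replan, apply the first action, repeat.
x = np.array([0.0, 0.0])
goal = np.array([1.0, 0.0])
for _ in range(100):
    u0 = random_shooting_mpc(f, x, goal)
    x = f(x, u0)
print(x)   # settles near the goal [1, 0]
```

In a data-driven pipeline, f would instead be fit from logged transitions, and tools like AutoMPC automate choosing the model class, cost, and optimizer hyperparameters.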

Bio: William Edwards is a third-year Computer Science Ph.D. student in the Intelligent Motion Laboratory at UIUC, advised by Dr. Kris Hauser. He received Bachelor’s degrees in Computer Science and Mathematics from the University of South Carolina in 2019. His research interests include motion planning, dynamics learning, and optimization.

 

10/08/21 – Faculty Talks

Title: Research at the RoboDesign Lab at UIUC

Speaker: Joao Ramos, Assistant Professor at UIUC

Abstract: The research at the RoboDesign Lab intersects the study of the design, control, and dynamics of robots with human-machine interaction. We focus on developing hardware, software, and human-centered approaches to push the physical limits of robots to realize physically demanding tasks. In this talk, I will cover several ongoing research topics in the lab, such as the development of a custom Human-Machine Interface (HMI) that enables bilateral teleoperation of mobile robots, a wheeled humanoid robot for dynamic mobile manipulation, actuation design for dynamic humanoid robots, and assistive devices for individuals with mobility impairments.

Bio: Joao Ramos is an Assistant Professor at the University of Illinois at Urbana-Champaign and the director of the RoboDesign Lab. He previously worked as a Postdoctoral Associate at the Biomimetic Robotics Laboratory at the Massachusetts Institute of Technology. He received a Ph.D. from the Department of Mechanical Engineering at MIT in 2018. He is the recipient of a 2021 NSF CAREER Award. His research focuses on the design and control of dynamic robotic systems, in addition to human-machine interfaces, legged locomotion dynamics, and actuation systems.

10/01/21 – Student Talks

Title: A multi-sensor fusion for agricultural autonomous navigation

Speaker: Mateus Valverde Gasparino, UIUC

Abstract: In most agricultural setups, vehicles rely on accurate GNSS estimates to navigate autonomously. For small under-canopy robots, however, accurate position estimates cannot be guaranteed between crop rows. To address this problem, we present a solution for autonomous navigation in semi-structured agricultural environments. We demonstrate a navigation system that autonomously chooses between reference modalities to cover long distances on the farm and to extend the navigation range beyond crop rows alone. By choosing the best reference to follow, the robot can accommodate GNSS signal attenuation and use the agricultural structure itself to navigate autonomously. A low-cost, compact robotic platform, designed to automate the measurement of plant traits, is used to implement and evaluate the system. We show two different perception systems that can be used within this framework: one LiDAR-based and one vision-based. We validate the system in a real agricultural environment and show that it can navigate effectively for 4.5 km with only 6 human interventions.

Bio: Mateus Valverde Gasparino is a second-year Ph.D. student advised by Prof. Girish Chowdhary at the University of Illinois at Urbana-Champaign. He received an M.Sc. degree in mechanical engineering and a Bachelor’s degree in mechatronics engineering from the University of São Paulo, Brazil. He is currently a graduate research assistant in the Distributed Autonomous Systems Laboratory (DASLab), and his research interests include perception systems, mapping, control, and learning for robots in unstructured and semi-structured outdoor environments.

Title: Learned Visual Navigation for Under-Canopy Agricultural Robots

Speaker: Arun Narenthiran Sivakumar, UIUC

Abstract: We describe a system for visually guided autonomous navigation of under-canopy farm robots. Low-cost under-canopy robots can drive between crop rows under the plant canopy and accomplish tasks that are infeasible for over-the-canopy drones or larger agricultural equipment. However, autonomously navigating them under the canopy presents a number of challenges: unreliable GPS and LiDAR, high cost of sensing, challenging farm terrain, clutter due to leaves and weeds, and large variability in appearance over the season and across crop types. We address these challenges by building a modular system that leverages machine learning for robust and generalizable perception from monocular RGB images from low-cost cameras, and model predictive control for accurate control in challenging terrain. Our system, CropFollow, is able to autonomously drive 485 meters per intervention on average, outperforming a state-of-the-art LiDAR based system (286 meters per intervention) in extensive field testing spanning over 25 km.

Bio: Arun Narenthiran Sivakumar is a third-year Ph.D. student in the Distributed Autonomous Systems Laboratory (DASLAB) at UIUC, advised by Prof. Girish Chowdhary. He received his Bachelor’s degree in Mechanical Engineering in 2017 from VIT University, India, and his Master’s degree in Agricultural and Biological Systems Engineering, with a minor in Computer Science, in 2019 from the University of Nebraska-Lincoln. His research interests are applications of vision- and learning-based robotics in agriculture.

9/24/21 – Guest Talk

Title: Hello Robot: Democratizing Mobile Manipulation


Speakers: Aaron Edsinger (CEO and Cofounder) and Charlie Kemp (CTO and Cofounder), Hello Robot

Abstract: Mobile manipulators have the potential to improve life for everyone, yet adoption of this emerging technology has been limited. To encourage an inclusive future, Hello Robot developed the Stretch RE1, a compact and lightweight mobile manipulator for research that achieves a new level of affordability. The Stretch RE1 and Hello Robot’s open approach are inspiring a growing community of researchers to explore the future of mobile manipulation. In this talk, we will present the Stretch RE1 and the growing community and ecosystem around it. We will present our exciting collaboration with Professor Wendy Rogers’s lab and provide a live demonstration of Stretch. Finally, we will be announcing the Stretch Robot Pitch Competition — a collaboration with TechSage and Procter & Gamble — where students have the opportunity to generate novel design concepts for Stretch that address the needs of individuals aging with disabilities at home.

There will also be information during the seminar on a competition where winners will receive a cash prize and be able to work with Hello Robot’s Stretch robot in the McKechnie Family LIFE Home.

Bios:

Aaron Edsinger, CEO and Cofounder: Aaron has a passion for building robots and robot companies. He has founded four companies focused on commercializing human collaborative robots. Two of these companies, Meka Robotics and Redwood Robotics, were acquired by Google in 2013. As Robotics Director at Google, Aaron led the business, product, and technical development of two of Google’s central investments in robotics. Aaron received his Ph.D. from MIT CSAIL in 2007.

Charlie Kemp, CTO and Cofounder: Charlie is a recognized expert on mobile manipulation. In 2007, he founded the Healthcare Robotics Lab at Georgia Tech, where he is an associate professor in the Department of Biomedical Engineering. His lab has conducted extensive research on mobile manipulators to assist older adults and people with disabilities. Charlie earned a B.S., M.Eng., and Ph.D. from MIT. He first met Aaron while conducting research in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

 


Previous Talks:

See [MORE INFO] for links to speaker webpages:

This Semester’s First Talk:

9/17/21 – Faculty Talk

Title: Introduction of KIMLAB (Kinetic Intelligent Machine LAB)

 

Speaker: Joohyung Kim, Associate Professor of Electrical and Computer Engineering, UIUC

Abstract: In this talk, I will share what is going on in KIMLAB, the Kinetic Intelligent Machine LAB. I will briefly introduce myself, then present some of the robots, research, and equipment in KIMLAB, and explain how these current efforts relate to my previous research and to future directions.

Bio: Joohyung Kim is currently an Associate Professor of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign. His research focuses on design and control for humanoid robots, systems for motion learning in robot hardware, and safe human-robot interaction. He received BSE and Ph.D. degrees in Electrical Engineering and Computer Science (EECS) from Seoul National University, Korea. Prior to joining UIUC, he was a Research Scientist at Disney Research, working on animation character robots.

 

 

5/07/21 – Student Talks

Title: A Comparison Between Joint Space and Task Space Mappings for Dynamic Teleoperation of an Anthropomorphic Robotic Arm in Reaction Tests

Speaker: Sunyu, UIUC 

Abstract: Teleoperation, i.e., controlling a robot with human motion, is a promising way to enable a humanoid robot to move as dynamically as a human. But how human motion is mapped to a humanoid robot matters, because a human and a humanoid robot rarely have identical topologies and dimensions. This work presents an experimental study that uses reaction tests to compare joint space and task space mappings for dynamic teleoperation of an anthropomorphic robotic arm that possesses human-level dynamic motion capabilities. The experimental results suggest that the robot achieved similar, and in some cases human-level, dynamic performance with both mappings for the six participating human subjects. All subjects became proficient at teleoperating the robot with both mappings after practice, even though the subjects and the robot differed in size and link length ratio and the teleoperation required the subjects to move unintuitively. Most subjects, however, developed their teleoperation proficiency more quickly with task space mapping than with joint space mapping after similar amounts of practice. This study also indicates the potential value of three-dimensional task space mapping, a teleoperation training simulator, and force feedback to the human pilot for intuitive and dynamic teleoperation of a humanoid robot’s arms.

Title: Safe and Efficient Robot Learning Using Structured Policies

Speaker: Anqi Li, UW

Abstract: Traditionally, modeling and control techniques have been regarded as the fundamental tools for studying robotic systems. Although they can provide theoretical guarantees, these tools make limiting modeling assumptions. Recently, learning-based methods have shown success in tackling problems that are challenging for traditional techniques. Despite these advantages, it is unrealistic to directly apply most learning algorithms to robotic systems due to issues such as sample complexity and safety concerns. In this line of work, we aim to make robot learning explainable, sample-efficient, and safe by construction through encoding structure into policy classes. In particular, we focus on a class of structured policies for robotic problems with multiple objectives. Complex motions are generated by combining simple behaviors given by Riemannian Motion Policies (RMPs). It can be shown that the combined policy is stable if the individual policies satisfy a class of control Lyapunov conditions, which can imply safety. Given such a policy representation, we learn policies with this structure so that formal guarantees are provided. To do so, we keep the safety-critical policies, e.g., collision avoidance and joint-limit policies, fixed during learning. We can also make use of the known robot kinematics. We show that learning with this structure is effective on a number of learning-from-human-demonstration tasks and reinforcement learning tasks.

4/30/21 – Faculty Talks

Title: Robotic Manipulation – From Representations to Actions

Speaker: Prof. Kaiyu Hang, Rice

Abstract: Dexterous manipulation is an integral task involving a number of subproblems, such as perception, planning, and control. Problem representations, the essential elements of a system that define what problem is actually being considered, determine both the capability of a system and the feasibility of applying such a system to real tasks.

In this talk, I will introduce how good representations can convert difficult problems into easier ones. First, I will discuss the development of representations for grasp optimization, and show how a representation can simplify and unify the whole grasping system, including globally optimal grasp planning, sensing, adaptation, and control. By adapting such representations to various task scenarios, I further show how they can greatly facilitate other applications, such as grasp-aware motion planning, optimal placement planning, and even dual-arm manipulation. Second, I will introduce our work on underactuated manipulation using soft robotic hands. For underactuated hands without any joint encoders or tactile sensors, I present representations that enable a robot to interact with tabletop objects using nonprehensile manipulation in order to finally grasp them. After a grasp is obtained by such sensorless hands, I discuss our system that can register the object into its own hand-object system via interactive perception, so as to eventually enable precise and controllable in-hand manipulation.

4/23/21 – Student Talks

Title: Optimization-based Control for Highly Dynamic Legged Locomotion

Speaker: Dr. Yanran Ding, UIUC 

Abstract: Legged animals in nature can perform highly dynamic movements elegantly and efficiently, whether running down a steep hill or leaping between branches. Transferring part of this animal agility to a legged robot would open countless possibilities in disaster response, transportation, and space exploration. The topic of this talk is motion control for highly dynamic legged locomotion. First, instantaneous control of Panther, a small and agile quadruped, is presented in a squat-jumping experiment where it reached a maximal height of 0.7 m using a quadratic program (QP)-based reactive controller. Short-prediction-horizon control is then achieved in real time within the model predictive control (MPC) framework. We present a representation-free MPC (RF-MPC) formulation that directly uses the rotation matrix to describe orientation, which enables complex 3D acrobatic motions that were previously unachievable with Euler angles due to the presence of singularities. We experimentally validate the motion control methods on Panther.

Title: Hand Modeling and Simulation Using Stabilized Magnetic Resonance Imaging

Speaker: Bohan Wang, USC

Abstract: We demonstrate how to acquire complete human hand bone anatomy (meshes) in multiple poses using magnetic resonance imaging (MRI). Such acquisition was previously difficult because high-precision MRI scans must be long (over 10 minutes) and because humans cannot hold the hand perfectly still in non-trivial and badly supported poses. We invent a manufacturing process whereby we use lifecasting materials, commonly employed in the film special-effects industry, to generate hand molds personalized to the subject and to each pose. These molds are both ergonomic and encasing, and they stabilize the hand during scanning. We also demonstrate how to efficiently segment the MRI scans into individual bone meshes in all poses, and how to correspond each bone’s mesh to the same mesh connectivity across all poses. Next, we interpolate and extrapolate the MRI-acquired bone meshes to the entire range of motion of the hand, producing an accurate data-driven, animation-ready rig for bone meshes. We also demonstrate how to acquire not just bone geometry (using MRI) in each pose, but also matching, highly accurate surface geometry (using optical scanners) in each pose, modeling skin pores and wrinkles. We also present a soft-tissue Finite Element Method simulation “rig,” consisting of novel tet meshing for stability at the joints, spatially varying geometric and material detail, and quality constraints to the acquired skeleton kinematic rig. Given an animation sequence of hand joint angles, our FEM soft-tissue rig produces quality hand surface shapes in arbitrary poses within the hand’s range of motion. Our results qualitatively reproduce important features seen in photographs of the subject’s hand, such as similar overall organic shape and fold formation.

 

4/16/21 – Faculty Talks

Title: Compositional Learning for Robot Autonomy via Modularity and Abstraction

Speaker: Dr. Rahul Shome, Rice

Abstract: Endowing robots with the ability to solve real-world problems requires careful design of approaches that can address tasks involving multiple robots, objects, and motions. Recent results have demonstrated a scalable, roadmap-based approach (dRRT*) that effectively decomposes the search space to plan motions for multiple articulated robots. Variants of task and motion planning problems that require object rearrangement have been mapped to other efficiently solvable combinatorial problems over high-level structures that can guide effective solution discovery. Recent insights into the theoretical property of asymptotic optimality can inform the design of methods that provide solution-quality guarantees when planning for a rich class of tasks and motions.

4/09/21 – Student Talks

Title: Long-Term Pedestrian Trajectory Prediction Using Mutable Intention Filter and Warp LSTM

Speaker: Zhe Huang, UIUC 

Abstract: Trajectory prediction is one of the key capabilities for robots to safely navigate and interact with pedestrians. Critical insights from human intention and behavioral patterns need to be integrated to effectively forecast long-term pedestrian behavior. Thus, we propose a framework incorporating a mutable intention filter and a Warp LSTM (MIF-WLSTM) to simultaneously estimate human intention and perform trajectory prediction. The mutable intention filter is inspired by particle filtering and genetic algorithms, where particles represent intention hypotheses that can be mutated throughout the pedestrian’s motion. Instead of predicting sequential displacement over time, our Warp LSTM learns to generate offsets on a full trajectory predicted by a nominal intention-aware linear model, which considers the intention hypotheses during the filtering process. Through experiments on a publicly available dataset, we show that our method outperforms baseline approaches and demonstrates robust performance under abnormal intention-changing scenarios.
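The intention-filtering idea can be sketched, very roughly, as a particle filter over goal hypotheses with mutation. This is a toy illustration with made-up goals and likelihood weights, not the MIF-WLSTM implementation.

```python
import numpy as np

def update_intention_filter(particles, pos, vel, goals, mutation_rate=0.05, rng=None):
    """One step of a particle filter over intention (goal) hypotheses.
    particles: array of goal indices. Each hypothesis is weighted by how
    well the observed velocity points toward its goal, then resampled;
    a small fraction is randomly mutated to keep all hypotheses alive."""
    rng = rng or np.random.default_rng()
    to_goal = goals[particles] - pos                  # direction each hypothesis predicts
    to_goal /= np.linalg.norm(to_goal, axis=1, keepdims=True)
    v = vel / (np.linalg.norm(vel) + 1e-9)
    w = np.exp(4.0 * (to_goal @ v))                   # likelihood ~ heading agreement
    w /= w.sum()
    particles = rng.choice(particles, size=len(particles), p=w)   # resample
    mutate = rng.random(len(particles)) < mutation_rate           # mutation step
    particles[mutate] = rng.integers(0, len(goals), mutate.sum())
    return particles

rng = np.random.default_rng(0)
goals = np.array([[5.0, 0.0], [0.0, 5.0]])            # two candidate destinations
particles = np.zeros(500, dtype=int)                  # start split between the goals
particles[250:] = 1
pos, vel = np.array([0.0, 0.0]), np.array([0.1, 0.9]) # pedestrian heads toward goal 1
for _ in range(10):
    particles = update_intention_filter(particles, pos, vel, goals, rng=rng)
print(np.bincount(particles, minlength=2))            # most mass on goal 1
```

Mutation is what makes the intention "mutable": even after the filter commits to one goal, a few particles always explore the alternatives, so a sudden change of pedestrian intention can be re-detected.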

Title: Robot Learning through Interactions with Humans

Speaker: Shuijing Liu, UIUC

Abstract: As robots are becoming prevalent in people’s daily lives, it is important for them to learn to make intelligent decisions in interactive environments with humans. In this talk, I will present our recent works on learning-based robot decision making, through different types of human-robot interactions. In one line of work, we study robot navigation in human crowds and propose a novel deep neural network that enables the robot to reason about its spatial and temporal relationships with humans. In addition, we seek to improve human-robot collaboration in crowd navigation through active human intent estimation. In another line of work, we explore the interpretation of sound for robot decision making, inspired by human speech comprehension. Similar to how humans map a sound to meaning, we propose an end-to-end deep neural network that directly interprets sound commands for visual-based decision making. We continue this work by developing robot sensorimotor contingency with sound, sight, and motors through self-supervised learning.

 

4/02/21 – Faculty Talks

Title: Compositional Learning for Robot Autonomy via Modularity and Abstraction

Speaker: Assistant Professor Yuke Zhu, UT Austin

Abstract: Building robot intelligence for long-term autonomy demands robust perception and decision-making algorithms at scale. Recent advances in deep learning have achieved impressive results on end-to-end learning of robot behaviors from pixels to torques. However, the prohibitive costs of training for sophisticated behaviors have told us: “There is no ladder to the moon.” I argue that the functional decomposition of the pixels-to-torques problem via modularity and abstraction is the key to scaling up robot learning methods. In this talk, I will present our recent work on compositional modeling of robot autonomy. I will discuss our algorithms for developing state and action abstractions from raw signals. With these abstractions, I will introduce our work on neuro-symbolic planners that achieve compositional generalization in long-horizon manipulation tasks.

 

3/26/21 – Student Talks

Title: Control, Estimation and Planning for Coordinated Transport of a Slung Load By a Team of Aerial Robots

Speaker: Junyi Geng, UIUC 

Abstract: This talk will discuss the development of a self-contained transportation system that uses multiple autonomous aerial robots to cooperatively transport a single slung load. A “load-leading” concept is proposed and developed for this cooperative transportation problem. Unlike existing approaches, which usually fly in formation and treat the external slung load as a disturbance, ignoring the payload dynamics, this approach attaches sensors on board the payload so that the payload can sense itself and lead the whole fleet. This unique design leads to a hierarchical load-leading control strategy, which is scalable and allows human-in-the-loop operation in addition to fully autonomous operation. It also enables a strategy for estimating payload parameters to improve model accuracy: by manipulating the payload through the cables driven by the drones, the payload’s inertial parameters can be estimated, improving closed-loop performance. The payload design also leads to convenient cooperative trajectory planning, which reduces to a simpler planning problem for the payload. Lastly, a load-distribution-based trajectory planning and control approach is developed to achieve near-equal load distribution among the aerial vehicles for energy efficiency. This whole payload-leading design enables the cooperative transportation team to fly longer, farther, and smarter. Components of this system are tested in simulation and in indoor and outdoor flight experiments, demonstrating the effectiveness of the developed slung-load transportation system.

Title: REDMAX: Efficient & Flexible Approach for Articulated Dynamics

Speaker: Ying Wang, Texas A&M

Abstract: Industrial manipulators do not collapse under their own weight when powered off, due to the friction in their joints. Although these mechanisms are effective for the stiff position control of pick-and-place, they are inappropriate for legged robots, which must rapidly regulate compliant interactions with the environment. However, no metric exists to quantify a robot's performance degradation due to mechanical losses in the actuators. We provide a novel formulation that describes how the efficiency of individual actuators propagates to the equations of motion of the whole robot. We quantitatively demonstrate the intuitive fact that the apparent inertia of a robot increases in the presence of joint friction. We also reproduce the empirical result that robots employing high gearing and low-efficiency actuators can statically sustain more substantial external loads. We expect this framework to provide the foundation for designing the next generation of legged robots that can effectively interact with the world.

3/12/21 – Student Talks

Title: Creating Practical Magnetic Indoor Positioning Systems

Speaker: David Hanley, UIUC

Abstract: Steel studs, HVAC systems, rebar, and many other building components produce spatially varying magnetic fields. Magnetometers can measure these fields and can be used in combination with inertial sensors for indoor navigation of robots and handheld devices such as smartphones. This talk takes an empirical approach to improving the performance of magnetic field-based navigation systems in practice. In support of this goal, a dataset intended to improve empirical studies of these systems within the research community will be described. Then the impact that a commonly used "planar assumption" has on the accuracy of current magnetic field-based navigation systems will be presented. The lack of robustness shown in this evaluation motivates both new algorithms for this type of navigation and new hardware, and progress on both will be discussed.
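As a minimal illustration of the "planar assumption" mentioned above (this is not code from the talk): when the sensor is assumed level, magnetometer-based heading estimation reduces to a two-axis computation. The axis conventions here (x forward, y right, heading clockwise from magnetic north) are one common choice and are assumed for the sketch; real systems must also handle tilt and hard/soft-iron distortions.

```python
import math

# Illustrative sketch only: planar (level-sensor) magnetic heading.
# Any sensor tilt or local field disturbance violates this assumption,
# which is part of what the talk's evaluation examines.
def planar_heading_deg(mx, my):
    """Heading in degrees, clockwise from magnetic north, sensor assumed level."""
    return math.degrees(math.atan2(-my, mx)) % 360.0

print(planar_heading_deg(1.0, 0.0))             # facing north: 0.0
print(round(planar_heading_deg(0.0, -1.0), 6))  # facing east: 90.0
```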

Title: Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis

Speaker: Youngwoo Sim, UIUC

Abstract: Industrial manipulators do not collapse under their own weight when powered off, due to the friction in their joints. Although these mechanisms are effective for the stiff position control of pick-and-place, they are inappropriate for legged robots, which must rapidly regulate compliant interactions with the environment. However, no metric exists to quantify a robot's performance degradation due to mechanical losses in the actuators. We provide a novel formulation that describes how the efficiency of individual actuators propagates to the equations of motion of the whole robot. We quantitatively demonstrate the intuitive fact that the apparent inertia of a robot increases in the presence of joint friction. We also reproduce the empirical result that robots employing high gearing and low-efficiency actuators can statically sustain more substantial external loads. We expect this framework to provide the foundation for designing the next generation of legged robots that can effectively interact with the world.

3/05/21 – Faculty Talks

Title: Collaborative Construction and Communication with Minecraft

Speaker: Associate Professor Julia Hockenmaier, UIUC

Abstract: Virtual gaming platforms such as Minecraft allow us to study situated natural language generation and understanding tasks for agents that operate in complex 3D environments. In this talk, I will present work done by my group on defining a collaborative Blocks World construction task in Minecraft. In this task, one player (the Architect) needs to instruct another (the Builder) via a chat interface to construct a given target structure that only the Architect is shown. Although humans complete this task easily (often after lengthy back-and-forth dialogue), creating agents for each of these roles poses a number of challenges for current NLP technologies. To understand these challenges, I will describe the dataset we have collected for this task, as well as the models we have developed for both roles. I look forward to a discussion of how to adapt this work to natural language communication with actual robots rather than simulated agents.

Bio: Julia Hockenmaier is an associate professor at the University of Illinois at Urbana-Champaign. She has received a CAREER award for her work on CCG-based grammar induction and an IJCAI-JAIR Best Paper Prize for her work on image description. She has served as member and chair of the NAACL board, president of SIGNLL, and as program chair of CoNLL 2013 and EMNLP 2018.

 

2/19/21 – Student Talks

Title: Optimization-Based Visuotactile Deformable Object Capture 

Speaker: Zherong Pan, UIUC 

Abstract: Robots interact with deformable objects all the time and rely on their perception systems to estimate the objects' state. While the shape of an object can be captured visually, its physical properties must be estimated through tactile interactions. We propose an optimization-based formulation that reconstructs a simulation-ready deformable object from multiple drape shapes under gravity. Starting from a trivial initial guess, our method optimizes both the rest shape and the material parameters to register the mesh with observed multi-view point cloud data, deriving analytic gradients from the implicit function theorem. We further interleave the optimization with remeshing operators to ensure high mesh quality. Experiments on beam recovery problems show that our optimizer can infer internal anisotropic material distributions and a large variation of rest shapes.
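The inverse problem sketched in this abstract, recovering both a rest shape and material parameters from observed equilibrium shapes under gravity, can be illustrated with a one-dimensional toy analogue (purely illustrative, not the paper's method): a point mass on a spring drapes to rest + m*g/k, so two observations under different known loads determine both unknowns in closed form.

```python
G = 9.8  # gravitational acceleration (m/s^2)

# Toy 1-D analogue of rest-shape + material recovery: a mass m on a
# spring with stiffness k and rest position `rest` settles at
# x = rest + m * G / k. Two drape observations under different loads
# give two equations in the two unknowns (rest, k).
def recover(m1, x1, m2, x2):
    k = (m1 - m2) * G / (x1 - x2)
    rest = x1 - m1 * G / k
    return rest, k

# Synthetic "observations" generated from true rest = 1.0, k = 4.9:
# x1 = 1.0 + 1.0*9.8/4.9 = 3.0 and x2 = 1.0 + 2.0*9.8/4.9 = 5.0.
rest, k = recover(1.0, 3.0, 2.0, 5.0)
print(rest, k)  # recovers 1.0 and 4.9
```

The full method replaces this closed-form solve with gradient-based optimization over meshes, which is where the analytic gradients from the implicit function theorem come in.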

Title: Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis

Speaker: Ye Yuan, CMU

Abstract: Reinforcement learning has shown great promise for synthesizing realistic human behaviors by learning humanoid control policies from motion capture data. However, it is still very challenging to reproduce sophisticated human skills like ballet dance, or to stably imitate long-term human behaviors with complex transitions. The main difficulty lies in the dynamics mismatch between the humanoid model and real humans. That is, motions of real humans may not be physically possible for the humanoid model. To overcome the dynamics mismatch, we propose a novel approach, residual force control (RFC), that augments a humanoid control policy by adding external residual forces into the action space. During training, the RFC-based policy learns to apply residual forces to the humanoid to compensate for the dynamics mismatch and better imitate the reference motion. Experiments on a wide range of dynamic motions demonstrate that our approach outperforms state-of-the-art methods in terms of convergence speed and the quality of learned motions. Notably, we showcase a physics-based virtual character empowered by RFC that can perform highly agile ballet dance moves such as pirouette, arabesque and jeté. Furthermore, we propose a dual-policy control framework, where a kinematic policy and an RFC-based policy work in tandem to synthesize multi-modal infinite-horizon human motions without any task guidance or user input. Our approach is the first humanoid control method that successfully learns from a large-scale human motion dataset (Human3.6M) and generates diverse long-term motions. 
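The core augmentation described in this abstract can be sketched in a few lines: the policy's action vector carries joint torques plus a residual external force, which the simulator applies to the humanoid. The dimensions, the root-force placement, and the toy dynamics below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

N_JOINTS = 6       # assumed joint count for this toy humanoid
RES_DIM = 3        # residual external force on the root (x, y, z)
ROOT_MASS = 50.0   # assumed root mass (kg)

def split_action(action):
    """Split an RFC-style augmented action into torques and residual force."""
    return action[:N_JOINTS], action[N_JOINTS:N_JOINTS + RES_DIM]

def residual_root_accel(residual_force, mass=ROOT_MASS):
    """Extra root acceleration contributed by the residual force (toy model)."""
    return residual_force / mass

# A policy output: zero torques plus a 49 N upward residual force that
# compensates for dynamics mismatch during imitation.
action = np.concatenate([np.zeros(N_JOINTS), np.array([0.0, 0.0, 49.0])])
torques, f_res = split_action(action)
print(residual_root_accel(f_res)[2])  # 0.98 m/s^2 upward
```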

 

2/12/21 – Opening Panel Discussion

How Does COVID-19 Affect Your Research?

Abstract: The COVID-19 pandemic has had an unprecedented impact on US academia and research enterprises. On the downside, university revenues have declined due to drops in undergraduate enrollment and unstable external funding sources. Many traditional research activities have been suspended since last spring, especially in the STEM fields. The pandemic has also revealed limitations in collaboration and communication facilities and services. On the upside, however, the pandemic has served as a catalyst for increased use of and innovation in robotics, including autonomous devices for infection control, temperature taking, and movement tracking. In addition, social robotics and virtual avatars help people stay connected and reduce anxiety during quarantine. In this seminar, we invite four faculty members, Negar Mehr, Joao Ramos, Geir Dullerud, and Kris Hauser, to share their experience of the challenges and opportunities brought by the pandemic.

 


Talks from Inaugural Semester:

The Robots are Coming – to your Farm!  Autonomous and Intelligent Robots in Unstructured Field Environments

Girish Chowdhary
Assistant Professor
Agricultural & Biological Engineering
Distributed Autonomous Systems Lab
November 22nd, 2019

Abstract: What if a team of collaborative autonomous robots grew your food for you? In this talk, I will discuss some key advances in robotics, machine learning, and autonomy that will one day enable teams of small robots to grow food for you in your backyard in a fundamentally more sustainable way than modern mega-farms. Teams of small aerial and ground robots could be a potential solution to many of the problems that modern agriculture faces. However, fully autonomous robots that operate without supervision for weeks, months, or an entire growing season are not yet practical. I will discuss my group's theoretical and practical work on the underlying challenging problems in autonomy, sensing, and learning. I will begin with our lightweight, compact, and highly autonomous field robot TerraSentia and its recent successes in high-throughput phenotyping for agriculture. I will also discuss new algorithms for enabling a team of robots to weed large agricultural farms autonomously under partial observability. These direct applications will then lead up to my group's more fundamental work in reinforcement learning and adaptive control, which we believe is necessary to usher in the next generation of autonomous field robots that operate in harsh, changing, and dynamic environments.

Bio: Girish Chowdhary is an assistant professor at the University of Illinois at Urbana-Champaign with the Coordinated Science Laboratory, and the director of the Distributed Autonomous Systems Laboratory at UIUC. At UIUC, Girish is affiliated with Agricultural and Biological Engineering, Aerospace Engineering, Computer Science, and Electrical Engineering. He holds a PhD (2010) from the Georgia Institute of Technology in Aerospace Engineering. He was a postdoc at the Laboratory for Information and Decision Systems (LIDS) of the Massachusetts Institute of Technology (2011-2013), and an assistant professor in Oklahoma State University's Mechanical and Aerospace Engineering department (2013-2016). He also worked with the German Aerospace Center's (DLR's) Institute of Flight Systems for around three years (2003-2006). Girish's ongoing research interest is in theoretical insights and practical algorithms for adaptive autonomy, with a particular focus on field robotics. He has authored over 90 peer-reviewed publications in various areas of adaptive control, robotics, and autonomy. On the practical side, Girish has led the development and flight-testing of over 10 research UAS platforms. UAS autopilots based on Girish's work have been designed and flight-tested on six UASs, including by independent international institutions. Girish is an investigator on NSF, AFOSR, NASA, ARPA-E, and DOE grants. He is the winner of the Air Force Young Investigator Award, the Aerospace Guidance and Controls Systems Committee Dave Ward Memorial Award, and several best paper awards, including a best systems paper award at RSS 2018 for his work on the agricultural robot TerraSentia. He is the co-founder of EarthSense Inc., working to make ultralight outdoor robotics a reality.

 

Student Talks:

Design and Control of a Quadrupedal Robot for Dynamic Locomotion

Yanran Ding
November 15th, 2019

Abstract: Legged animals have shown versatile mobility, traversing challenging terrains via a variety of well-coordinated dynamic motions. This remarkable mobility has inspired the development of many legged robots and associated research seeking dynamic legged locomotion in robots. This talk explores the design and control of a small-scale quadrupedal robot prototype for dynamic motions. We present a hardware-software co-design scheme for the proprioceptive actuator and a model predictive control (MPC) framework for a wide variety of dynamic motions.

Bio: Yanran Ding is a 4th-year Ph.D. student in the Mechanical Science and Engineering Department at the University of Illinois at Urbana-Champaign. He received his B.S. degree in Mechanical Engineering from the UM-SJTU Joint Institute, Shanghai Jiao Tong University, Shanghai, China, in 2015. His research interests include the design of agile robotic systems and optimization-based control for legged robots to achieve dynamic motions. He was a best student paper finalist at the International Conference on Intelligent Robots and Systems (IROS) 2017.

Adapt-to-Learn:  Policy Transfer in Reinforcement Learning

Girish Joshi
November 15th, 2019

Abstract: Efficient and robust policy transfer remains a key challenge in reinforcement learning. Policy transfer through warm initialization, imitation, or interaction over a large set of agents with randomized instances has been commonly applied to solve a variety of Reinforcement Learning (RL) tasks. However, this is far from how behavior transfer happens in the biological world: humans and animals are able to quickly adapt learned behaviors between similar tasks and learn new skills when presented with new situations. We introduce a principled mechanism that can "Adapt-to-Learn", that is, adapt the source policy to learn to solve a target task with significant transition differences and uncertainties. We show through theory and experiments that our method leads to a significantly reduced sample complexity when transferring policies between tasks.

Bio: Girish Joshi is a graduate student at DASLAB, UIUC, working under Dr. Girish Chowdhary. Prior to joining UIUC, he completed his master's at the Indian Institute of Science, Bangalore. His research interests are in sample-efficient policy transfer in RL, cross-domain skill transfer in RL, information-enabled adaptive control for cyber-physical systems, and Bayesian nonparametric approaches to adaptive control and decision-making in non-stationary environments.

 

Designing Robots to Support Successful Aging: Potential and Challenges

Wendy Rogers
Professor, Kinesiology and Community Health
Human Factors and Aging Laboratory
November 8th, 2019

Abstract: There is much potential for robots to support older adults in their goal of successful aging with high quality of life.  However, for human-robot interactions to be successful, the robots must be designed with user needs, preferences, and attitudes in mind.  The Human Factors and Aging Laboratory (www.hfaging.org) is specifically oriented toward developing a fundamental understanding of aging and bringing that knowledge to bear on design issues important to the enjoyment, quality, and safety of everyday activities of older adults.  In this presentation, I will provide an overview of our research with robots: personal, social, telepresence.  We focus on the human side of human-robot interaction, answering questions such as, are older adults willing to interact with a robot?  What do they want the robot to do?  To look like?  How do they want to communicate with a robot?  Through research examples, I will illustrate the potential for robots to support successful aging as well as the challenges that remain for the design and widespread deployment of robots in this context.

Bio: Wendy A. Rogers, Ph.D. – Shahid and Ann Carlson Khan Professor of Applied Health Sciences at the University of Illinois Urbana-Champaign.  Her primary appointment is in the Department of Kinesiology and Community Health.  She also has an appointment in the Educational Psychology Department and is an affiliate faculty member of the Beckman Institute and the Illinois Informatics Institute. She received her B.A. from the University of Massachusetts – Dartmouth, and her M.S. (1989) and Ph.D. (1991) from the Georgia Institute of Technology.  She is a Certified Human Factors Professional (BCPE Certificate #1539). Her research interests include design for aging; technology acceptance; human-automation interaction; aging-in-place; human-robot interaction; aging with disabilities; cognitive aging; and skill acquisition and training.  She is the Director of the Health Technology Graduate Program; Program Director of CHART (Collaborations in Health, Aging, Research, and Technology); and Director of the Human Factors and Aging Laboratory (www.hfaging.org). Her research is funded by: the National Institutes of Health (National Institute on Aging) as part of the Center for Research and Education on Aging and Technology Enhancement (www.create-center.org); and the Department of Health and Human Services (National Institute on Disability, Independent Living, and Rehabilitation Research; NIDILRR) Rehabilitation Engineering Research Center on Technologies to Support Aging-in-Place for People with Long-term Disabilities (www.rercTechSAge.org).  She is a fellow of the American Psychological Association, the Gerontological Society of America, and the Human Factors and Ergonomics Society.

 

Student Talks:

Towards Soft Continuum Arms for Real World Applications

Naveen Kumar Uppalapati
November 1st, 2019

Abstract: Soft robots are gaining significant attention from the robotics community due to their adaptability, safety, lightweight construction, and cost-effective manufacturing. They have found use in manipulation, locomotion, and wearable devices. In manipulation, Soft Continuum Arms (SCAs) are used to explore uneven terrains, handle objects of different sizes, and interact safely with the environment. Current SCAs use a serial combination of multiple segments to achieve higher dexterity and a larger workspace. However, the serial architecture increases overall weight, hardware, and power requirements, limiting their use in real-world applications. In this talk, I will give insight into the design of compact and lightweight SCAs. The SCAs use pneumatically actuated Fiber Reinforced Elastomeric Enclosures (FREEs) as their building blocks. A single-section BR2 SCA design is shown to have greater dexterity and workspace than current state-of-the-art SCAs. I will present a hybrid between the soft arm and rigid links, known as the Variable Length Nested Soft (VaLeNS) arm, designed to obtain stiffness modulation and force transfer. Finally, I will present a mobile robot prototype for a berry-picking application.

Bio: Naveen Kumar Uppalapati is a 6th-year Ph.D. student in the Dept. of Industrial and Enterprise Systems Engineering at the University of Illinois. He received his bachelor's degree in Instrumentation and Control Engineering in 2013 from the National Institute of Technology, Tiruchirappalli, and his master's degree in Systems Engineering in 2016 from the University of Illinois. His research interests are the design and modeling of soft robots, sensor design, and controls.

Toward Human-like Teleoperated Robot Motion: Performance and Perception of a Choreography-inspired Method in Static and Dynamic Tasks for Rapid Pose Selection of Articulated Robots

Allison Bushman
November 1st, 2019

Abstract: In some applications, operators may want to create fluid, human-like motion on a remotely operated robot, for example, a device used for remote telepresence in hostage negotiation. This paper examines two methods of controlling the pose of a Baxter robot via an Xbox One controller. The first method is a joint-by-joint (JBJ) method in which one joint of each limb is specified in sequence. The second method of control, named Robot Choreography Center (RCC), utilizes choreographic abstractions in order to simultaneously move multiple joints of the limb of the robot in a predictable manner. Thirty-eight users were asked to perform four tasks with each method. Success rate and duration of successfully completed tasks were used to analyze the performances of the participants. Analysis of the preferences of the users found that the joint-by-joint (JBJ) method was considered to be more precise, easier to use, safer, and more articulate, while the choreography-inspired (RCC) method of control was perceived as faster, more fluid, and more expressive. Moreover, performance data found that while both methods of control were over 80% successful for the two static tasks, the RCC method was an average of 11.85% more successful for the two more difficult, dynamic tasks. Future work will leverage this framework to investigate ideas of fluidity, expressivity, and human-likeness in robotic motion through online user studies with larger participant pools.

Bio: Allison Bushman is a second-year master’s student in the Dept. of Mechanical Engineering and Materials Science at the University of Illinois at Urbana-Champaign. She received her bachelor’s degree in mechanical engineering in 2014 from Yale University. She currently works in the RAD Lab to understand what parameters are necessary in deeming a movement as natural or fluid, particularly as it pertains to designing movement in robots.

 

Dynamic Synchronization of  Human Operator and Humanoid Robot via Bilateral Feedback Teleoperation

Joao Ramos
Assistant Professor
Mechanical Science and Engineering
[MORE INFO]

Abstract:

Autonomous humanoid robots are still far from matching the sophistication and adaptability of human perception and motor control. To address this issue, I investigate the utilization of human whole-body motion to command a remote humanoid robot in real time, while providing the operator with physical feedback from the robot's actions. In this talk, I will present the challenges of virtually connecting the human operator with a remote machine in a way that allows the operator to utilize innate motor intelligence to control the robot's interaction with the environment. I will present pilot experiments in which an operator controls a humanoid robot to perform power manipulation tasks, such as swinging a firefighter's axe to break a wall, and dynamic locomotion behaviors, such as walking and jumping.

Bio:

Joao Ramos is an Assistant Professor at the University of Illinois at Urbana-Champaign. He previously was a Postdoctoral Associate at the Biomimetic Robotics Laboratory at the Massachusetts Institute of Technology. He received a PhD from the Department of Mechanical Engineering at MIT in 2018. During his doctoral research, he developed teleoperation systems and strategies to dynamically control a humanoid robot using human whole-body motion via bilateral feedback. His research focuses on the design and control of robotic systems that experience large forces and impacts, such as the MIT HERMES humanoid, a prototype platform for disaster response. Additionally, his research interests include human-machine interfaces, legged locomotion dynamics, and actuation systems.

 

Some Thoughts on Learning Reward Functions

Bradly Stadie
Post-Doctoral Researcher
Vector Institute in Toronto
[MORE INFO]

Abstract:

In reinforcement learning (RL), agents typically optimize a reward function to learn a desired behavior. In practice, crafting reward functions that produce intended behaviors is fiendishly difficult. Due to the curse of dimensionality, sparse rewards are typically too difficult to optimize without carefully chosen curricula. Meanwhile, dense reward functions often encourage unintended behaviors or present overly cumbersome optimization landscapes. To handle these problems, a vast body of work on reward function design has emerged. In this talk, we will recast the reward function design problem into a learning problem. Specifically, we will consider two new algorithms for automatically learning reward functions. First, in Evolved Policy Gradients (EPG), we will carefully consider the problem of meta-learning reward functions. Given a distribution of tasks, can we meta-learn a parameterized reward function that generalizes to new tasks? Does this learned reward allow the agent to solve new tasks more efficiently than our original hand-designed rewards? Second, in Learning Intrinsic Rewards as a Bi-Level Optimization Problem, we consider the problem of learning a more effective reward function in the single-task setting. By using Self-Tuning Networks and tricks from the hyper-parameter optimization literature, we develop an algorithm that produces a better optimization landscape for the agent to learn against. This better optimization landscape ultimately allows the agent to achieve superior performance on a variety of challenging locomotion tasks, when compared to simply learning against the original hand-designed reward.

Bio:

Bradly Stadie is a postdoctoral researcher at the Vector Institute in Toronto, where he works with Jimmy Ba's group. Bradly's overarching research goal is to develop algorithms that allow machines to learn as quickly and flexibly as humans do. At Toronto, Bradly has worked on a variety of topics including reward function learning, causal inference, neural network compression, and one-shot imitation learning. Earlier in his career, he provided one of the first algorithms for efficient exploration in deep reinforcement learning. Bradly completed his PhD under Pieter Abbeel at UC Berkeley. He received his undergraduate degree in mathematics from the University of Chicago.

 

Human-like Robots and Robotic Humans: Who Engineers Who?

Ben Grosser
Associate Professor, School of Art + Design / NCSA
[MORE INFO]

Abstract:

For a while now we’ve watched robots regularly take on new human tasks, especially those that can be made algorithmic such as vacuuming the floor. But the same time frame has also seen growing numbers of experiments with artistic robots, machines made by artists that take on aesthetic tasks of production in art or music. This talk will focus on the complicated relationship between humans and machines by looking at a number of artworks by the author. These will include not only art making robots that many perceive as increasingly human, but also code-based manipulations of popular software systems that reveal how humans are becoming increasingly robotic. In an era when machines act like humans and humans act like machines, who is engineering who?

Bio:

Artist Ben Grosser creates interactive experiences, machines, and systems that examine the cultural, social, and political effects of software. Recent exhibition venues include the Barbican Centre in London, Museum Kesselhaus in Berlin, Museu das Comunicações in Lisbon, and Galerie Charlot in Paris. His works have been featured in The New Yorker, Wired, The Atlantic, The Guardian, The Washington Post, El País, Libération, Süddeutsche Zeitung, and Der Spiegel. The Chicago Tribune called him the "unrivaled king of ominous gibberish." Slate referred to his work as "creative civil disobedience in the digital age." His artworks are regularly cited in books investigating the cultural effects of technology, including The Age of Surveillance Capitalism, The Metainterface, Facebook Society, and Technologies of Vision, as well as volumes centered on computational art practices such as Electronic Literature, The New Aesthetic and Art, and Digital Art. Grosser is an associate professor in the School of Art + Design, co-founder of the Critical Technology Studies Lab at NCSA, and a faculty affiliate with the Unit for Criticism and the School of Information Sciences. https://bengrosser.com

Student Talks:

Multi-Contact Humanoid Fall Mitigation In Cluttered Environment

Shihao Wang
October 4, 2019

Abstract:

Humanoid robots are expected to take on critical roles in the real world in the future. However, this dream cannot be achieved until a reliable fall mitigation strategy has been proposed and validated. Due to their high center of mass, humanoid robots are at high risk of falling to the ground. In such cases, we would like the robot to utilize nearby objects in the environment for fall protection. This presentation discusses my past work on planning multi-contact strategies for fall recovery in cluttered environments. We believe that the capability to make use of the robot's contact(s) provides an effective solution for fall protection.

Bio:

Shihao Wang is a 4th-year Ph.D. student in the Department of Mechanical Engineering and Materials Science at Duke University. Originally from China, he received his Bachelor's degree in Mechanical Engineering from Beihang University in June 2014 and his Master's degree in Mechanical Engineering from Cornell University in June 2015. After one year of research at Penn State University, he joined Duke in Fall 2016 for Ph.D. research focused on robotics, legged locomotion, dynamic walking, and controls.

 

Optimal Control Learning by Mixture of Experts

Gao Tang
October 4, 2019

Abstract:

Optimal control problems are critical to solve for task efficiency. However, their nonconvexity limits their application, especially in time-critical tasks. Practical applications often require solving a parametric optimization problem, which is essentially a mapping from problem parameters to problem solutions. We study how to learn this mapping from offline precomputation. Due to the existence of local optima, the mapping may be discontinuous. This presentation discusses how to use a mixture-of-experts model to learn this discontinuous function accurately, to achieve high reliability in robotic applications.
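A minimal sketch of this idea (toy data and a hand-set gate, purely illustrative rather than the talk's method): fit one linear expert per solution branch so the learned parameter-to-solution mapping can represent the discontinuity that a single smooth regressor would blur.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parametric problem whose solution map jumps at x = 1, mimicking
# two local-optimum branches of a nonconvex optimal control problem.
x = rng.uniform(0.0, 2.0, size=200)
y = np.where(x < 1.0, 2.0 * x, 2.0 * x - 3.0)

# Gate labels: in practice these would come from clustering the offline
# precomputed solutions; here the branch membership is known.
labels = (x >= 1.0).astype(int)

# One least-squares linear expert per branch.
experts = []
for k in range(2):
    xs, ys = x[labels == k], y[labels == k]
    A = np.column_stack([xs, np.ones_like(xs)])
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    experts.append(coef)

def predict(x_new):
    a, b = experts[int(x_new >= 1.0)]  # gate picks the expert
    return a * x_new + b

print(round(predict(0.5), 6), round(predict(1.5), 6))  # about 1.0 and 0.0
```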

Bio:

Gao Tang is a 4th-year Ph.D. student in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Before coming to UIUC, he spent three years as a Ph.D. student at Duke University. He received his Bachelor's and Master's degrees in Aerospace Engineering from Tsinghua University. His research focuses on numerical optimization and motion planning.

 

Bioinspired Aerial and Terrestrial Locomotion Strategies

[MORE INFO]

Aimy Wissa
Assistant Professor, Mechanical Science and Engineering
Bio-inspired Adaptive Morphology (BAM) Lab
Sept. 27th, 2019

Abstract:

Nature has evolved various locomotion (self-propulsion) and shape adaptation (morphing) strategies to survive and thrive in diverse and uncertain environments. Both in the air and on the ground, natural organisms continue to surpass engineered unmanned aerial and ground vehicles. Key strategies that Nature often exploits include local elasticity and adaptiveness to simplify global actuation and control. Unlike engineered systems, which rely heavily on active control, natural structures tend also to rely on reflexive and passive control. This diversity of control strategies yields multifunctional structures. Two examples of multifunctional structures will be presented in this talk: avian-inspired deployable structures and a click beetle-inspired legless jumping mechanism.

The concept of wings as multifunctional adaptive structures will be discussed, and several flight devices found on birds' wings will be introduced as a pathway toward revolutionizing the current design of small unmanned air vehicles. Experimental, analytical, and numerical results will be presented to discuss the efficacy of such devices. The discussion of avian-inspired devices will be followed by an introduction to a click beetle-inspired jumping mechanism that exploits distributed springs to circumvent muscle limitations; such a mechanism can bypass the shortcomings of smart actuators, especially in small-scale robotics applications.

 

Student Talks:

CyPhyHouse: A programming, simulation, and deployment toolchain for heterogeneous distributed coordination

[MORE INFO]

Ritwika Ghosh
September 20th, 2019

Abstract:

Programming languages, libraries, and development tools have transformed the application development processes for mobile computing and machine learning. CyPhyHouse is a toolchain that aims to provide similar programming, debugging, and deployment benefits for distributed mobile robotic applications. Users can develop hardware-agnostic, distributed applications using the high-level, event-driven Koord programming language, without requiring expertise in controller design or distributed network protocols. I will talk about the CyPhyHouse toolchain: its design, its implementation, and the challenges faced and lessons learned in the process.
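The flavor of event-driven, hardware-agnostic coordination can be suggested with a small analogy in plain Python (this is not Koord syntax; the shared task queue below is a hypothetical stand-in for the distributed shared variables such a language provides):

```python
import queue
import threading

# Hypothetical analogy, not Koord: a shared queue stands in for distributed
# shared state, and each "robot" is a hardware-agnostic worker that reacts
# to events (a waypoint becoming available) rather than driving a controller.
tasks = queue.Queue()
for waypoint in [(0, 0), (1, 2), (3, 1), (2, 2)]:
    tasks.put(waypoint)

visited = []          # record of (robot name, waypoint) claims
lock = threading.Lock()

def robot(name):
    while True:
        try:
            wp = tasks.get_nowait()  # event: a waypoint became available
        except queue.Empty:
            return                   # no tasks left; this robot is done
        with lock:
            visited.append((name, wp))  # stand-in for "drive to waypoint"

threads = [threading.Thread(target=robot, args=(f"r{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every waypoint is claimed exactly once, by whichever robot got there first.
assert len(visited) == 4
```

In the real toolchain, the language runtime handles the distribution, conflict resolution, and deployment to heterogeneous robot hardware that this single-process sketch glosses over.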

Bio:

Ritwika Ghosh is a sixth-year Ph.D. student in the Department of Computer Science at the University of Illinois. She received her Bachelor's degree in Mathematics and Computer Science from Chennai Mathematical Institute in India in 2013. Her research interests are formal methods, programming languages, and distributed systems.

Controller Synthesis Made Real: Reach-Avoid Specifications and Linear Dynamics

Chuchu Fan
September 20th, 2019
CSL Studio Rm 1232

Abstract:

The controller synthesis question asks whether an input can be generated for a given system (or plant) so that it achieves a given specification. Algorithms for answering this question hold the promise of automating controller design. They have the potential to yield high-assurance systems that are correct-by-construction, and even negative answers to the question can convey insights about the unrealizability of specifications. There has been a resurgence of interest in controller synthesis, with the rise of powerful tools and compelling applications such as vehicle path planning, motion control, circuit design, and various other engineering areas. In this talk, I will introduce a novel approach relying on symbolic sensitivity analysis to synthesize provably correct controllers efficiently for large linear systems with reach-avoid specifications. Our solution uses a combination of an open-loop controller and a tracking controller, thereby reducing the problem to smaller tractable problems such as satisfiability over quantifier-free linear real arithmetic. I will also present RealSyn, a tool implementing the synthesis algorithm, which has been shown to scale to several high-dimensional systems with complex reach-avoid specifications.
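The open-loop half of this recipe can be illustrated with a toy linear system. The sketch below (a minimal example assuming a discrete-time double integrator; it is not RealSyn, and it handles only the "reach" part, with no obstacles or tracking controller) stacks the dynamics over a horizon and solves a linear system for an input sequence driving the state to a goal:

```python
import numpy as np

# Toy discrete-time double integrator (dt = 0.1): x_{k+1} = A x_k + B u_k.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
H = 20  # planning horizon

x0 = np.array([0.0, 0.0])    # start at rest at the origin
goal = np.array([1.0, 0.0])  # reach position 1 with zero velocity

# Unrolled dynamics: x_H = A^H x0 + sum_k A^(H-1-k) B u_k = A^H x0 + G u,
# so open-loop synthesis reduces to a linear feasibility problem in u.
G = np.hstack([np.linalg.matrix_power(A, H - 1 - k) @ B for k in range(H)])
rhs = goal - np.linalg.matrix_power(A, H) @ x0
u, *_ = np.linalg.lstsq(G, rhs, rcond=None)  # minimum-norm input sequence

# Simulate the nominal open-loop trajectory to confirm the goal is reached.
x = x0.copy()
for k in range(H):
    x = A @ x + B @ u[k:k + 1]
assert np.allclose(x, goal, atol=1e-6)
```

The actual approach additionally encodes avoid constraints as logical formulas over the inputs (hence the reduction to quantifier-free linear real arithmetic) and wraps a tracking controller around this nominal trajectory so that bounded deviations still satisfy the specification.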

Bio:

Chuchu Fan is finishing up her Ph.D. in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. She will join the AeroAstro Department at MIT as an assistant professor in 2020. She received her Bachelor's degree from the Department of Automation at Tsinghua University in 2013. Her research interests are in the areas of formal methods and control for safe autonomy.

Dancing With Robots: Questions About Composition with Natural and Artificial Bodies

[MORE INFO]

Amy LaViers – Robotics, Automation, and Dance (RAD) Lab
September 13th, 2019

Abstract:

The formulation of questions is a central yet non-specific activity: an answer can be sought through many modes of investigation, such as scientific inquiry, research and development, or the creation of art. This talk will outline guiding questions for the Robotics, Automation, and Dance (RAD) Lab, which are explored via artistic creation alongside research in robotics, each vein of inquiry informing the other, and will then focus on a few initial answers in the form of robot-augmented dances, digital spaces that track the motion of participants, artistic extensions to student engineering theses, and participatory performances that employ the audience's own personal machines. For example, guiding questions include: By what measures do robots outperform humans? By what measures do humans outperform robots? How many ways can a human walk? Is movement a continuous phenomenon? Does it convey information? What is the utility of dance? What biases do new technologies hold? What structures can reasonably be named "leg", "arm", "hand", "wing", and the like? Why does dancing feel so different from programming? What does it mean for two distinct bodies to move in unison? How does it feel to move alongside a robot? In order to frame these questions in an engineering context, this talk also presents an information-theoretic model of expression through motion, where artificial systems are modeled as a source communicating across a channel to a human receiver.

Bio:

Amy LaViers is an assistant professor in the Mechanical Science and Engineering Department at the University of Illinois at Urbana-Champaign (UIUC) and director of the Robotics, Automation, and Dance (RAD) Lab. She is a recipient of a 2015 DARPA Young Faculty Award (YFA) and a 2017 Director's Fellowship. Her teaching has been recognized on UIUC's list of Teachers Ranked as Excellent by Their Students, with Outstanding distinction. Her choreography has been presented internationally, including at Merce Cunningham's studios, Joe's Pub at the Public Theater, the Ferst Center for the Arts, and the Ammerman Center for Arts and Technology. She is a co-founder of two startup companies: AE Machines, Inc., an automation software company that won Product Design of the Year at the 4th Revolution Awards in Chicago in 2017 and was a finalist for Robot of the Year at Station F in Paris in 2018, and caali, LLC, an embodied media company that is developing an interactive installation at the Microsoft Technology Center in Chicago. She completed a two-year Certification in Movement Analysis (CMA) in 2016 at the Laban/Bartenieff Institute of Movement Studies (LIMS). Prior to UIUC, she held a position as an assistant professor in systems and information engineering at the University of Virginia. She completed her Ph.D. in electrical and computer engineering at Georgia Tech with a dissertation that included a live performance exploring stylized motion. Her research began in her undergraduate thesis at Princeton University, where she earned a certificate in dance and a degree in mechanical and aerospace engineering.