Robotics Seminar @ Illinois

The Illinois Robotics Group is proud to host the Robotics Seminar @ Illinois series. The seminars feature a diverse lineup of speakers, reflecting the interdisciplinary nature of robotics.

Talks are given by both professors and students each week, with occasional demonstrations afterwards in the Intelligent Robotics Lab.

Talks are held at 1pm on Fridays via Zoom, with some talks given in person in the CSL Studio conference room (1232), just west of the Center for Autonomy Lab facilities.

Please Feel Free to Recommend Speakers for Future Talks

If you have comments or questions about the IRG Seminar Series, please contact John M. Hart, Manager & Coordinator of the CfA Shared Robotics Laboratories.

Spring 2024 Schedule

  1. Jan. 26th — Professor Timothy Bretl (UIUC) 

    Recording link: https://uofi.box.com/s/rlw8msdrgosqtmzu433gtwn2a3yr31ji

  2. Feb. 2nd — Icebreaker Introduction, in person

  3. Feb. 9th — Professor Rob Platt (Northeastern), no recording

  4. Feb. 15th (Thursday) — External Panel Discussion on Robot Learning, in person

    Panelists: Dr. Andy Zeng (Google DeepMind), Prof. Yunzhu Li (UIUC), Jiayuan Mao (MIT)

    Topic: Foundation Models for Robot Learning; no recording

  5. Feb. 23rd — Professor Mark Plecnik (Notre Dame)

    Recording link: https://uofi.box.com/s/rt3f9q3dohedfr99sad62p1ijsoog505

  6. March 1st — Research lightning talks, in person

    Intro: This event will consist of short faculty and student presentations sharing their research insights. Registration link: https://docs.google.com/forms/d/e/1FAIpQLSdNLdfT4BjZgmL5jl2jUjIB6MBriWhgCrHavkk25HLn9m1Lgg/viewform

  7. March 8th — Professor Henny Admoni (CMU)

    Recording link: https://uofi.box.com/s/ib5f5v9dxhv4rgg7inpjynkqs6tpttv4

    March 15th — Pause for Spring Break

  8. March 22nd — Professor Michael Posa (UPenn)

    Recording link: https://uofi.box.com/s/7zsydpq40v2nx5wlc1u9ugmzgj1trn44

  9. March 29th — External Student Talks

    Recording link: https://uofi.box.com/s/9rdz2y3wdoheet9e214r5wnd0a05o4m6

  10. April 5th — Canceled due to time conflict

  11. April 12th — Faculty Panel Discussion

    Recording link: TBA

  12. April 19th — Yan Gu (Purdue) 

    Recording link: TBA

  13. April 26th — External Student Debate

    Recording link: TBA

  14. May 3rd — Aadeel Akhtar (Psyonic)

    Recording link: TBA

  15. May 10th — ICRA Presentation Practice

    Recording link: TBA

Fall 2023 Schedule

  1. Sept. 1st — Professor David Fridovich-Keil (UT Austin) 

    Recording link: https://uofi.box.com/s/wtov1k9fb3x7p5zicy10wy8d25cu82bu

  2. Sept. 8th — Laura Treers (GaTech)

    Recording link: https://uofi.box.com/s/wtov1k9fb3x7p5zicy10wy8d25cu82bu

  3. Sept. 15th — Professor Ryan Truby (Northwestern)

    Recording link: https://uofi.box.com/s/bdxeq2jpphz4xg43g77bz1jlwp526ah2

  4. Sept. 22nd — Professor Yunzhu Li (UIUC)

    Recording link: https://uofi.box.com/s/odlsgifpk52txkjje4oixq1kop2l3rcm

  5. Sept. 29th — Professor Anh-Van Ho (JAIST), in person

    Recording link: https://uofi.box.com/s/31vwj904sbk7zthc62x3mi1imkrfwydi

    Oct. 6th — Pause for IROS event

  6. Oct. 13th — Professor Ian Abraham (Yale)

    Recording link: https://uofi.box.com/s/o0m0gn2i52h4l5hykpyb61zq8bp2gp1u

  7. Oct. 20th — Rachel Gehlhar Humann (UCLA) 

    Recording link: https://uofi.box.com/s/7gvov5udxtbqeiqpbdsirhlx7cq5qo2u

  8. Oct. 27th — Professor Yanran Ding (UMich)

    Recording link: https://uofi.box.com/s/4y1vub1wauzznjw2ue5dslgeky2ombmf

  9. Nov. 3rd — Professor Wenzhen Yuan (UIUC), in person

    No recording

  10. Nov. 10th — Professor Hao Zhang (UMass) 

    Recording link: https://uofi.box.com/s/njuzotzy99a16jqrlwm73cs4p0golboo

  11. Nov. 17th — Round Table Discussion on Safety in Robotics

    No recording

    Nov. 24th — Thanksgiving break

  12. Dec. 1st — Jason Ma (UPenn)

    Recording link: TBA


Spring 2023 Schedule

  1. January 27th — Professor Matthew Gombolay (Georgia Tech) 

    Recording link: https://uofi.box.com/s/2gh8byltdvx5vyxihdqc9lie4kxnvv1q

  2. February 3rd — Professor Justin Yim (UIUC)

    Recording link: https://uofi.box.com/s/584lcogbawqfsysk5mraz9uks5w1nbh6

  3. February 10th — Professor Shuran Song (Columbia University)

    Recording link: https://uofi.box.com/s/jagw0mjnez37aiqfk00iqgueemw1ir7r

  4. February 17th — Andrea Bajcsy (UC Berkeley)

    Recording link: https://uofi.box.com/s/vrgcq42u5t3avqmwxsf28w27z23wyfov

  5. February 24th — Professor Amanda Prorok (Cambridge)

    Recording link: https://uofi.box.com/s/hlzxrcpbj66npra85tnahnr9ewlzf0gh

  6. March 3rd — Glen Chou (MIT)

    Recording link: https://uofi.box.com/s/oaij8jo0oon7yye77rpn8x9rx464tx1a

  7. March 10th — Kyungseo Park (UIUC)

    Recording link: https://uofi.box.com/s/wri23ddc58du8fzuhundk5gmbjh36nmj

  8. March 24th — María Santos (Princeton)

    Recording link: https://uofi.box.com/s/1vh26s5lpit3s077zbnj0v3opsv08ge9

  9. March 31st — Sheng Cheng (UIUC)

    Recording link: https://uofi.box.com/s/acsn0l0dznb8nz48y8et7hrskyhe8xb4

  10. April 7th — Professor Talia Moore (UMich)

    Recording link: https://uofi.box.com/s/8den6asi11r4fzhmxykab4b11jdth5r7

  11. April 14th — Professor Abhishek Gupta (University of Washington)

    Recording link: https://uofi.box.com/s/040luxx1hq176kbucc6qobxxg8shmvuh

  12. April 21st — Professor Kaiqing Zhang (University of Maryland)

    Recording link: https://uofi.box.com/s/pvtisc7fozfa3sqej97snax0qjizkmuo

  13. April 28th — Professor Zackory Erickson (CMU)


4/28/2023

Title: Robot Learning, Sensing, and Teleoperation in Pursuit of Robotic Caregivers

Speaker: Zackory Erickson, CMU

Abstract: Designing safe and reliable robotic assistance for caregiving is a grand challenge in robotics. A sixth of the United States population is over the age of 65, and in 2014 more than a quarter of the population had a disability. Robotic caregivers could greatly benefit society; yet physical robotic assistance presents several challenges and open research questions relating to teleoperation, active sensing, and autonomous control. In this talk, I will present recent techniques and technology that my group has developed towards addressing core challenges in robotic caregiving. First, I will introduce a head-worn interface that enables people with loss of hand function (due to spinal cord injury or neurodegenerative diseases) to teleoperate assistive mobile manipulators. I will then describe capacitive servoing, a new sensing technique for robotic manipulators to sense the human body and track trajectories along the body. Finally, I will present our recent work in robot learning, including policy learning and dynamics modeling, to perform complex manipulation of deformable garments and blankets around the human body.

Bio: Zackory Erickson is an Assistant Professor in The Robotics Institute at Carnegie Mellon University, where he leads the Robotic Caregiving and Human Interaction (RCHI) Lab. His research focuses on developing new robot learning, mobile manipulation, and sensing methods for physical human-robot interaction and healthcare. Zackory received his PhD in Robotics and M.S. in Computer Science from Georgia Tech, and his B.S. in Computer Science from the University of Wisconsin–La Crosse. His work won the Best Student Paper Award at ICORR 2019 and was a finalist for Best Paper in Service Robotics at ICRA 2019.


4/21/2023

Title: Can direct latent model learning solve linear quadratic Gaussian control?

Speaker: Kaiqing Zhang, University of Maryland, College Park

Abstract: We study the task of learning state representations from potentially high-dimensional observations, inspired by its broad applications in robotics and especially robot learning. We pursue a direct latent model learning approach, where a dynamic model in some latent state space is learned by predicting quantities directly related to planning (e.g., costs) without reconstructing the observations. In particular, we focus on an intuitive cost-driven state representation learning method for solving Linear Quadratic Gaussian (LQG) control, one of the most fundamental partially observable control problems. As our main results, we establish finite-sample guarantees of finding a near-optimal state representation function and a near-optimal controller using the directly learned latent model. To the best of our knowledge, despite various empirical successes, prior to this work it was unclear if such a cost-driven latent model learner enjoys finite-sample guarantees. We have also studied the approach of latent model learning in MuZero, a recent breakthrough in empirical reinforcement learning, under our framework of LQG control. Our work underscores the value of predicting multi-step costs, an idea that is key to our theory, and notably also an idea that is known to be empirically valuable for learning state representations.
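
For readers outside control theory, here is the standard LQG problem the abstract refers to, stated in generic notation (a minimal formulation for context; the talk's exact setup may differ):

    \begin{aligned}
    x_{t+1} &= A x_t + B u_t + w_t, \qquad w_t \sim \mathcal{N}(0, W),\\
    y_t &= C x_t + v_t, \qquad v_t \sim \mathcal{N}(0, V),\\
    \min_{\pi}\; J(\pi) &= \limsup_{T \to \infty} \frac{1}{T}\,\mathbb{E}\Big[\sum_{t=1}^{T} y_t^\top Q\, y_t + u_t^\top R\, u_t\Big],
    \end{aligned}

where the policy u_t = \pi(y_1, u_1, \ldots, y_t) may depend only on past observations and inputs. The classical solution pairs a Kalman filter with an LQR gain on the filtered state (the separation principle); the talk instead asks whether a latent state and controller learned directly from data, by predicting costs rather than reconstructing y_t, can recover a near-optimal controller.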

Bio: Kaiqing Zhang is currently an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) and the Institute for Systems Research (ISR) at the University of Maryland, College Park. He is also affiliated with the Maryland Robotics Center (MRC). During the deferral period before joining Maryland, he was a postdoctoral scholar affiliated with LIDS and CSAIL at MIT, and a Research Fellow at the Simons Institute for the Theory of Computing at Berkeley. He received his Ph.D. from the Department of ECE and CSL at the University of Illinois at Urbana-Champaign (UIUC), M.S. degrees in both ECE and Applied Math from UIUC, and a B.E. from Tsinghua University. His research interests lie broadly in Control and Decision Theory, Game Theory, Robotics, Reinforcement/Machine Learning, Computation, and their intersections. He is the recipient of several awards and fellowships, including the Hong, McCully, and Allen Fellowship, the Simons-Berkeley Research Fellowship, the CSL Thesis Award, and an ICML Outstanding Paper Award.


4/14/2023

Title: How to Train Your Robot: Techniques for Enabling Robotic Learning in the Real World

Speaker: Abhishek Gupta, University of Washington

Abstract: Reinforcement learning has been a powerful tool for building continuously improving systems in domains like video games and animated character control, but it has proven relatively more challenging to apply to problems in real-world robotics. In this talk, I will argue that this challenge can be attributed to a mismatch in assumptions between typical RL algorithms and what the real world actually provides, making data collection and utilization difficult. I will then discuss how to build algorithms and systems that bridge these assumptions and allow robotic learning systems to operate under the assumptions of the real world. In particular, I will describe how we can develop algorithms that ensure easily scalable supervision from humans, perform safe, directed exploration on practical time scales by leveraging prior data, and enable uninterrupted autonomous data collection at scale. I will show how these techniques can be applied to real-world robotic systems and discuss how they have the potential to be applicable more broadly across a variety of machine learning applications. Lastly, I will provide some perspectives on how this opens the door towards future deployment of robots in unstructured human-centric environments.

Bio: Abhishek Gupta is an assistant professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. He was formerly a postdoctoral fellow at MIT, working with Pulkit Agrawal and Russ Tedrake. He completed his PhD at UC Berkeley working with Pieter Abbeel and Sergey Levine, building systems that can leverage reinforcement learning algorithms to solve robotics problems. He is interested in research directions that enable performing reinforcement learning directly in the real world — reward supervision in reinforcement learning, large-scale real-world data collection, learning from demonstrations, and multi-task reinforcement learning. He has also spent time at Google Brain and is a recipient of the NDSEG and NSF graduate research fellowships. A more detailed description can be found at https://homes.cs.washington.edu/~abhgupta/.


4/7/2023

Title: Robots for Evolutionary Biology

Speaker: Talia Y. Moore, University of Michigan, Ann Arbor

Abstract: How can biomimetic robots help us in our quest to understand the natural world? And how can examining the evolution and diversity of animal systems help us design better robots? Using a case study of snake-mimicking soft robots, I describe several categories of bio-inspired robotics that serve multiple distinct research goals. By introducing this categorization, I invite us all to consider the many ways in which robotics and biology can serve each other.

Bio: Talia Y. Moore is an Assistant Professor of Robotics and of Mechanical Engineering at the University of Michigan, where she is also affiliated with the Department of Ecology and Evolutionary Biology, and the Museum of Zoology. She examines the biomechanics, evolution, and ecology of animals to create bio-inspired robotic systems and uses these physical models to evaluate biological hypotheses.


3/31/2023

Title: Safe Learning and Control: An ℒ1 Adaptive Control Approach

Speaker: Sheng Cheng, UIUC

Abstract: In recent years, learning-based control paradigms have seen many success stories on various systems and robots. However, as these robots prepare to enter the real world, operating safely in the presence of imperfect model knowledge and external disturbances will be vital to ensure mission success. In the first part of the talk, we present an overview of L1 adaptive control, how it enables safety in autonomous robots, and some of its success stories in the aerospace industry. In the second part of the talk, we present some of our recent results that explore controller tuning using machine learning tools while preserving the controller structure and stability properties. We will conclude with an overview of projects in our lab that build on this framework across different applications.
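
For context, a textbook-style schematic of the L1 adaptive control architecture for a system with matched parametric uncertainty (generic notation drawn from standard L1 references, not necessarily the formulation used in the talk):

    \begin{aligned}
    \text{plant:} \quad & \dot{x} = A_m x + b\,\big(u + \theta^\top x\big),\\
    \text{state predictor:} \quad & \dot{\hat{x}} = A_m \hat{x} + b\,\big(u + \hat{\theta}^\top x\big),\\
    \text{adaptation:} \quad & \dot{\hat{\theta}} = \Gamma\,\operatorname{Proj}\big(\hat{\theta},\, -x\,\tilde{x}^\top P b\big), \qquad \tilde{x} = \hat{x} - x,\\
    \text{control:} \quad & u(s) = -C(s)\,\big(\hat{\theta}^\top x\big)(s),
    \end{aligned}

where A_m is Hurwitz, P solves the Lyapunov equation A_m^\top P + P A_m = -Q, and C(s) is a strictly proper low-pass filter with C(0) = 1. The filter is the defining feature of L1: it decouples the fast estimation loop from the control bandwidth, which is what yields predictable performance and robustness margins, and it is plausibly the kind of fixed controller structure that the learning-based tuning described in the talk preserves.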

Bio: Sheng Cheng received the B.Eng. degree in Control Science and Engineering from the Harbin Institute of Technology in China and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Maryland. He is a Postdoctoral Research Associate with the University of Illinois Urbana–Champaign. His current research interests include aerial robotics, learning-enabled control, adaptive control, and distributed parameter systems.


3/24/2023

Title: Multi-robot Spatial Coordination: Heterogeneity, Learning, and Artistic Expression

Speaker: María Santos, Princeton

Abstract: Multi-robot teams’ inherent features of redundancy, increased spatial coverage, flexible reconfigurability, and fusion of distributed sensors and actuators make these systems particularly suitable for applications such as precision agriculture, search-and-rescue operations, or environmental monitoring. In such scenarios, coverage control constitutes an attractive coordination strategy for a swarm, since it allows the robots in a team to spread over a domain according to the importance of its regions: the higher the relevance of an area to the objective of the application, the higher the concentration of robots will be. The coverage paradigm typically assumes that all the robots can contribute equally to the coverage task and that the coverage objective is fully known prior to deployment of the team. In this talk, we consider realistic scenarios where swarms need to simultaneously monitor multiple types of features (e.g., radiation, humidity, temperature) at different locations, which requires a mixture of sensing capabilities too extensive to be designed into every individual robot. This challenge is addressed by considering heterogeneous multi-robot teams, where each robot is equipped with a subset of those sensors, as long as, collectively, the team has all the sensor modalities needed to monitor the collection of features in the domain.

Furthermore, we dive into the scenario where robots need to monitor an environment without previous knowledge of its spatial distribution of features. To achieve this, we present an approach where the team simultaneously learns the coverage objectives and optimizes its spatial allocation accordingly, all via local interactions within the team and with the environment. Towards the end of the talk, we move away from the conventional applications of robotic swarms to touch upon how coverage can serve as an interaction modality for artists to effectively utilize robotic swarms for artistic expression. In particular, we focus on the heterogeneous variation of coverage as the means to interactively control desired concentrations of color throughout a canvas for the purpose of artistic multi-robot painting.
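
To make the coverage-control idea above concrete, here is a minimal Lloyd-style coverage step in Python (numpy only). The importance density phi, the Monte Carlo discretization of the domain, and the step size are illustrative assumptions, not the algorithms presented in the talk:

    import numpy as np

    def coverage_step(robots, points, phi, step=0.5):
        """One Lloyd-style update: each robot moves toward the
        phi-weighted centroid of its Voronoi cell, so the team
        concentrates where the importance density phi is high."""
        # Assign each sample point to its nearest robot (Voronoi partition).
        dists = np.linalg.norm(points[:, None, :] - robots[None, :, :], axis=2)
        owner = dists.argmin(axis=1)
        new_robots = robots.copy()
        for i in range(len(robots)):
            cell = points[owner == i]
            if len(cell) == 0:
                continue                                # empty cell: stay put
            w = phi(cell)                               # importance weights
            centroid = (cell * w[:, None]).sum(0) / w.sum()
            new_robots[i] += step * (centroid - robots[i])
        return new_robots

    # Example: 5 robots covering the unit square with one important hotspot.
    rng = np.random.default_rng(0)
    robots = rng.random((5, 2))
    points = rng.random((4000, 2))                      # samples of the domain
    phi = lambda p: np.exp(-20.0 * ((p - 0.7) ** 2).sum(axis=1))
    for _ in range(50):
        robots = coverage_step(robots, points, phi)

After a few dozen iterations the robots cluster around the high-density region, which is exactly the "more robots where it matters" behavior the abstract describes; the heterogeneous setting in the talk generalizes this by giving each sensed feature its own density.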

Bio: María Santos is a Postdoctoral Research Associate in the Department of Mechanical and Aerospace Engineering at Princeton University, where she works with Dr. Naomi Leonard. María completed her PhD in Electrical and Computer Engineering at the Georgia Institute of Technology in 2020, advised by Dr. Magnus Egerstedt. Prior to that, she received an M.S. degree in Industrial Engineering (Ingeniería Industrial) in 2013 from the University of Vigo, Spain, and an M.S. degree in Electrical and Computer Engineering from the Georgia Institute of Technology, Atlanta, GA, USA, in 2016 as a Fulbright scholar. María’s research deals with the distributed coordination of large multi-agent and multi-robot systems, with a particular focus on modeling heterogeneous teams and the execution of dynamic or unknown tasks. She is also very interested in exploring how to use swarm robotics in various forms of artistic expression, research for which she was awarded a La Caixa Fellowship for Graduate Studies in North America during her doctoral studies.


3/10/2023

Title: Whole-body Robot Skins for Safe and Contact-rich Human-Robot Collaboration

Speaker: Kyungseo Park, UIUC

Abstract: Collaborative robots coexist with humans in unstructured environments and engage in various tasks involving physical interaction through the entire body. To ensure the safety and versatility of these robots, it is desirable to use soft tactile sensors that provide mechanical compliance and tactile data simultaneously. The mechanical properties of the soft material effectively mitigate the risk of physical contact, while the tactile data enable active compliance or social interaction. For this reason, many studies have been conducted to develop soft tactile sensors, but their extension to whole-body robot skin is partially hindered by practical limitations such as low scalability and poor durability. Thus, it is worthwhile to devise an optimal approach to implementing whole-body robot skin. In this talk, I will present two works on soft whole-body robot skin for safe and contact-rich human-robot collaboration. First, I will introduce a biomimetic robot skin that imitates the features of human skin, such as protection and multimodal tactile sensation. Then, I will discuss the methods used to implement the biomimetic robot skin (i.e., tomography) and demonstrate its capabilities to sense multi-modal tactile data over a large area. Second, a soft pneumatic robot skin will be presented along with its working principle. This robot skin has a simple structure and functionality but has been seamlessly integrated into a robot arm and used to demonstrate safe and intuitive physical human-robot interaction. Finally, I will examine the significance and limitations of these works and discuss how they can be improved.

Bio: Kyungseo Park is a postdoctoral researcher at the UIUC Coordinated Science Laboratory, advised by Prof. Joohyung Kim. His research interests include robotics, physical human-robot interaction, and soft robotics. Kyungseo received his B.S., M.S., and Ph.D. in Mechanical Engineering from the Korea Advanced Institute of Science and Technology (KAIST), South Korea, in 2016, 2018, and 2022, respectively.


3/3/2023

Title: Toward Safe Learning-based Autonomy with Integrated Perception, Planning, and Control

Speaker: Glen Chou, MIT

Abstract: To deploy robots in unstructured, human-centric environments, we must guarantee their ability to safely and reliably complete tasks. In such environments, uncertainty runs rampant and robots invariably need data to refine their autonomy stack. While machine learning can leverage data to obtain components of this stack, e.g., task constraints, dynamics, and perception modules, blindly trusting these potentially unreliable models can compromise safety. Determining how to use these learned components while retaining unified, system-level guarantees on safety and robustness remains an urgent open problem. In this talk, I will present two lines of research towards achieving safe learning-based autonomy. First, I will discuss how to use human task demonstrations to learn hard constraints which must be satisfied to safely complete that task, and how we can guarantee safety by planning with the learned constraints in an uncertainty-aware fashion. Second, I will discuss how to determine where learned perception and dynamics modules can be trusted, and to what extent. We imbue a motion planner with this knowledge to guarantee safe goal reachability when controlling from high-dimensional observations (e.g., images). We demonstrate that these theoretical guarantees translate to empirical success, in simulation and on hardware.

Bio: Glen Chou is a postdoctoral associate at MIT CSAIL, advised by Prof. Russ Tedrake. His research focuses on end-to-end safety and robustness guarantees for learning-enabled robots. Previously, Glen received an MS and PhD in Electrical and Computer Engineering from the University of Michigan in 2022, and dual B.S. degrees in Electrical Engineering and Computer Science and Mechanical Engineering from UC Berkeley in 2017. He is a recipient of the National Defense Science and Engineering Graduate (NDSEG) fellowship and is an R:SS Pioneer.


2/24/2023

Title: Learning-Based Methods for Multi-Agent Navigation

Speaker: Prof. Amanda Prorok, Cambridge University

Abstract: In this talk, I discuss our work on using Graph Neural Networks (GNNs) to solve multi-agent coordination problems. I begin by describing how we use GNNs to find a decentralized solution by learning what the agents need to communicate to one another. This communication-based policy is able to achieve near-optimal performance; moreover, when combined with an attention mechanism, we can drastically improve generalization to very-large-scale systems. Next, I consider the inverse problem: instead of optimizing agent policies, what if we could modify the navigation environment, instead? Towards that end, I introduce an environment optimization approach that guarantees the existence of complete solutions, improving agent navigation success rates over heuristic methods. Finally, I discuss challenges in the transfer of learned policies to the real world.
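
As a toy illustration of the communication-based policy structure the abstract describes, the sketch below implements one GNN message-passing round in Python (numpy). The weight matrices, tanh nonlinearity, and sum aggregation are placeholder choices, not the trained policies from the talk:

    import numpy as np

    def gnn_round(x, adj, W_msg, W_self, W_out):
        """One message-passing round for n agents with features x (n, d).
        adj is the (n, n) communication graph (1 = edge). Each agent sees
        only its neighbors' messages, so the policy is decentralized."""
        msgs = np.tanh(x @ W_msg)           # what each agent transmits
        agg = adj @ msgs                    # sum of incoming messages
        h = np.tanh(x @ W_self + agg)       # fuse own state with messages
        return h @ W_out                    # per-agent action

    rng = np.random.default_rng(1)
    n, d, a = 4, 8, 2                       # agents, feature dim, action dim
    x = rng.standard_normal((n, d))
    adj = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)         # a line graph
    W_msg, W_self, W_out = (0.1 * rng.standard_normal(s)
                            for s in ((d, d), (d, d), (d, a)))
    actions = gnn_round(x, adj, W_msg, W_self, W_out)   # shape (n, 2)

Because the same weights are shared by every agent and information flows only along graph edges, such a policy can be deployed on team sizes different from those seen in training, which is the generalization property highlighted in the abstract.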

Bio: Amanda Prorok is Professor of Collective Intelligence and Robotics in the Department of Computer Science and Technology at Cambridge University, and a Fellow of Pembroke College. She has been honoured by numerous research awards, including an ERC Starting Grant, an Amazon Research Award, the EPSRC New Investigator Award, the Isaac Newton Trust Early Career Award, and several Best Paper awards. Her PhD thesis was awarded the Asea Brown Boveri (ABB) prize for the best thesis at EPFL in Computer Science. She serves as Associate Editor for IEEE Robotics and Automation Letters (R-AL) and Associate Editor for Autonomous Robots (AURO). Prior to joining Cambridge, Amanda was a postdoctoral researcher at the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory at the University of Pennsylvania, USA, where she worked with Prof. Vijay Kumar. She completed her PhD at EPFL, Switzerland, with Prof. Alcherio Martinoli.


2/17/2023

Title: Bridging Safety and Learning in Human-Robot Interaction

Speaker: Andrea Bajcsy, UC Berkeley

Abstract: From autonomous cars in cities to mobile manipulators at home, robots must interact with people. What makes this hard is that human behavior—especially when interacting with other agents—is vastly complex, varying between individuals, environments, and over time. Thus, robots rely on data and machine learning throughout the design process and during deployment to build and refine models of humans. However, by blindly trusting their data-driven human models, today’s robots confidently plan unsafe behaviors around people, resulting in anything from miscoordination to dangerous collisions. My research aims to ensure safety in human-robot interaction, particularly when robots learn from and about people. In this talk, I will discuss how treating robot learning algorithms as dynamical systems driven by human data enables safe human-robot interaction. I will first introduce a Bayesian monitor which infers online if the robot’s learned human model can evolve to well-explain observed human data. I will then discuss how control-theoretic tools enable us to formally quantify what the robot could learn online from human data and how quickly it could learn it. Coupling these ideas with robot motion planning algorithms, I will demonstrate how robots can safely and automatically adapt their behavior based on how trustworthy their learned human models are. I will end this talk by taking a step back and raising the question: “What is the ‘right’ notion of safety when robots interact with people?” and discussing opportunities for how rethinking our notions of safety can capture more subtle aspects of human-robot interaction.
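
A minimal numpy sketch of the kind of Bayesian confidence monitor described above, under two simplifying assumptions that are ours, not necessarily the talk's: the human is modeled as Boltzmann-rational, with P(u | x; beta) proportional to exp(beta * Q(x, u)), and the confidence parameter beta lives on a small discrete grid:

    import numpy as np

    def update_confidence(belief, betas, q_values, u_observed):
        """Bayesian update of a belief over model confidence beta.
        q_values[u] is the learned human model's score Q(x, u) for each
        candidate action u. Posterior mass on low beta signals that the
        model is failing to explain the observed human behavior."""
        logits = betas[:, None] * q_values[None, :]            # (B, U)
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)              # Boltzmann rows
        posterior = belief * probs[:, u_observed]              # Bayes rule
        return posterior / posterior.sum()

    betas = np.array([0.1, 1.0, 10.0])        # low beta: model explains little
    belief = np.ones(3) / 3.0                 # uniform prior over beta
    q_values = np.array([2.0, 0.5, -1.0])     # model strongly prefers action 0
    belief = update_confidence(belief, betas, q_values, u_observed=2)
    # The human took the model's least-preferred action, so the posterior
    # shifts toward low beta, and a planner should fall back to cautious plans.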

Bio: Andrea Bajcsy is a postdoctoral scholar at UC Berkeley in the Electrical Engineering and Computer Science Department and an incoming Assistant Professor at the Robotics Institute at CMU (starting Fall 2023). She studies safe human-robot interaction, particularly when robots learn from and learn about people. Andrea received her Ph.D. in Electrical Engineering & Computer Science from UC Berkeley and B.S. in Computer Science at the University of Maryland, College Park.


2/10/2023

Title: Learning Meets Gravity: Robots that Embrace Dynamics from Pixels

Speaker: Prof. Shuran Song, Columbia University

Abstract: Despite the incredible capabilities (speed, repeatability) of their hardware, most robot manipulators today are deliberately programmed to avoid dynamics – moving slow enough so they can adhere to quasi-static assumptions about the world. In contrast, people frequently (and subconsciously) make use of dynamic phenomena to manipulate everyday objects – from unfurling blankets to tossing trash, to improve efficiency and physical reach range. These abilities are made possible by an intuition of physics, a cornerstone of intelligence. How do we impart the same to robots? In this talk, I will discuss how we might enable robots to leverage dynamics for manipulation in unstructured environments. Modeling the complex dynamics of unseen objects from pixels is challenging. However, by tightly integrating perception and action, we show it is possible to relax the need for accurate dynamical models, thereby allowing robots to (i) learn dynamic skills for complex objects, (ii) adapt to new scenarios using visual feedback, and (iii) use their dynamic interactions to improve their understanding of the world. By changing the way we think about dynamics – from avoiding it to embracing it – we can simplify a number of classically challenging problems, leading to new robot capabilities.

Bio: Shuran Song is an Assistant Professor in the Department of Computer Science at Columbia University. Before that, she received her Ph.D. in Computer Science from Princeton University and her B.Eng. from HKUST. Her research interests lie at the intersection of computer vision and robotics. Song’s research has been recognized through several awards, including the Best Paper Awards at RSS’22 and T-RO’20, the Best System Paper Awards at CoRL’21 and RSS’19, and best paper finalist nominations at RSS, ICRA, CVPR, and IROS. She is also a recipient of the NSF CAREER Award, as well as research awards from Microsoft, Toyota Research, Google, Amazon, JP Morgan, and the Sloan Foundation. To learn more about Shuran’s work, please visit: https://www.cs.columbia.edu/~shurans/


2/3/2023

Title: Salto-1P, Small Walkers, Spirit 40, and Soon-to-be Robots

Speaker: Prof. Justin Yim, UIUC
 
Abstract: Legged robots show great promise in navigating environments that are difficult for conventional platforms, but they do not yet have the agility, endurance, and related physical ability to deliver on this potential. This talk presents an overview of three robots that address different aspects of legged mobility (organized in increasing numbers of legs). The monoped Salto-1P explores agile leaping to enable a small robot to clear large obstacles, simple bipeds investigate how walking scales to smaller sizes, and perception and control for quadruped Spirit-40 tackle walking through entanglements. Discussion of future directions (and potential avenues for collaboration) concludes the presentation.
 
Bio: Dr. Justin Yim is a new assistant professor at UIUC in the Mechanical Science and Engineering department. He received his Ph.D. in Electrical Engineering from the University of California, Berkeley and his B.S.E. and M.S.E. from the University of Pennsylvania. Prior to starting at UIUC, he was a Computing Research Association CIFellow 2020 postdoc with Aaron Johnson at Carnegie Mellon University. Justin Yim’s research interests are in the design and control of legged robots to improve performance and understand locomotion principles. For his dissertation work developing the jumping monopod robot Salto-1P, he received best paper and best student paper awards at the IEEE/RSJ IROS and IEEE ICRA conferences.
 

1/27/2023

Title: Democratizing Robot Learning and Teaming

Speaker: Prof. Matthew Gombolay, Georgia Tech
 
Abstract: New advances in robotics and autonomy offer a promise of revitalizing final assembly manufacturing, assisting in personalized at-home healthcare, and even scaling the power of earth-bound scientists for robotic space exploration. Yet, in real-world applications, autonomy is often run in the O-F-F mode because researchers fail to understand the human in human-in-the-loop systems. In this talk, I will share exciting research we are conducting at the nexus of human factors engineering and cognitive robotics to inform the design of human-robot interaction. I will focus on our recent work on 1) enabling machines to learn skills from and model heterogeneous, suboptimal human decision-makers, 2) “white-boxing” that knowledge through explainable Artificial Intelligence (XAI) techniques, and 3) scaling to coordinated control of stochastic human-robot teams. The goal of this research is to inform the design of autonomous teammates so that users want to turn – and benefit from turning – to the O-N mode.
 
Bio: Dr. Matthew Gombolay is an Assistant Professor of Interactive Computing at the Georgia Institute of Technology. He was named the Anne and Alan Taetle Early-career Assistant Professor in 2018. He received a B.S. in Mechanical Engineering from the Johns Hopkins University in 2011, an S.M. in Aeronautics and Astronautics from MIT in 2013, and a Ph.D. in Autonomous Systems from MIT in 2017. Gombolay’s research interests span robotics, AI/ML, human-robot interaction, and operations research. Between defending his dissertation and joining the faculty at Georgia Tech, Dr. Gombolay served as technical staff at MIT Lincoln Laboratory, transitioning his research to the U.S. Navy and earning an R&D 100 Award. His publication record includes best paper awards from the American Institute of Aeronautics and Astronautics and the ACM/IEEE Conference on Human-Robot Interaction (HRI’22), as well as finalist nominations for best paper at the Conference on Robot Learning (CoRL’20) and best student paper at the American Controls Conference (ACC’20). Dr. Gombolay was selected as a DARPA Riser in 2018, received the Early Career Award from the National Fire Control Symposium, and was awarded a NASA Early Career Fellowship.
 

Fall 2022 Schedule

Link to Talk Video Recordings: https://uofi.box.com/s/nqmm8xml2t81kajmjjhuxr414ue3t3ii

  1. September 2nd – Professor Hannah Stuart (UC Berkeley)
    • Director of Embodied Dexterity Group
    • Research Interests: Dexterous manipulation; Bioinspired design; Soft and multi-material mechanisms; Skin contact conditions; Tactile sensing and haptics
  2. September 9th – Ardalan Tajbakhsh (CMU)
  3. September 16th – Professor Sehoon Ha (Georgia Tech)
    • Research Interests: Character animation, robotics, and artificial intelligence
  4. September 23rd – Professor Mark Mueller (UC Berkeley)
    • Director of HiPeR Lab
    • Research Interests: Unmanned Aerial Vehicles, dynamics and control; motion planning and coordination; state estimation and localization.
  5. September 30th – Professor Nikolay Atanasov (UC San Diego)
    • Director of Existential Robotics Laboratory (ERL)
    • Research Interests: Robotics, machine learning, control theory, optimization, and computer vision. Scientific principles for increasing the reliability, efficiency, and versatility of autonomous robot systems.
  6. October 7th – Professor Nima Fazeli (Michigan)
    • Director of MMint Lab
    • Research Interests: Robotic manipulation, inference and state estimation, physics-based learning and semantic AI, controls for hybrid systems, contact modeling for robot interactions.
  7. October 14th – Professor Pulkit Agrawal (MIT)
    • Director of Improbable AI Lab
    • Research Interests: Building machines that can automatically and continuously learn about their environment. Computational sensorimotor learning – computer vision, robotics, reinforcement learning.
  8. October 21st – Andreea Bobu (UC Berkeley)
    • PhD student in InterACT Lab
    • Research Interests: Intersection of machine learning, robotics, and human-robot interaction, with a focus on robot learning under uncertainty.
  9. October 28th – Preston Culbertson (Stanford)
    • PhD student in Multi-Robot Systems Lab
    • Research Interests: Adaptive and learning-based control, manipulation and grasping, and multi-agent interaction and coordination (especially without communication).
  10. November 4th – Professor Marynel Vazquez (Yale)
    • Director of Interactive Machines Group
    • Research Interests: Enabling situated multi-party human-robot interactions. Machine learning, artificial intelligence, social psychology, and design.
  11. November 11th – Professor Monroe Kennedy (Stanford)
    • Director of Assistive Robotics and Manipulation Lab
    • Research Interests: Collaborative robotics, specifically the development of theoretical and experimental approaches to enhance robotic autonomy and robotic effectiveness in decentralized tasks toward human-robot collaboration.
  12. November 18th – Professor Nadia Figueroa (UPenn)
    • Faculty member of GRASP Lab
    • Research Interests: Collaborative human-aware robotic systems. Intersection of machine learning, control theory, artificial intelligence, perception, and psychology – with a physical human-robot interaction perspective.
  13. December 2nd – Professor Donghyun Kim (UMass Amherst)

12/02/22

Title: Dynamic motion control of legged robots

Speaker: Dr. Donghyun Kim, University of Massachusetts Amherst

Abstract: To accomplish human- and animal-level agility in robotic systems, we must holistically understand robot hardware, real-time controls, dynamics, perception, and motion planning. Therefore, it is crucial to design control architecture fully considering both hardware and software. In this talk, I will explain our approaches to tackle the challenges in classical control (e.g., bandwidth of feedback control, uncertainty, and robustness) and high-level planning (e.g., step planning, perception, and trajectory optimization), and how the hardware limits are reflected in controller formulation. The tested robots are point-foot bipeds (Hume, Mercury), robots using liquid-cooling viscoelastic actuators (Draco), and quadruped robots using proprioceptive actuators (Mini-Cheetah). I will also present our ongoing research about a new point-foot biped robot (Pat) and a guide dog robot.

Speaker’s Bio: Donghyun joined the faculty of the College of Information and Computer Sciences at the University of Massachusetts Amherst as an Assistant Professor in 2021. Before joining UMass, he was a postdoctoral research associate in the Biomimetic Robotics Lab at MIT from 2019 to 2020, and before that in the Human-Centered Robotics Lab at the University of Texas at Austin in 2018, where he received his Ph.D. degree in 2017. He holds an MS in Mechanical Engineering from Seoul National University and a BS in Mechanical Engineering from KAIST, Korea. His work on a new viscoelastic liquid-cooled actuator received the best paper award in Transactions on Mechatronics in 2020, and his work published in Transactions on Robotics in 2016 was selected as a finalist for the best whole-body control paper and video.


11/18/22

Title: Collaborative Robots in the Wild: Challenges and Future Directions from a Human-Centric Perspective

Speaker: Dr. Nadia Figueroa, University of Pennsylvania

Abstract: Since the 1960s we have lived with the promise of one day being able to own a robot that would co-exist, collaborate, and cooperate with humans in our everyday lives. This promise has motivated a vast amount of research in recent decades on motion planning, machine learning, perception, and physical human-robot interaction (pHRI). Nevertheless, we have yet to see a truly collaborative robot navigating, manipulating objects and the environment, or physically collaborating with humans and other robots outside of labs and in the human-centric dynamic spaces we inhabit; i.e., “in-the-wild”. This bottleneck is due to a robot-centric set of assumptions about how humans interact and adapt to technology and machines. In this talk, I will introduce a set of more realistic human-centric assumptions and posit that for collaborative robots to be truly adopted in such dynamic, ever-changing environments, they must possess human-like characteristics of reactivity, compliance, safety, efficiency, and transparency. Combining these objectives is challenging, as providing a single optimal solution can be intractable and even infeasible due to problem complexity and contradicting goals. Hence, I will present possible avenues to achieve these requirements. I will show that by adopting a Dynamical System (DS) based approach for motion planning we can achieve reactive, safe, and provably stable robot behaviors while efficiently teaching the robot complex tasks with a handful of demonstrations. Further, I will show that such an approach can be extended to offer task-level reactivity and can be adopted to efficiently and incrementally learn from failures, as humans do. I will also discuss the role of compliance in collaborative robots, the allowance of soft impacts, the relaxation of the standard definition of safety in pHRI, and how these can be achieved with DS-based and optimization-based approaches. I will then talk about the importance of both end-users and designers having a holistic understanding of their robot’s behaviors, capabilities, and limitations, and present an approach that uses Bayesian posterior sampling to achieve this. The talk will end with a discussion of open challenges and future directions to achieve truly collaborative robots in-the-wild.

Speaker’s Bio: Nadia Figueroa is the Shalini and Rajeev Misra Presidential Assistant Professor in the Mechanical Engineering and Applied Mechanics (MEAM) Department at the University of Pennsylvania. She holds a secondary appointment in the Computer and Information Science (CIS) department and is a faculty advisor at the General Robotics, Automation, Sensing & Perception (GRASP) laboratory. Before joining the faculty, she was a Postdoctoral Associate in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT), advised by Prof. Julie A. Shah. She completed a Ph.D. (2019) in Robotics, Control and Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne (EPFL), advised by Prof. Aude Billard. Prior to this, she was a Research Assistant (2012-2013) at the Engineering Department of New York University Abu Dhabi (NYU-AD) and in the Institute of Robotics and Mechatronics (2011-2012) at the German Aerospace Center (DLR). She holds a B.Sc. degree in Mechatronics (2007) from Monterrey Tech (ITESM-Mexico) and an M.Sc. degree in Automation and Robotics (2012) from the Technical University of Dortmund, Germany.

Her main research interest focuses on developing collaborative human-aware robotic systems: robots that can safely and efficiently interact with humans and other robots in the human-centric dynamic spaces we inhabit. This involves research at the intersection of machine learning, control theory, artificial intelligence, perception, and psychology – with a physical human-robot interaction perspective.


11/11/22

Title: DenseTact: Calibrated Optical Tactile Sensing for the Next Generation of Robotic Manipulation

Speaker: Dr. Monroe Kennedy III, Stanford University

Abstract: Robotic dexterity stands to be the key challenge to making collaborative robots ubiquitous in home and industry environments, particularly those that require adaptive systems. The last few decades have produced many solutions in this space, including mechanical transducers (pressure sensors) that, while effective, usually suffer from limited resolution, cross-talk, and limited multi-modal sensing at every point. There are passive, soft sensors that, through high friction and form closure, envelop items to be manipulated for stable grasps; while often effective at securing a grasp, such sensors generally do not provide the dexterity needed to re-grasp, perform finger gaiting, or truly quantify the stability of a grasp beyond basic immobilization observed through action. Finally, optical tactile sensors have presented many new avenues for research, with leading designs being GelSight and GelSlim for surface reconstruction and force estimation. While optical tactile sensors stand to be robotics’ best answer so far to sensing sensitivity that approaches anthropomorphic performance, there is still a noticeable gap in robotics research when it comes to performing manipulation tasks, with end-to-end solutions struggling to extend to new complex manipulation tasks without significant (and often unscalable) training.

In this talk, I will present DenseTact, an optical tactile sensor that provides calibrated surface reconstruction and forces for a single fingertip. This calibrated, anthropomorphically inspired fingertip design will allow for modularization of the grasping process and open new avenues of research in robotic manipulation towards collaborative robotic applications.

Speaker’s Bio: Monroe Kennedy III is an assistant professor in Mechanical Engineering and, by courtesy, in Computer Science at Stanford University. Prof. Kennedy is the recipient of the NSF Faculty Early Career Development (CAREER) Award. He leads the Assistive Robotics and Manipulation laboratory (arm.stanford.edu), which develops collaborative robotic assistants by combining modeling and control techniques with machine learning tools. Together, these techniques will improve robotic performance for tasks that are highly dynamic, require dexterity, have considerable complexity, and require human-robot collaboration. He received his Ph.D. in Mechanical Engineering and Applied Mechanics and his Master’s in Robotics from the University of Pennsylvania, where he was a member of the GRASP Lab.


11/04/22

Title: Multi-Party Human-Robot Interaction: Towards Generalizable Data-Driven Models with Graph State Abstractions

Speaker: Dr. Marynel Vázquez, Yale University

Abstract: Many real-world applications require that robots handle the complexity of multi-party social encounters, e.g., delivery robots may need to navigate through crowds, robots in manufacturing settings may need to coordinate their actions with those of human coworkers, and robots in educational environments may help multiple people practice and improve their skills. How can we enable robots to effectively take part in these social interactions? At first glance, multi-party interactions may be seen as a trivial generalization of one-on-one human-robot interactions, suggesting no special consideration. Unfortunately, this approach is limited in practice because it ignores higher-order effects, like group factors, that often drive human behavior in multi-party Human-Robot Interaction (HRI).

In this talk, I will describe two research directions that we believe are important to advance multi-party HRI. One direction focuses on understanding group dynamics and social group phenomena from an experimental perspective. The other one focuses on leveraging graph state abstractions and structured, data-driven methods for reasoning about individual, interpersonal and group-level factors relevant to these interactions. Examples of these research directions include efforts to motivate prosocial human behavior in HRI, balance human participation in conversations, and improve spatial reasoning for robots in human environments. As part of this talk, I will also describe our recent efforts to scale HRI data collection for early system development and testing via online interactive surveys. We have begun to explore this idea in the context of social robot navigation but, thanks to advances in game development engines, it could be easily applied to other HRI application domains.

Speaker’s Bio: Marynel Vázquez is an Assistant Professor in Yale’s Computer Science Department, where she leads the Interactive Machines Group. Her research focuses on Human-Robot Interaction (HRI), especially in multi-party and group settings. Marynel is a recipient of the 2022 NSF CAREER Award and two Amazon Research Awards. Her work has been recognized with nominations to Best Paper awards at HRI 2021, IROS 2018, and RO-MAN 2016, as well as a Best Student Paper award at RO-MAN 2022. Prior to Yale, Marynel was a Post-Doctoral Scholar at the Stanford Vision & Learning Lab and obtained her M.S. and Ph.D. in Robotics from Carnegie Mellon University, where she was a collaborator of Disney Research. Before then, she received her bachelor’s degree in Computer Engineering from Universidad Simón Bolívar in Caracas, Venezuela.


10/28/22

Title: Embracing uncertainty: Risk-sensitive and adaptive methods for manipulation and robot navigation

Speaker: Preston Culbertson, California Institute of Technology 

Abstract: As robots continue to move from controlled environments (e.g., assembly lines) into unstructured ones such as roadways, hospitals, and homes, a key open question for roboticists is how to certify the safety of such systems under the wide range of environmental and perceptual conditions robots can encounter in the wild. In this talk, I will argue for a “risk-aware” approach to robot safety, and present methods for robot manipulation and navigation which account for uncertainty through adaptation and risk-awareness. First, I will present a distributed adaptive controller for collaborative manipulation, which allows a team of robots to adapt to parametric uncertainties to move an unknown rigid body along a desired trajectory in SE(3). In the second half of the talk, we will discuss Neural Radiance Fields (NeRFs), a “neural implicit” scene representation that can be generated using only posed RGB images. I will present our recent work leveraging NeRFs for both visual navigation and manipulation, and show how their probabilistic representation of occupancy/object geometry can be used to enable risk-sensitive planning across a variety of problem domains. I will conclude with some broader thoughts on “risk-awareness” and next directions for enabling safety under perceptual uncertainty.
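
One concrete way a NeRF's probabilistic occupancy can feed risk-sensitive planning, sketched in Python (numpy): the standard volume-rendering transmittance along a path yields a collision probability that a chance-constrained planner can threshold. The densities, segment lengths, and 5% risk bound below are made-up illustrations, not the method from the talk:

    import numpy as np

    def collision_probability(sigma, deltas):
        """NeRF volume-rendering model: T = exp(-sum_i sigma_i * delta_i)
        is the probability of traversing all segments without hitting
        matter, so 1 - T is the chance the path intersects geometry."""
        transmittance = np.exp(-np.sum(sigma * deltas))
        return 1.0 - transmittance

    # Densities sigma_i queried from a trained NeRF at points along a
    # candidate path, with segment lengths delta_i (values invented here).
    sigma = np.array([0.01, 0.02, 0.50, 0.03])    # a dense region mid-path
    deltas = np.full(4, 0.25)
    p_collide = collision_probability(sigma, deltas)
    is_safe = p_collide < 0.05                    # 5% chance constraint

A planner can evaluate this bound for each candidate trajectory and keep only those within the risk budget, trading conservatism against progress, which is the "risk-aware" trade-off the abstract argues for.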

Speaker’s Bio: Preston Culbertson is a postdoctoral scholar in the AMBER Lab at Caltech, researching safe methods for robot planning and control using onboard vision. Preston completed his PhD at Stanford University, working under Prof. Mac Schwager, where his research focused on collaborative manipulation and assembly with teams of robots. In particular, Preston’s research interests include integrating modern techniques for computer vision with methods for robot control and planning that can provide safety guarantees. Preston received the NASA Space Technology Research Fellowship (NSTRF) and the “Best Manipulation Paper” award at ICRA 2018.


10/21/22

Title: Aligning Robot Representations with Humans

Speaker: Andreea Bobu, University of California Berkeley

Abstract: Robots deployed in the real world will interact with many different humans to perform many different tasks in their lifetime, which makes it difficult (perhaps even impossible) for designers to specify all the aspects that might matter ahead of time. Instead, robots can extract these aspects implicitly when they learn to perform new tasks from their users’ input. The challenge is that this often results in representations which pick up on spurious correlations in the data and fail to capture the human’s representation of what matters for the task, resulting in behaviors that do not generalize to new scenarios. Consequently, the representation, or abstraction, of the tasks the human hopes for the robot to perform may be misaligned with what the robot knows. In my work, I explore ways in which robots can align their representations with those of the humans they interact with so that they can more effectively learn from their input. In this talk I focus on a divide and conquer approach to the robot learning problem: explicitly focus human input on teaching robots good representations before using them for learning downstream tasks. We accomplish this by investigating how robots can reason about the uncertainty in their current representation, explicitly query humans for feature-specific feedback to improve it, then use task-specific input to learn behaviors on top of the new representation.

Speaker’s Bio: Andreea Bobu is a Ph.D. candidate at the University of California Berkeley in the Electrical Engineering and Computer Science Department, advised by Professor Anca Dragan. Her research focuses on aligning robot and human representations for more seamless interaction between them. In particular, Andreea studies how robots can learn more efficiently from human feedback by explicitly focusing on learning good intermediate human-guided representations before using them for task learning. Prior to her Ph.D., she earned her Bachelor’s degree in Computer Science and Engineering from MIT in 2017. She is the recipient of the Apple AI/ML Ph.D. fellowship, is an R:SS and HRI Pioneer, won the Best Paper Award at HRI 2020, and has worked at NVIDIA Research.


10/14/22

Title: Coming of Age of Robot Learning

Speaker: Dr. Pulkit Agrawal, Massachusetts Institute of Technology 

Abstract: I will discuss our progress in building robotic systems that are agile, dexterous, and real-world-ready in their ability to function in diverse scenarios. The key technical challenge of control in contact-rich problems is addressed using machine learning methods and the results will be illustrated via the following case studies:
(i) a dexterous manipulation system capable of re-orienting novel objects.
(ii) an agile quadruped robot operating on diverse natural terrains.
(iii) a system that only requires a few task demonstrations of an object manipulation task to generalize to new object instances in new configurations.

Speaker’s Bio: Pulkit is the Steven and Renee Finn Chair Assistant Professor in the Department of Electrical Engineering and Computer Science at MIT, where he directs the Improbable AI Lab. His research interests span robotics, deep learning, computer vision, and reinforcement learning. His work received the Best Paper Award at the Conference on Robot Learning 2021 and the Best Student Paper Award at the Conference on Computer Supported Collaborative Learning 2011. He is a recipient of the Sony Faculty Research Award, the Salesforce Research Award, the Amazon Research Award, and a Fulbright fellowship, among others. He received his Ph.D. from UC Berkeley and his Bachelor’s degree from IIT Kanpur, where he was awarded the Director’s Gold Medal. He also co-founded SafelyYou Inc.


10/7/22

Title: Deformable Object Representations and Tactile Control for Contact Rich Robotic Tool-Use

Speaker: Dr. Nima Fazeli, University of Michigan

Abstract: The next generation of robotic systems will be in our homes and workplaces, working in highly unstructured environments and manipulating complex deformable objects. The success of these systems hinges on their ability to reason over the complex dynamics of these objects and carefully control their interactions using the sense of touch. To this end, I’ll first present our recent advances in multimodal neural implicit representations of deformable objects. These representations integrate sight and touch seamlessly to model object deformations and are uniquely well equipped to handle robotic sensing modalities. Second, I’ll present our recent progress on tactile control with high-resolution and highly deformable tactile sensors. Specifically, I’ll discuss our work leveraging the Soft Bubbles to gracefully manipulate tools, where we handle high-dimensional tactile signatures and the complex dynamics introduced by the sensor compliance. I’ll end the talk with future directions in tactile control and deformables and present some of the open challenges in these domains.

Speaker’s Bio: Nima Fazeli is an Assistant Professor of Robotics and Assistant Professor of Mechanical Engineering at the University of Michigan and the director of the Manipulation and Machine Intelligence (MMint) Lab. Prof. Fazeli’s primary research interest is enabling intelligent and dexterous robotic manipulation with emphasis on the tight integration of mechanics, perception, controls, learning, and planning. Prof. Fazeli received his PhD from MIT (2019) working with Prof. Alberto Rodriguez, where he also conducted his postdoctoral training. He received his MSc from the University of Maryland at College Park (2014) where he spent most of his time developing models of the human (and, on occasion, swine) arterial tree for cardiovascular disease, diabetes, and cancer diagnoses. His research has been supported by the Rohsenow Fellowship and featured in outlets such as The New York Times, CBS, CNN, and the BBC.


09/30/22

Title: Multi-Robot Metric-Semantic Mapping

Speaker: Dr. Nikolay Atanasov, University of California San Diego

Abstract: The ability of autonomous robot systems to perform reliably and effectively in real-world settings depends on precise understanding of the geometry and semantics of their environment based on streaming sensor observations. This talk will present estimation techniques for sparse object-level mapping, dense surface-level mapping, and distributed multi-robot mapping. The talk will highlight object shape models, octree and Gaussian process surface models, and distributed inference in time-varying graphs.
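
As a toy illustration of the distributed inference ingredient, the sketch below runs linear consensus averaging in Python (numpy): each robot repeatedly blends its local map estimate with those of its current neighbors, and the network converges to the global average even if the graph changes between steps. This is a generic consensus sketch under our own assumptions, not the estimators presented in the talk:

    import numpy as np

    def consensus_step(estimates, adj, eps=0.2):
        """One distributed averaging step (Laplacian update). estimates
        is (n_robots, n_cells); adj is the symmetric communication graph.
        Time-varying graphs are handled by passing a new adj each call."""
        degrees = adj.sum(axis=1, keepdims=True)
        laplacian_term = degrees * estimates - adj @ estimates
        return estimates - eps * laplacian_term     # stable if eps * max_degree < 1

    rng = np.random.default_rng(2)
    estimates = rng.standard_normal((4, 10))        # 4 robots, 10 map cells
    ring = np.array([[0, 1, 0, 1],
                     [1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [1, 0, 1, 0]], dtype=float)    # ring communication graph
    for _ in range(100):
        estimates = consensus_step(estimates, ring)
    # All rows now agree (approximately) on the average of the initial maps.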

Speaker’s Bio: Nikolay Atanasov is an Assistant Professor of Electrical and Computer Engineering at the University of California San Diego, La Jolla, CA, USA. He obtained a B.S. degree in Electrical Engineering from Trinity College, Hartford, CT, USA in 2008 and M.S. and Ph.D. degrees in Electrical and Systems Engineering from University of Pennsylvania, Philadelphia, PA, USA in 2012 and 2015, respectively. His research focuses on robotics, control theory, and machine learning, applied to active perception problems for mobile robots. He works on probabilistic models that unify geometry and semantics in simultaneous localization and mapping (SLAM) and on optimal control and reinforcement learning of robot motion that minimizes uncertainty in these models. Dr. Atanasov’s work has been recognized by the Joseph and Rosaline Wolf award for the best Ph.D. dissertation in Electrical and Systems Engineering at the University of Pennsylvania in 2015, the best conference paper award at the IEEE International Conference on Robotics and Automation (ICRA) in 2017, and an NSF CAREER award in 2021.


09/23/22

Title: Small drones in tight spaces: New vehicle designs and fast algorithms

Speaker: Dr. Mark W. Mueller, University of California at Berkeley 

Abstract: Aerial robots have become ubiquitous, but (like most robots) they still struggle to operate at high speed in unstructured, cramped environments. I will present some of our group’s recent work on pushing vehicles’ capabilities with two distinct approaches. First, I will present algorithmic work aiming to enable motion planning at high speed through unstructured environments, with a specific focus on standard multicopters. The second approach is to modify the vehicle’s physical characteristics, creating a fundamentally different vehicle for which the problem becomes easier due to the changed physics. Specific designs presented will include a highly collision-resilient drone; a passively morphing drone capable of significantly reducing its size; and a preview of a passively morphing system capable of reducing its aerodynamic loads.

Speaker’s Bio: Mark W. Mueller is an assistant professor of Mechanical Engineering at UC Berkeley, where he leads the High Performance Robotics Laboratory (HiPeRLab). His research focuses on the design and control of aerial robots. He joined the mechanical engineering department at UC Berkeley in September 2016, after spending some time at Verity Studios working on a drone entertainment system installed in the biggest theater on New York’s Broadway. He completed his PhD studies at ETH Zurich in Switzerland in 2015, and received an MSc there in 2011. He received a bachelor’s degree in mechanical engineering from the University of Pretoria in South Africa.


09/16/22

Title: Learning to walk for real-world missions

Speaker: Dr. Sehoon Ha, Georgia Institute of Technology 

Abstract: Intelligent robot companions have the potential to improve the quality of human life significantly by changing how we live, work, and play. While recent advances in software and hardware opened a new horizon of robotics, state-of-the-art robots are yet far from being blended into our daily lives due to the lack of human-level scene understanding, motion control, safety, and rich interactions. I envision legged robots as intelligent machines beyond simple walking platforms, which can execute a variety of real-world motor tasks in human environments, such as home arrangements, last-mile delivery, and assistive tasks for disabled people. In this talk, I will discuss relevant multi-disciplinary research topics, including deep reinforcement learning, control algorithms, scalable learning pipelines, and sim-to-real techniques.

Speaker’s Bio: Sehoon Ha is currently an assistant professor at the Georgia Institute of Technology. Before joining Georgia Tech, he was a research scientist at Google and Disney Research Pittsburgh. He received his Ph.D. degree in Computer Science from the Georgia Institute of Technology. His research interests lie at the intersection between computer graphics and robotics, including physics-based animation, deep reinforcement learning, and computational robot design. His work has been published at top-tier venues including ACM Transactions on Graphics, IEEE Transactions on Robotics, and the International Journal of Robotics Research, nominated for best conference paper (top 3) at Robotics: Science and Systems, and featured in the popular press, including IEEE Spectrum, MIT Technology Review, PBS NewsHour, and Wired.


09/09/22

Title: How to Become a Robotics Engineer?

Speaker: Ardalan Tajbakhsh, Carnegie Mellon University

Abstract: In the past few years, the robotics industry has been growing rapidly due to the intersection of technology maturity and market demand. This exponential growth has given rise to many advanced multidisciplinary roles within the industry that often require a unique combination of skills in mathematics, physics, software engineering, and algorithms. While traditional robotics curricula broadly cover the foundations of the field, it can be quite challenging for new graduates to effectively focus their preparation towards specific industry roles without feeling overwhelmed. The first part of this talk will provide a clear, actionable, and comprehensive roadmap for becoming a robotics engineer based on the most recent roles in the industry. In the second part, the interview structure for such roles as well as effective interviewing strategies will be presented. This talk is targeted towards undergraduate and graduate students in mechanical, electrical, aerospace, and robotics engineering who are looking to enter the robotics industry in the near future.

Speaker’s Bio: Ardalan Tajbakhsh is currently a PhD candidate at Carnegie Mellon University, where his research focuses on dynamic multi-agent navigation in unstructured environments for real-world applications like warehouse fulfillment, hospital material transportation, environmental monitoring, and disaster recovery. His background spans a healthy mix of academic research and industry experience in robotics. Prior to his PhD, he was a robotics software engineer at Zebra Technologies, where he led the algorithm development efforts for multi-robot coordination in warehouse fulfillment. He has previously held other industry roles at iRobot and Virgin Hyperloop One. Ardalan received an undergraduate degree in Mechanical Engineering with honors from UIUC and a master’s degree in Mechanical Engineering with a robotics concentration from CMU.


09/02/22

Title: Embodying Dexterity: Designing for contact in robotic grasping and manipulation systems

Speaker: Dr. Hannah Stuart, University of California at Berkeley

Abstract: For robots to perform helpful manual tasks, they must be able to physically interact with the real world. The ability of robots to grasp and manipulate often depends on the strength and reliability of contact conditions, e.g., friction. In this talk, I will introduce how my lab is developing tools for “messy” or adversarial contact conditions to support the design of more capable systems. Coupled with prototyping and experimental exploration, we generate new systems that better embody desired capabilities. I will draw upon recent examples including how we are (1) harnessing fluid flow in soft grippers to improve and monitor grasp state in unique ways, (2) modeling granular interaction forces to support new capabilities in sand, and (3) exploring assistive wearable device topologies for collaborative grasping.

Speaker’s Bio:  Dr. Hannah Stuart is the Don M. Cunningham Assistant Professor in the Department of Mechanical Engineering at the University of California at Berkeley. She received her BS in Mechanical Engineering at the George Washington University in 2011, and her MS and PhD in Mechanical Engineering at Stanford University in 2013 and 2018, respectively. Recent awards include the NASA Early Career Faculty grant and Johnson & Johnson Women in STEM2D grant.


Spring Semester 2022:


04/29/22 – Guest Talk

Title: Distributed Perception and Learning Between Robots and the Cloud

Speaker: Dr. Sandeep Chinchali, University of Texas, Austin

Abstract: Augmenting robotic intelligence with cloud connectivity is considered one of the most promising solutions to cope with growing volumes of rich robotic sensory data and increasingly complex perception and decision-making tasks. While the benefits of cloud robotics have long been envisioned, there is still a lack of flexible methods to trade off the benefits of cloud computing against the end-to-end system costs of network delay, cloud storage, human annotation time, and cloud-computing time. To address this need, I will introduce decision-theoretic algorithms that allow robots to significantly transcend their on-board perception capabilities by using cloud computing, but in a low-cost, fault-tolerant manner. The utility of these algorithms will be demonstrated on months of field data and experiments on state-of-the-art embedded deep learning hardware.

Specifically, for compute-and-power-limited robots, I will present a lightweight model selection algorithm that learns when a robot should exploit low-latency on-board computation, or, when highly uncertain, query a more accurate cloud model. Then, I will present a collaborative learning algorithm that allows a diversity of robots to mine their real-time sensory streams for valuable training examples to send to the cloud for model improvement. I will conclude this talk by describing my group’s research efforts to co-design the representation of rich robotic sensory data with networked inference and control tasks for concise, task-relevant representations.
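To make the model-selection idea concrete, here is a minimal sketch of an uncertainty-gated offloading rule: use the fast on-board prediction when it is confident, and pay the network cost to query the cloud model otherwise. The entropy threshold and function names are illustrative assumptions, not the algorithm from the talk, which learns this decision rather than hard-coding it.

```python
import numpy as np

def softmax_entropy(logits):
    """Predictive entropy of a classifier's output, used as an uncertainty proxy."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def select_model(robot_logits, entropy_threshold, query_cloud):
    """Use the fast on-board prediction when confident; otherwise pay the
    network cost and query the (more accurate) cloud model."""
    if softmax_entropy(robot_logits) < entropy_threshold:
        return np.argmax(robot_logits), "on-board"
    return query_cloud(), "cloud"
```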

Speaker’s Bio: Sandeep Chinchali is an assistant professor in UT Austin’s ECE department and Robotics Consortium. He completed his PhD in computer science at Stanford, working on distributed perception and learning between robots and the cloud. Previously, he was the first principal data scientist at Uhana Inc. (acquired by VMWare), a Stanford startup working on data-driven optimization of cellular networks. Prior to Stanford, he graduated from Caltech, where he worked on robotics at NASA’s Jet Propulsion Lab (JPL). His paper on cloud robotics was a finalist for best student paper at Robotics: Science and Systems and his research has been funded by Cisco, NSF, the Office of Naval Research, and Lockheed Martin.

04/22/22 – Guest Talk

Title: Learning to Walk via Rapid Adaptation

Speaker: Ashish Kumar, Ph.D. Student, University of California, Berkeley

Abstract: Legged locomotion is commonly studied and programmed as a discrete set of structured gait patterns, like walk, trot, gallop. However, studies of children learning to walk (Adolph et al.) show that real-world locomotion is often quite unstructured and more like “bouts of intermittent steps”. We have developed a general approach to walking which is built on learning on varied terrains in simulation and then fast online adaptation (fractions of a second) in the real world. This is made possible by our Rapid Motor Adaptation (RMA) algorithm. RMA consists of two components: a base policy and an adaptation module, both of which can be trained in simulation. We thus learn walking policies that are much more flexible and adaptable. In our set-up, gaits emerge as a consequence of minimizing energy consumption at different target speeds, consistent with various animal motor studies.

You can see our robot walking here.
The project page is here.
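The two-component structure described in the abstract can be sketched schematically as follows. This is a paraphrase of the published RMA design, a base policy conditioned on a low-dimensional extrinsics latent plus an adaptation module that regresses that latent from recent state-action history; layer sizes and names here are illustrative, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class BasePolicy(nn.Module):
    """pi(a | state, z): trained in simulation, with privileged environment
    parameters encoded into a low-dimensional extrinsics latent z."""
    def __init__(self, state_dim, z_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + z_dim, 256), nn.ELU(),
            nn.Linear(256, act_dim))

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))

class AdaptationModule(nn.Module):
    """Estimates z online from a short history of states and actions,
    so no privileged information is needed at deployment."""
    def __init__(self, state_dim, act_dim, hist_len, z_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear((state_dim + act_dim) * hist_len, 256), nn.ELU(),
            nn.Linear(256, z_dim))

    def forward(self, history):  # history: (batch, hist_len * (state_dim + act_dim))
        return self.net(history)
```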

04/15/22 – Guest Talk

Title: Trust in Multi-Robot Systems and Achieving Resilient Coordination

Speaker: Dr. Stephanie Gil, Harvard University

Abstract: Our understanding of multi-robot coordination and control has experienced great advances to the point where deploying multi-robot systems in the near future seems to be a feasible reality. However, many of these algorithms are vulnerable to non-cooperation and/or malicious attacks that limit their practicality in real-world settings. An example is the consensus problem where classical results hold that agreement cannot be reached when malicious agents make up more than half of the network connectivity; this quickly leads to limitations in the practicality of many multi-robot coordination tasks. However, with the growing prevalence of cyber-physical systems come novel opportunities for detecting attacks by using cross-validation with physical channels of information. In this talk we consider the class of problems where the probability of a particular (i,j) link being trustworthy is available as a random variable. We refer to these as “stochastic observations of trust.” We show that under this model, strong performance guarantees such as convergence for the consensus problem can be recovered, even in the case where the number of malicious agents is greater than ½ of the network connectivity and consensus would otherwise fail. Moreover, under this model we can reason about the deviation from the nominal (no attack) consensus value and the rate of achieving consensus. Finally, we make the case for the importance of deriving such stochastic observations of trust for cyber-physical systems and we demonstrate one such example for the Sybil Attack that uses wireless communication channels to arrive at the desired observations of trust. In this way our results demonstrate the promise of exploiting trust to provide a novel perspective on achieving resilient coordination in multi-robot systems.
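As a toy illustration of how stochastic trust observations can enter a consensus protocol, the sketch below down-weights each link by a running estimate of its trustworthiness. This is an invented minimal example for intuition only; the guarantees described in the abstract come from a considerably more careful treatment.

```python
import numpy as np

def trusted_consensus_step(x, alpha, neighbors, trust_prob):
    """One consensus iteration where each (i, j) link is weighted by the
    current estimate that neighbor j is trustworthy (a toy sketch).
    x: (n,) agent values; trust_prob[i][j]: running mean of the stochastic
    trust observations for link (i, j); alpha: step size."""
    x_new = x.copy()
    for i in range(len(x)):
        for j in neighbors[i]:
            w = trust_prob[i][j]  # down-weight suspected malicious links
            x_new[i] += alpha * w * (x[j] - x[i])
    return x_new
```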

Speaker’s Bio: Stephanie is an Assistant Professor in the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University. Her work centers around trust and coordination in multi-robot systems for which she has received the Office of Naval Research Young Investigator award (2021) and the National Science Foundation CAREER award (2019). She has also been selected as a 2020 Sloan Research Fellow for her contributions at the intersection of robotics and communication. She has held a Visiting Assistant Professor position at Stanford University during the summer of 2019, and an Assistant Professorship at Arizona State University from 2018-2020. She completed her Ph.D. work (2014) on multi-robot coordination and control and her M.S. work (2009) on system identification and model learning. At MIT she collaborated extensively with the wireless communications group NetMIT, the result of which were two U.S. patents recently awarded in adaptive heterogeneous networks for multi-robot systems and accurate indoor positioning using Wi-Fi. She completed her B.S. at Cornell University in 2006.

04/08/22 – Student Talks

Title: Underwater Vehicle Navigation and Pipeline Inspection using Fuzzy Logic

Speaker: I-Chen Sang, University of Illinois at Urbana-Champaign

Abstract: Underwater pipeline inspection is becoming a crucial topic in the offshore subsea inspection industry. ROVs (Remotely Operated Vehicles) can play an important role in fields like the military, ocean science, aquaculture, shipping, and energy. However, using ROVs for inspection is not cost-effective, and the fixed leak detection sensors mounted along the pipeline have limited precision. Therefore, we propose a navigation system using an AUV (Autonomous Underwater Vehicle) to increase the position resolution of leak detection and lower the inspection cost. In a ROS/Gazebo-based simulation environment, we navigate the AUV with a fuzzy controller that takes as input navigation errors derived from both camera and sonar sensors. When released away from the pipeline, the AUV is able to navigate towards the pipeline and then cruise along it. Additionally, with a chemical concentration sensor mounted on the AUV, it showed the capability to complete pipeline inspection and report the leak point.
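For readers new to fuzzy control, the sketch below shows the basic machinery: triangular membership functions fuzzify an error signal, a small rule base maps memberships to commands, and a weighted average defuzzifies the result. The membership functions, rules, and gains are invented for illustration and are not those of the controller in the talk.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_heading_controller(cross_track_err):
    """Map a normalized cross-track error in [-1, 1] to a yaw-rate command
    via three rules (left / centered / right), defuzzified by a weighted
    average of the rule outputs."""
    mu = {"left":   tri(cross_track_err, -1.5, -1.0, 0.0),
          "center": tri(cross_track_err, -1.0,  0.0, 1.0),
          "right":  tri(cross_track_err,  0.0,  1.0, 1.5)}
    yaw_cmd = {"left": +0.5, "center": 0.0, "right": -0.5}  # rad/s, steer back
    num = sum(mu[k] * yaw_cmd[k] for k in mu)
    den = sum(mu.values()) + 1e-9
    return num / den
```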

Speaker’s Bio: I am a Ph.D. student in the Department of Industrial and Systems Engineering and started working in the AUVSL in January 2021. I hold B.Sc. and M.Sc. degrees in physics and had five years of work experience in the defense industry before joining U of I. My current concentration is in systems design and manufacturing. The focus of my research is on perception algorithm development for autonomous vehicles. I am currently working on ground vehicle lane detection using adaptive thresholding algorithms.

Title: A CNN Based Vision-Proprioception Fusion Method for Robust UGV Terrain Classification

Speaker: Yu Chen, University of Illinois at Urbana-Champaign

Abstract: The ability of ground vehicles to identify terrain types and characteristics can help provide more accurate localization and information-rich mapping solutions. Previous studies have shown the possibility of classifying terrain types based on proprioceptive sensors that monitor wheel-terrain interactions. However, most methods only work well when very strict motion restrictions are imposed, such as driving in a straight path at constant speed, making them difficult to deploy on real-world field robotic missions. To lift this restriction, this paper proposes a fast, compact, and motion-robust proprioception-based terrain classification method that uses common on-board UGV sensors and a 1D Convolutional Neural Network (CNN) model. The accuracy of this model was further improved by fusing it with a vision-based CNN that classifies terrain by its appearance. Experimental results indicated that the final fusion models were highly robust, with over 93% accuracy under various lighting conditions and motion maneuvers.
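A proprioceptive 1D CNN of the kind described can be quite compact; the sketch below gives a generic PyTorch layout operating on a fixed window of sensor channels. Channel counts, window handling, and class count are placeholders rather than the paper's actual architecture, and the trailing comment shows one simple (assumed) way a late fusion with a vision branch could be wired.

```python
import torch
import torch.nn as nn

class ProprioTerrainCNN(nn.Module):
    """1D CNN over a fixed window of proprioceptive channels (e.g., wheel
    speeds, IMU, motor currents); all sizes here are illustrative."""
    def __init__(self, in_channels=8, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

# A simple late fusion with a vision branch could average the two logit vectors:
# fused_logits = 0.5 * proprio_logits + 0.5 * vision_logits
```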

Yu Chen‘s Bio: I am a Ph.D. student at U of I majoring in Mechanical Science and Engineering, working under the leadership of Prof. William Robert Norris and Prof. Elizabeth T. Hsiao-Wecksler. I gained my fair share of knowledge in manufacturing, mechanical design, and structural analysis during my undergraduate days at Michigan State. For my graduate studies, I am focusing my interest and energy on robot perception and dynamic control. Currently, I am working on developing efficient CNN fusion models to help robots gain higher accuracy and robustness when classifying terrain types and detecting obstacles.

04/01/22 – Guest Talk

Title: Bridging the Gap Between Safety and Real-Time Performance during Trajectory Optimization: Reachability-based Trajectory Design

Speaker: Ram Vasudevan, University of Michigan

Abstract: Autonomous systems offer the promise of providing greater safety and access. However, this positive impact will only be achieved if the underlying algorithms that control such systems can be certified to behave robustly. This talk describes a technique called Reachability-based Trajectory Design, which constructs a parameterized representation of the forward reachable set and then uses it in concert with predictions to enable real-time, certified collision checking. This approach, which is guaranteed to generate not-at-fault behavior, is demonstrated across a variety of real-world platforms including ground vehicles, manipulators, and walking robots.
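The core safety logic is easy to caricature: over-approximate everywhere the robot could be for a given trajectory parameter, and commit to that parameter only if the over-approximation misses all predicted obstacles. The interval arithmetic below is a deliberately crude stand-in for the parameterized reachable sets that Reachability-based Trajectory Design actually precomputes offline; treat every name as illustrative.

```python
import numpy as np

def reachable_box(x0, v_bounds, t):
    """Axis-aligned interval over-approximation of where the robot can be
    within a horizon t, given per-axis velocity bounds (shape (d, 2)).
    A crude stand-in for precomputed, parameterized reachable sets."""
    v = np.asarray(v_bounds, dtype=float)
    lo = x0 + np.minimum(v[:, 0] * t, 0.0)  # include the start point
    hi = x0 + np.maximum(v[:, 1] * t, 0.0)
    return lo, hi

def certified_collision_free(lo, hi, obs_lo, obs_hi):
    """Commit to a trajectory parameter only if its reachable box cannot
    intersect the predicted obstacle box (disjoint on at least one axis)."""
    return bool(np.any(hi < obs_lo) or np.any(lo > obs_hi))
```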

Speaker’s Bio: Ram Vasudevan is an assistant professor in Mechanical Engineering and the Robotics Institute at the University of Michigan. He received a BS in Electrical Engineering and Computer Sciences, an MS degree in Electrical Engineering, and a PhD in Electrical Engineering, all from the University of California, Berkeley. He is a recipient of the NSF CAREER Award, the ONR Young Investigator Award, and the 1938E Award. His work has received best paper awards at the IEEE Conference on Robotics and Automation, the ASME Dynamic Systems and Control Conference, and the IEEE OCEANS Conference, and has been a finalist for best paper at Robotics: Science and Systems.

03/25/22 – Guest Talk

Title: Toward the Development of Highly Adaptive Legged Robots

Speaker: Quan Nguyen, University of Southern California

Abstract: Deploying legged robots in real-world applications will require fast adaptation to unknown terrain and model uncertainty. Model uncertainty could come from unknown robot dynamics, external disturbances, interaction with other humans or robots, or unknown parameters of contact models or terrain properties. In this talk, I will first present our recent work on adaptive control and adaptive safety-critical control for legged locomotion under substantial model uncertainty. In these results, we focus on the application of legged robots walking on rough terrain while carrying a heavy load. I will then talk about our solution for trajectory optimization that allows legged robots to adapt to a wide variety of challenging terrain. This talk will also discuss the combination of control, trajectory optimization, and reinforcement learning toward achieving long-term adaptation in both control actions and trajectory planning for legged robots.

Speaker’s Bio: Quan Nguyen is an Assistant Professor of Aerospace and Mechanical Engineering at the University of Southern California. Prior to joining USC, he was a Postdoctoral Associate in the Biomimetic Robotics Lab at the Massachusetts Institute of Technology (MIT). He received his Ph.D. from Carnegie Mellon University (CMU) in 2017 with the Best Dissertation Award.

His research interests span different control and optimization approaches for highly dynamic robotics, including nonlinear control, trajectory optimization, real-time optimization-based control, and robust and adaptive control. His work on the bipedal robot ATRIAS walking on stepping stones was featured in IEEE Spectrum, TechCrunch, TechXplore, and Digital Trends. His work on the MIT Cheetah 3 robot leaping onto a desk was featured widely in major media outlets, including CNN, BBC, NBC, and ABC. Nguyen won the Best Presentation of the Session at the 2016 American Control Conference (ACC) and was a Best Systems Paper finalist at the 2017 Robotics: Science and Systems conference (RSS). Nguyen is a recipient of the 2020 Charles Lee Powell Foundation Faculty Research Award.

03/11/22 – Guest Talk

Title: Developing and Deploying Situational Awareness in Autonomous Robotic Systems

Speaker: Philip Dames, Temple University

Abstract: Robotic systems must possess sufficient situational awareness in order to successfully operate in complex and dynamic real-world environments, meaning they must be able to perceive objects in their surroundings, comprehend their meaning, and predict the future state of the environment. In this talk, I will first describe how multi-target tracking (MTT) algorithms can provide mobile robots with this awareness, including our recent results that extend classical MTT approaches to include semantic object labels. Next, I will discuss two key applications of MTT to mobile robotics. The first problem is distributed target search and tracking. To solve this, we develop a distributed MTT framework, allowing robots to estimate, in real time, the relative importance of each portion of the environment, and dynamic tessellation schemes, which account for uncertainty in the pose of each robot, provide collision avoidance, and automatically balance task assignment in a heterogeneous team. The second problem is autonomous navigation through crowded, dynamic environments. To solve this, we develop a novel neural network-based control policy that takes as its input the target tracks from an MTT, unlike previous approaches which only rely on raw sensor data. We show that our policy, trained entirely in one simulated environment, generalizes well to new situations, including a real-world robot.

Speaker’s Bio: Philip Dames is an Assistant Professor of Mechanical Engineering at Temple University, where he directs the Temple Robotics and Artificial Intelligence Lab (TRAIL). Prior to joining Temple, he was a Postdoctoral Researcher in Electrical and Systems Engineering at the University of Pennsylvania. He received his PhD in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania in 2015, and his BS and MS degrees in Mechanical Engineering from Northwestern University in 2010.


03/04/22 – Student Talks

Title: GRILC: Gradient-based Reprogrammable Iterative Learning Control for Autonomous Systems 

Speaker: Kuan-Yu Tseng, University of Illinois at Urbana-Champaign

Abstract: We propose a novel gradient-based reprogrammable iterative learning control (GRILC) framework for autonomous systems. Performance of trajectory following in autonomous systems is often limited by the mismatch between the complex actual model and the simplified nominal model used in controller design. To overcome this issue, we develop the GRILC framework, which combines offline optimization, using information from the nominal model and the actual trajectory, with online system implementation. In addition, a partial and reprogrammable learning strategy is introduced. The proposed method is applied to an autonomous time-trialing example, and the learned control policies can be stored in a library for future motion planning. The simulation and experimental results illustrate the effectiveness and robustness of the proposed approach.
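For context, classical iterative learning control improves a feedforward input from one trial to the next using a model-based gradient of the tracking cost; GRILC builds on this style of update. The sketch below is the generic gradient-based ILC step, stated under the assumption of a linear(ized) nominal model y ≈ G u, and is not the paper's specific algorithm.

```python
import numpy as np

def ilc_update(u, y_meas, y_ref, G_nominal, step_size):
    """Generic gradient-based iterative learning control update: descend the
    tracking cost J = 0.5 * ||y_ref - y(u)||^2 using the nominal model's
    Jacobian G_nominal as a surrogate for dy/du."""
    error = y_ref - y_meas
    grad = -G_nominal.T @ error   # gradient of J w.r.t. the input sequence
    return u - step_size * grad   # next trial's feedforward input
```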

Speaker’s Bio: Kuan-Yu Tseng is a third-year Ph.D. student in Mechanical Engineering at UIUC, advised by Prof. Geir Dullerud. He received M.S. and B.S. degrees in Mechanical Engineering from National Taiwan University in 2019 and 2017, respectively. His research interests include control and motion planning in autonomous vehicles and robots.

Title: Pedestrian trajectory prediction meets social robot navigation

Abstract: Multi-pedestrian trajectory prediction is an indispensable element of safe and socially aware robot navigation among crowded spaces. Previous works assume that positions of all pedestrians are consistently tracked, which leads to biased interaction modeling if the robot has a limited sensor range and can only partially observe the pedestrians. We propose Gumbel Social Transformer, in which an Edge Gumbel Selector samples a sparse interaction graph of partially detected pedestrians at each time step. A Node Transformer Encoder and a Masked LSTM encode pedestrian features with sampled sparse graphs to predict trajectories. We demonstrate that our model overcomes potential problems caused by the assumptions, and our approach outperforms related works in trajectory prediction benchmarks.

Then, we redefine the personal zones of walking pedestrians with their future trajectories. To learn socially aware robot navigation policies, the predicted social zones are incorporated into a reinforcement learning framework to prevent the robot from intruding into the social zones. We propose a novel recurrent graph neural network with attention mechanisms to capture the interactions among agents through space and time. We demonstrate that our method enables the robot to achieve good navigation performance and non-invasiveness in challenging crowd navigation scenarios in simulation and real world.
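The edge-sampling step at the heart of the Gumbel Social Transformer can be illustrated with the straight-through Gumbel-softmax trick, sketched below. The tensor shapes and the two-way keep/drop parameterization are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def sample_sparse_edges(edge_logits, tau=1.0):
    """Straight-through Gumbel-softmax over candidate edges: per pedestrian,
    sample which (partially observed) neighbors to attend to, while keeping
    the sampling step differentiable for end-to-end training.
    edge_logits: (n_nodes, n_candidates, 2) keep/drop scores per edge."""
    onehot = F.gumbel_softmax(edge_logits, tau=tau, hard=True)
    return onehot[..., 0]  # (n_nodes, n_candidates) binary keep-mask
```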

Speakers’ Bios:  Shuijing Liu is a fourth year PhD student in Human-Centered Autonomy Lab in Electrical and Computer Engineering, University of Illinois at Urbana Champaign, advised by Professor Katherine Driggs-Campbell. Her research interests include learning-based robotics and human robot interaction. Her research primarily focuses on autonomous navigation in crowded and interactive environments using reinforcement learning.

Zhe Huang is a third year PhD student in Human-Centered Autonomy Lab in Electrical and Computer Engineering, University of Illinois at Urbana Champaign, advised by Professor Katherine Driggs-Campbell. His research is focused on pedestrian trajectory prediction and collaborative manufacturing.

02/25/22 – Guest Talk

Title: Safety and Generalization Guarantees for Learning-Based Control of Robots

Speaker: Professor Matthew Spenko, Illinois Institute of Technology

Abstract: For the past fifteen years, the RoboticsLab@IIT has focused on creating technologies to enable mobility in challenging environments. This talk highlights the lab’s contributions, from its work on gecko-inspired climbing and perching robots to the evaluation of navigation safety in self-driving cars, drone technology for tree science, and the development of amoeba-like soft robots. The last of these, soft robots, will make up the majority of the talk. Soft robots can offer many advantages over traditional rigid robots, including conformability to different object geometries, shape changing, safer physical interaction with humans, the ability to handle delicate objects, and grasping without the need for high-precision control algorithms. Despite these advantages, soft robots often lack high force capacity, scalability, responsive locomotion and object handling, and a self-contained untethered design, all of which have hindered their adoption. To address these issues, we have developed a series of robots composed of several rigid robotic subunits that are flexibly connected to each other and contain a granule-filled interior that enables a jamming transition from soft to rigid. The jamming feature allows the robots to exert relatively large forces on objects in the environment. The modular design resolves any scalability issues, and using decentralized robotic subunits allows the robot to configure itself in a variety of shapes and conform to objects, all while locomoting. The result is a compliant, high-degree-of-freedom system with excellent morphability.

Speaker’s Bio: Matthew Spenko is a professor in the Mechanical, Materials, and Aerospace Engineering Department at the Illinois Institute of Technology. Prof. Spenko earned the B.S. degree cum laude in Mechanical Engineering from Northwestern University in 1999 and the M.S. and Ph.D. degrees in Mechanical Engineering from Massachusetts Institute of Technology in 2001 and 2005 respectively. He was an Intelligence Community Postdoctoral Scholar in the Mechanical Engineering Department’s Center for Design Research at Stanford University from 2005 to 2007. He has been a faculty member at the Illinois Institute of Technology since 2007, received tenure in 2013, and was promoted to full professor in 2019. His research is in the general area of robotics with specific attention to mobility in challenging environments. Prof. Spenko is a senior member of IEEE and an associate editor of Field Robotics. His work has been featured in popular media such as the New York Times, CNET, Engadget, and Discovery-News. Examples of his robots are on permanent display in Chicago’s Museum of Science and Industry.

02/18/22 – Guest Talk

Title: Safety and Generalization Guarantees for Learning-Based Control of Robots

Speaker: Prof. Anirudha Majumdar, Assistant Professor, Princeton University

Abstract: The ability of machine learning techniques to process rich sensory inputs such as vision makes them highly appealing for use in robotic systems (e.g., micro aerial vehicles and robotic manipulators). However, the increasing adoption of learning-based components in the robotics perception and control pipeline poses an important challenge: how can we guarantee the safety and performance of such systems? As an example, consider a micro aerial vehicle that learns to navigate using a thousand different obstacle environments or a robotic manipulator that learns to grasp using a million objects in a dataset. How likely are these systems to remain safe and perform well on a novel (i.e., previously unseen) environment or object? How can we learn control policies for robotic systems that provably generalize to environments that our robot has not previously encountered? Unfortunately, existing approaches either do not provide such guarantees or do so only under very restrictive assumptions.

In this talk, I will present our group’s work on developing a principled theoretical and algorithmic framework for learning control policies for robotic systems with formal guarantees on generalization to novel environments. The key technical insight is to leverage and extend powerful techniques from generalization theory in theoretical machine learning. We apply our techniques on problems including vision-based navigation and grasping in order to demonstrate the ability to provide strong generalization guarantees on robotic systems with complicated (e.g., nonlinear/hybrid) dynamics, rich sensory inputs (e.g., RGB-D), and neural network-based control policies.
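As a concrete example of the kind of guarantee generalization theory provides, a classic McAllester-style PAC-Bayes bound is reproduced below; bounds of roughly this shape underlie such frameworks, though the exact form used in the speaker's work may differ. Here rho is the learned distribution over policies, rho_0 a data-independent prior, costs are assumed to lie in [0, 1], and the bound holds with probability at least 1 - delta over the draw of the N training environments.

```latex
\mathbb{E}_{\pi \sim \rho}\, C_{\mathrm{true}}(\pi)
\;\le\;
\mathbb{E}_{\pi \sim \rho}\, \frac{1}{N}\sum_{i=1}^{N} C(\pi; E_i)
\;+\;
\sqrt{\frac{\mathrm{KL}(\rho \,\|\, \rho_0) + \log\frac{2\sqrt{N}}{\delta}}{2N}}
```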

Speaker’s Bio: Anirudha Majumdar is an Assistant Professor at Princeton University in the Mechanical and Aerospace Engineering (MAE) department, and an Associated Faculty in the Computer Science department. He received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2016, and a B.S.E. in Mechanical Engineering and Mathematics from the University of Pennsylvania in 2011. Subsequently, he was a postdoctoral scholar at Stanford University from 2016 to 2017 at the Autonomous Systems Lab in the Aeronautics and Astronautics department. He is a recipient of the NSF CAREER award, the Google Faculty Research Award (twice), the Amazon Research Award (twice), the Young Faculty Researcher Award from the Toyota Research Institute, the Best Conference Paper Award at the International Conference on Robotics and Automation (ICRA), the Paper of the Year Award from the International Journal of Robotics Research (IJRR), the Alfred Rheinstein Faculty Award (Princeton), and the Excellence in Teaching Award from Princeton’s School of Engineering and Applied Science.


02/11/22 – Guest Talk

Title: Numerical Methods for Things That Move

Speaker: Zac Manchester, Assistant Professor, Carnegie Mellon University

Abstract: Recent advances in motion planning and control have led to dramatic successes like SpaceX’s autonomous rocket landings and Boston Dynamics’ humanoid robot acrobatics. However, the underlying numerical methods used in these applications are typically decades old, not tuned for high performance on planning and control problems, and are often unable to cope with the types of optimization problems that arise naturally in modern robotics applications like legged locomotion and autonomous driving. This talk will introduce new numerical optimization tools built to enable robotic systems that move with the same agility, efficiency, and safety as humans and animals. Some target applications include legged locomotion; autonomous driving; distributed control of satellite swarms; and spacecraft entry, descent, and landing. I will also discuss hardware platforms that we have deployed algorithms on, including quadrupeds, teams of quadrotors, and tiny satellites.

Speaker’s Bio: Zac Manchester is an Assistant Professor of Robotics at Carnegie Mellon University, founder of the KickSat project, and member of the Breakthrough Starshot Advisory Committee. He holds a Ph.D. in aerospace engineering and a B.S. in applied physics from Cornell University. Zac was a postdoc in the Agile Robotics Lab at Harvard University and previously worked at Stanford, NASA Ames Research Center and Analytical Graphics, Inc. He received a NASA Early Career Faculty Award in 2018 and has led three satellite missions. His research interests include motion planning, control, and numerical optimization, particularly with application to robotic locomotion and spacecraft guidance, navigation, and control.

Schedule: The seminar will be held every Friday at 1:00 PM CST starting 2/11.

Location: This semester, we will meet only virtually.

12/10/21 – Guest Talk

Title: Towards a Universal Modeling and Control Framework for Soft Robots

Speaker: Daniel Bruder, Harvard

Abstract: Soft robots have been an active area of research in the robotics community due to their inherent compliance and ability to safely interact with delicate objects and the environment. Despite their suitability for tasks involving physical human-robot interaction, their real-world applications have been limited due to the difficulty involved in modeling and controlling soft robotic systems. In this talk, I’ll describe two modeling approaches aimed at overcoming the limitations of previous methods. The first is a physics-based approach for fluid-driven actuators that offers predictions in terms of tunable geometrical parameters, making it a valuable tool in the design of soft fluid-driven robotic systems. The second is a data-driven approach that leverages Koopman operator theory to construct models that are linear, which enables the utilization of linear control techniques for nonlinear dynamical systems like soft robots. Using this Koopman-based approach, a pneumatically actuated soft continuum manipulator was able to autonomously perform manipulation tasks such as trajectory following and pick-and-place with a variable payload without undergoing any task-specific training. In the future, these approaches could offer a paradigm for designing and controlling all soft robotic systems, leading to their more widespread adoption in real-world applications.
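The data-driven half of the talk rests on a simple recipe, often called extended dynamic mode decomposition (EDMD): lift the state through a dictionary of nonlinear observables, then fit a linear operator on the lifted states by least squares. The sketch below shows that recipe for an autonomous system; the dictionary is an arbitrary illustrative choice, and control inputs (which the manipulator work in the talk includes) would be appended to the lifted state.

```python
import numpy as np

def lift(x):
    """Illustrative dictionary of observables; real soft-robot models use a
    richer, system-appropriate basis."""
    x = np.atleast_2d(x)
    return np.hstack([x, x**2, np.sin(x), np.ones((x.shape[0], 1))])

def fit_koopman(X, X_next):
    """EDMD: least-squares fit of a linear operator K such that
    lift(x_{t+1}) ~= lift(x_t) @ K. Linear control techniques (e.g., MPC)
    can then be applied in the lifted space."""
    Z, Z_next = lift(X), lift(X_next)
    K, *_ = np.linalg.lstsq(Z, Z_next, rcond=None)
    return K
```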

Bios: Daniel Bruder received a B.S. degree in engineering sciences from Harvard University in 2013, and a Ph.D. degree in mechanical engineering from the University of Michigan in 2020. He is currently a postdoctoral researcher in the Harvard Microrobotics Lab. He is a recipient of the NSF Graduate Research Fellowship and the Richard and Eleanor Towner Prize for Outstanding Ph.D. Research. His research interests include the design, modeling, and control of robotic systems, especially soft robots.

11/19/21 – Guest Talk

Title: Value function-based methods for safety-critical control

Speaker: Jason Jangho Choi, University of California, Berkeley

Abstract: Many safety-critical control methods leverage a value function that captures knowledge about how the safety constraint can be dynamically satisfied. These value functions appear in many different forms across the literature, for example in Hamilton-Jacobi reachability, Control Barrier Functions, and reinforcement learning. The value functions are often computationally heavy to construct; however, once computed offline, they can be evaluated quickly for online applications. In the first part of my talk, I will share some recent progress on methods for constructing these value functions. Specifically, I would like to discuss how different notions of value functions can be merged into a unified concept, and will introduce a new dynamic programming principle that can effectively compute reachability value functions for hybrid systems like walking robots. In the second part, I will discuss the main issue that arises when value functions computed offline are deployed for online safety control, namely model uncertainty, and how we can address this problem effectively with data-driven methods.
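One of the value-function-based methods named above, the Control Barrier Function, is typically deployed as a quadratic-program safety filter around a nominal controller. With a single affine constraint the QP has a closed-form projection, shown below for a control-affine system; this is the textbook construction, not the speaker's specific formulation.

```python
import numpy as np

def cbf_safety_filter(u_des, Lf_h, Lg_h, h, alpha=1.0):
    """Minimal CBF-QP safety filter for a control-affine system
    x_dot = f(x) + g(x) u with barrier h(x) >= 0:
        min ||u - u_des||^2  s.t.  Lf_h + Lg_h @ u + alpha * h >= 0.
    With one affine constraint, the QP reduces to this projection."""
    a = np.atleast_1d(Lg_h).astype(float)
    b = Lf_h + alpha * h
    slack = a @ u_des + b
    if slack >= 0:                       # desired input is already safe
        return u_des
    return u_des - a * slack / (a @ a)   # minimal correction onto the boundary
```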

Bios: Jason Jangho Choi is a PhD student at the University of California, Berkeley, working with Professor Koushil Sreenath and Professor Claire Tomlin. He finished his undergraduate studies in mechanical engineering at Seoul National University. His research interests center on optimal control theories for nonlinear and hybrid systems, data-driven methods for safety control, and their applications to robotics mobility.

11/12/21 – Guest Talk

Title: Planning and Learning for Maneuvering Mobile Robots in Complex Environments

Speaker: Lantao Liu, Assistant Professor in the Department of Intelligent Systems Engineering at Indiana University-Bloomington

Abstract: In the first part of the talk, I will discuss our recent progress on the continuous-state Markov Decision Processes (MDPs) that can be utilized to address autonomous navigation and control in unstructured off-road environments. Our solution integrates a diffusion-type approximation to the robot stochastic transition model and a kernel-type approximation to the robot state values, so that the decision can be efficiently computed for real-time navigation. Results from unmanned ground vehicles demonstrate the applicability in challenging real-world environments. Then I will discuss the decision making with time-varying disturbances, the solution of which can navigate unmanned aerial vehicles disturbed by air turbulence or unmanned underwater vehicles disturbed by ocean currents.  We explore the time-varying stochasticity of robot motion and investigate robot state reachability, based on which we design an efficient iterative method that offers a good trade-off between solution optimality and time complexity. Finally, I will present an adaptive sampling (active learning) and informative planning framework for fast modeling (mapping) unknown environments such as large ocean floors or time-varying air/water pollution. We consider real-world constraints such as multiple mission objectives as well as environmental dynamics. Preliminary results from an unmanned surface vehicle also demonstrate high efficiency of the method.

Bios: Lantao Liu is an Assistant Professor in the Department of Intelligent Systems Engineering at Indiana University-Bloomington. His main research interests include robotics and artificial intelligence. He has been working on planning, learning, and coordination techniques for autonomous systems (air, ground, sea) involving single or multiple robots with potential applications in navigation and control, surveillance and security, search and rescue, smart transportation, as well as environmental monitoring. Before joining Indiana University, he was a Research Associate in the Department of Computer Science at the University of Southern California during 2015 – 2017. He also worked as a Postdoctoral Fellow in the Robotics Institute at Carnegie Mellon University during 2013 – 2015. He received a Ph.D. from the Department of Computer Science and Engineering at Texas A&M University in 2013, and a Bachelor’s degree from the Department of Automatic Control at Beijing Institute of Technology in 2007.

11/05/21 – Faculty Talks

Title: Resilience of Autonomous Systems: A Step Beyond Adaptation

Speaker: Melkior Ornik, Assistant Professor in the Department of Aerospace Engineering, UIUC

Abstract: The ability of a system to correctly respond to a sudden adverse event is critical for high-level autonomy in complex, changing, or remote environments. By assuming continuing structural knowledge about the system, classical methods of adaptive or robust control largely attempt to design control laws which enable the system to complete its original task even after an adverse event. However, catastrophic events such as physical system damage may simply render the original task impossible to complete. In that case, design of any control law that attempts to complete the task is doomed to be unsuccessful. Instead, the system should recognize the task as impossible to complete, propose an alternative that is certifiably completable given the current knowledge, and formulate a control law that drives the system to complete this new task. To do so, in this talk I will present the emergent twin frameworks of quantitative resilience and guaranteed reachability. Combining methods of optimal control, online learning, and reachability analysis, these frameworks first compute a set of temporal tasks completable by all systems consistent with the current partial knowledge, possibly within a time budget. These tasks can then be pursued by online learning and adaptation methods. The talk will consider three scenarios: actuator degradation, loss of control authority, and structural change in system dynamics, and will briefly present a number of applications to maritime and aerial vehicles as well as opinion dynamics. Finally, I will identify promising future directions of research, including real-time safety-assured mission planning, resilience of physical infrastructure, and perception-based task assignment.

Bios: Melkior Ornik is an assistant professor in the Department of Aerospace Engineering at the University of Illinois Urbana-Champaign, also affiliated with the Coordinated Science Laboratory, Department of Electrical and Computer Engineering and the Discovery Partners Institute. He received his Ph.D. degree from the University of Toronto in 2017. His research focuses on developing theory and algorithms for control, learning and task planning in autonomous systems that operate in uncertain, changing, or adversarial environments, as well as in scenarios where only limited knowledge of the system is available.


10/29/21 – Student Talks

Title: Semi-Infinite Programming’s Application in Robotics

Speaker: Mengchao Zhang, UIUC

Abstract: In optimization theory, semi-infinite programming (SIP) refers to an optimization problem with a finite number of variables and an infinite number of constraints, or an infinite number of variables and a finite number of constraints. In this talk, I will introduce our work that uses SIP to solve problems in the field of robotics.

In the semi-infinite program with complementarity constraints (SIPCC) work, we use SIP to address the fact that contact is an infinite phenomenon involving continuous regions of interaction. Our method enables a gripper to find a feasible pose to hold (non-)convex objects while ensuring force and torque balance. In the non-penetration iterative closest point work for single-view multi-object 6D pose estimation, we use SIP to resolve penetration between (non-)convex objects. By introducing non-penetration constraints into the iterative closest point (ICP) framework, we improve the pose estimation accuracy of deep neural network-based methods. Our method also outperforms the best result on the IC-BIN dataset in the Benchmark for 6D Object Pose Estimation.
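A standard way to make the infinitely many constraints of an SIP tractable, and a reasonable mental model for work like this, is the exchange (constraint-generation) method: solve with a finite subset of constraints, find the most-violated constraint index, add it, and repeat. The sketch below implements that loop for a one-dimensional index set with SciPy; it is a generic textbook scheme with invented names, not the authors' solver.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def solve_sip(objective, g, x0, t_lo, t_hi, iters=20, tol=1e-6):
    """Exchange method for  min_x objective(x)  s.t.  g(x, t) <= 0 for all
    t in [t_lo, t_hi]: solve with finitely many sampled constraints, then
    add the currently most-violated index t and repeat."""
    active_ts = [t_lo, t_hi]
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        cons = [{"type": "ineq", "fun": (lambda x, t=t: -g(x, t))}
                for t in active_ts]
        x = minimize(objective, x, constraints=cons).x
        # find the most violated constraint over the infinite index set
        worst = minimize_scalar(lambda t: -g(x, t), bounds=(t_lo, t_hi),
                                method="bounded")
        if g(x, worst.x) <= tol:
            return x  # feasible for (approximately) all t
        active_ts.append(worst.x)
    return x
```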

Bios: Mengchao Zhang is a Ph.D. student in the IML laboratory at UIUC. His research interests include motion planning, manipulation, perception, and optimization.

Title: Data-driven MPC: Applications and Tools

Speaker: William Edwards, UIUC

Abstract: Many of the most exciting and challenging applications in robotics involve control of novel systems with unknown nonlinear dynamics.  When such systems are infeasible to model analytically or numerically, roboticists often turn to data-driven techniques for modelling and control.  This talk will cover two projects relating to this theme.  First, I will discuss an application of data-driven modelling and control to needle insertion in deep anterior lamellar keratoplasty (DALK), a challenging problem in surgical robotics.  Second, I will introduce a new software library, AutoMPC, created to automate the design of data-driven model predictive controllers and make state-of-the-art algorithms more accessible for a wide range of applications.
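As a caricature of what a data-driven model predictive controller does at run time, the sketch below rolls random input sequences through a learned one-step model and executes the first input of the cheapest rollout. AutoMPC automates and tunes far more capable model and optimizer choices than this; every name below is a placeholder, not AutoMPC's API.

```python
import numpy as np

def shooting_mpc(x0, learned_step, cost, horizon=20, n_samples=256,
                 u_bounds=(-1.0, 1.0)):
    """Toy sampling-based MPC on a learned dynamics model: roll out random
    input sequences through learned_step(x, u), keep the cheapest, and
    return only its first input (receding horizon)."""
    best_cost, best_u0 = np.inf, 0.0
    for _ in range(n_samples):
        u_seq = np.random.uniform(*u_bounds, size=horizon)
        x, total = x0, 0.0
        for u in u_seq:
            x = learned_step(x, u)
            total += cost(x, u)
        if total < best_cost:
            best_cost, best_u0 = total, u_seq[0]
    return best_u0
```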

Bios: William Edwards is a third-year Computer Science PhD student in the Intelligent Motion Laboratory at UIUC, advised by Dr. Kris Hauser.  He received Bachelor’s degrees in Computer Science and Mathematics from the University of South Carolina in 2019.  His research interests include motion planning, dynamics learning, and optimization.

10/08/21 – Faculty Talks

Title: Research at the RoboDesign Lab at UIUC

Speaker: Joao Ramos, Assistant Professor at UIUC

Abstract: The research at RoboDesign Lab intersects the study of the design, control, and dynamics of robots in parallel with human-machine interaction. We focus on developing hardware, software, and human-centered approaches to push the physical limits of robots to realize physically demanding tasks. In this talk, I will cover several ongoing research topics in the lab, such as the development of custom Human-Machine Interface (HMI) that enables bilateral teleoperation of mobile robots, a wheeled humanoid robot for dynamic mobile manipulation, actuation design for dynamic humanoid robots, and assistive devices for individuals with mobility impairment.

Bios: Joao Ramos is an Assistant Professor at the University of Illinois at Urbana-Champaign and the director of the RoboDesign Lab. He previously worked as a Postdoctoral Associate at the Biomimetic Robotics Laboratory at the Massachusetts Institute of Technology. He received a PhD from the Department of Mechanical Engineering at MIT in 2018. He is the recipient of a 2021 NSF CAREER Award. His research focuses on the design and control of dynamic robotic systems, in addition to human-machine interfaces, legged locomotion dynamics, and actuation systems.

10/01/21 – Student Talks

Title: Multi-sensor fusion for agricultural autonomous navigation

Speaker: Mateus Valverde Gasparino, UIUC

Abstract: In most agricultural setups, vehicles rely on accurate GNSS estimates to navigate autonomously. However, for small under-canopy robots, accurate position estimation cannot be guaranteed between crop rows. To address this problem, this presentation describes a solution for autonomous navigation in a semi-structured agricultural environment. We demonstrate a navigation system that autonomously chooses between reference modalities to cover long areas of the farm and extend the navigation range beyond crop rows alone. By choosing the best reference to follow, the robot can compensate for GNSS signal attenuation and use the agricultural structure to navigate autonomously. A low-cost and compact robotic platform, designed to automate the measurement of plant traits, is used to implement and evaluate the system. We show two different perception systems that can be used in this framework: one LiDAR-based and one vision-based. We validate the system in a real agricultural environment and show it can effectively navigate for 4.5 km with only 6 human interventions.

Bios: Mateus Valverde Gasparino is a second-year Ph.D. student advised by Prof. Girish Chowdhary at the University of Illinois at Urbana-Champaign. He was awarded an M.Sc. degree in mechanical engineering and a Bachelor’s degree in mechatronics engineering from the University of São Paulo, Brazil. He is currently a graduate research assistant in the Distributed Autonomous Systems Laboratory (DASLab), and his research interests include perception systems, mapping, control, and learning for robots in unstructured and semi-structured outdoor environments.

Title: Learned Visual Navigation for Under-Canopy Agricultural Robots

Speaker: Arun Narenthiran Sivakumar, UIUC

Abstract: We describe a system for visually guided autonomous navigation of under-canopy farm robots. Low-cost under-canopy robots can drive between crop rows under the plant canopy and accomplish tasks that are infeasible for over-the-canopy drones or larger agricultural equipment. However, autonomously navigating them under the canopy presents a number of challenges: unreliable GPS and LiDAR, high cost of sensing, challenging farm terrain, clutter due to leaves and weeds, and large variability in appearance over the season and across crop types. We address these challenges by building a modular system that leverages machine learning for robust and generalizable perception from monocular RGB images from low-cost cameras, and model predictive control for accurate control in challenging terrain. Our system, CropFollow, is able to autonomously drive 485 meters per intervention on average, outperforming a state-of-the-art LiDAR based system (286 meters per intervention) in extensive field testing spanning over 25 km.

Bios: Arun Narenthiran Sivakumar is a third year Ph.D. student in the Distributed Autonomous Systems Laboratory (DASLAB) at UIUC advised by Prof. Girish Chowdhary. He received his Bachelor’s degree in Mechanical Engineering in 2017 from VIT University, India, and a Master’s degree in Agricultural and Biological Systems Engineering with a minor in Computer Science in 2019 from the University of Nebraska, Lincoln. His research interests are applications of vision and learning based robotics in agriculture.

 

9/24/21 – Guest Talk

Title: Hello Robot: Democratizing Mobile Manipulation


Speakers: Aaron Edsinger, CEO and Cofounder and Charlie Kemp, CTO and Cofounder, Hello Robot

Abstract: Mobile manipulators have the potential to improve life for everyone, yet adoption of this emerging technology has been limited. To encourage an inclusive future, Hello Robot developed the Stretch RE1, a compact and lightweight mobile manipulator for research that achieves a new level of affordability. The Stretch RE1 and Hello Robot’s open approach are inspiring a growing community of researchers to explore the future of mobile manipulation. In this talk, we will present the Stretch RE1 and the growing community and ecosystem around it. We will present our exciting collaboration with Professor Wendy Rogers’ lab and provide a live demonstration of Stretch. Finally, we will be announcing the Stretch Robot Pitch Competition, a collaboration with TechSage and Procter & Gamble, where students have the opportunity to generate novel design concepts for Stretch that address the needs of individuals aging with disabilities at home.

There will also be information during the seminar on a competition where winners will receive a cash prize and be able to work with Hello Robot’s Stretch robot in the McKechnie Family LIFE Home.

Bios:

Aaron Edsinger, CEO and Cofounder: Aaron has a passion for building robots and robot companies. He has founded four companies focused on commercializing human collaborative robots. Two of these companies, Meka Robotics and Redwood Robotics, were acquired by Google in 2013. As Robotics Director at Google, Aaron led the business, product, and technical development of two of Google’s central investments in robotics. Aaron received his Ph.D. from MIT CSAIL in 2007.

Charlie Kemp, CTO and Cofounder: Charlie is a recognized expert on mobile manipulation. In 2007, he founded the Healthcare Robotics Lab at Georgia Tech, where he is an associate professor in the Department of Biomedical Engineering. His lab has conducted extensive research on mobile manipulators to assist older adults and people with disabilities. Charlie earned a B.S., M.Eng., and Ph.D. from MIT. He first met Aaron while conducting research in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).


Previous Talks:

See [MORE INFO] for links to speaker webpages:

This Semester’s First Talk:

9/17/21 – Faculty Talk

Title: Introduction of KIMLAB (Kinetic Intelligent Machine LAB)

Speaker: Joohyung Kim, Associate Professor of Electrical and Computer Engineering, UIUC

Abstract: In this talk, I will share what is going on in KIMLAB, the Kinetic Intelligent Machine LAB. I will briefly introduce myself, then present some of the robots, research, and equipment in KIMLAB, and explain how our current efforts relate to my previous research and future directions.

Bio: Joohyung Kim is currently an Associate Professor of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign. His research focuses on design and control for humanoid robots, systems for motion learning in robot hardware, and safe human-robot interaction. He received BSE and Ph.D. degrees in Electrical Engineering and Computer Science (EECS) from Seoul National University, Korea. Prior to joining UIUC, he was a Research Scientist at Disney Research, working on robots for animation characters.

5/07/21 – Student Talks

Title: A Comparison Between Joint Space and Task Space Mappings for Dynamic Teleoperation of an Anthropomorphic Robotic Arm in Reaction Tests

Speaker: Sunyu, UIUC 

Abstract: Teleoperation, i.e., controlling a robot with human motion, proves promising in enabling a humanoid robot to move as dynamically as a human. But how human motion is mapped to a humanoid robot matters, because a human and a humanoid robot rarely have identical topologies and dimensions. This work presents an experimental study that uses reaction tests to compare joint space and task space mappings for dynamic teleoperation of an anthropomorphic robotic arm that possesses human-level dynamic motion capabilities. The experimental results suggest that the robot achieved similar and, in some cases, human-level dynamic performance with both mappings for the six participating human subjects. All subjects became proficient at teleoperating the robot with both mappings after practice, even though the subjects and the robot differed in size and link length ratio and the teleoperation required the subjects to move unintuitively. Yet most subjects developed their teleoperation proficiency more quickly with task space mapping than with joint space mapping after similar amounts of practice. This study also indicates the potential value of three-dimensional task space mapping, a teleoperation training simulator, and force feedback to the human pilot for intuitive and dynamic teleoperation of a humanoid robot’s arms.
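
To make the two mappings concrete, the following sketch contrasts them on a planar 2-link arm: joint space mapping copies the human’s joint angles directly, while task space mapping scales the human hand position into the robot workspace and solves inverse kinematics. The link lengths, scaling factor, and 2-link reduction are illustrative assumptions, not the study’s implementation.

    import numpy as np

    L1, L2 = 0.30, 0.25        # robot link lengths (m); the human's differ

    def joint_space_map(human_q):
        """Joint space mapping: command the human's joint angles directly."""
        return np.asarray(human_q)

    def task_space_map(human_hand_xy, scale=0.8):
        """Task space mapping: scale the human hand position into the robot
        workspace, then solve 2-link inverse kinematics (elbow-down)."""
        x, y = scale * human_hand_xy[0], scale * human_hand_xy[1]
        c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
        c2 = np.clip(c2, -1.0, 1.0)    # clamp to stay inside the workspace
        q2 = np.arccos(c2)
        q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
        return np.array([q1, q2])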

Title: Safe and Efficient Robot Learning Using Structured Policies

Speaker: Anqi Li, UW

Abstract: Traditionally, modeling and control techniques have been regarded as the fundamental tools for studying robotic systems. Although they can provide theoretical guarantees, these tools make limiting modeling assumptions. Recently, learning-based methods have shown success in tackling problems that are challenging for traditional techniques. Despite their advantages, it is unrealistic to directly apply most learning algorithms to robotic systems due to issues such as sample complexity and safety concerns. In this line of work, we aim to make robot learning explainable, sample efficient, and safe by construction through encoding structure into policy classes. In particular, we focus on a class of structured policies for robotic problems with multiple objectives, in which complex motions are generated by combining simple behaviors given by Riemannian Motion Policies (RMPs). It can be shown that the combined policy is stable if the individual policies satisfy a class of control Lyapunov conditions, which can imply safety. Given such a policy representation, we learn policies with this structure so that formal guarantees are provided. To do so, we keep the safety-critical policies, e.g., collision avoidance and joint-limit policies, fixed during learning, and we can also make use of the known robot kinematics. We show that learning with such structure is effective on a number of learning-from-human-demonstration and reinforcement learning tasks.
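
The metric-weighted combination at the heart of RMPs fits in a few lines. The sketch below resolves several (acceleration, metric) pairs as a = pinv(sum_i M_i) @ (sum_i M_i @ a_i), simplified to a single shared space with no pullback through task maps; the goal attractor is a made-up example subtask, not a policy from the talk.

    import numpy as np

    def combine_rmps(policies, x, x_dot):
        """Each policy maps (x, x_dot) to an acceleration and a metric;
        resolve them as a = pinv(sum_i M_i) @ (sum_i M_i @ a_i)."""
        M_sum = np.zeros((len(x), len(x)))
        f_sum = np.zeros(len(x))
        for policy in policies:
            a_i, M_i = policy(x, x_dot)
            M_sum += M_i
            f_sum += M_i @ a_i
        return np.linalg.pinv(M_sum) @ f_sum    # least-squares resolution

    def goal_attractor(goal, gain=5.0, damping=2.0):
        """A made-up example subtask: PD attraction with isotropic metric."""
        def policy(x, x_dot):
            return gain * (goal - x) - damping * x_dot, np.eye(len(x))
        return policy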


4/30/21 – Faculty Talks

Title: Robotic Manipulation – From Representations to Actions

Speaker: Prof. Kaiyu Hang, Rice

Abstract: Dexterous manipulation is an integrated task involving a number of subproblems, such as perception, planning, and control. Problem representations, which are essential elements of a system that define what problem is actually being considered, determine both the capability of a system and the feasibility of applying such a system to real tasks.

In this talk, I will introduce how good representations can convert difficult problems into easier ones. First, I will discuss the development of representations for grasp optimization, and show how a representation can simplify and unify the whole grasping system, including globally optimal grasp planning, sensing, adaptation, and control. By adapting such representations to various task scenarios, I further show how they can greatly facilitate other applications, such as grasp-aware motion planning, optimal placement planning, and even dual-arm manipulation. Second, I will introduce our work on underactuated manipulation using soft robotic hands. For underactuated hands without any joint encoders or tactile sensors, I present representations that enable a robot to interact with tabletop objects using nonprehensile manipulation in order to finally grasp them. After a grasp is obtained by such sensorless hands, I discuss our system that registers the object into the robot’s own hand-object system via interactive perception, so as to eventually enable precise and controllable in-hand manipulation.


4/23/21 – Student Talks

Title: Optimization-based Control for Highly Dynamic Legged Locomotion

Speaker: Dr. Yanran Ding, UIUC 

Abstract: Legged animals in nature can perform highly dynamic movements elegantly and efficiently, whether running down a steep hill or leaping between branches. Transferring part of this animal agility to legged robots would open countless possibilities in disaster response, transportation, and space exploration. The topic of this talk is motion control in highly dynamic legged locomotion applications. In this talk, instantaneous control of the small and agile quadruped Panther is presented in a squat-jumping experiment, in which it reached a maximal height of 0.7 m using a quadratic program (QP)-based reactive controller. Control over a short prediction horizon is achieved in real time with a model predictive control (MPC) framework. We present a representation-free MPC (RF-MPC) formulation that directly uses the rotation matrix to describe orientation, which enables complex 3D acrobatic motions that were previously unachievable with Euler angles due to the presence of singularities. We experimentally validate the motion control methods on Panther.
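
A small illustration of why the rotation-matrix representation matters: integrating orientation on SO(3) with the exponential map is well-defined everywhere, whereas Euler-angle rates blow up near gimbal lock. The sketch below is a generic Rodrigues-formula integrator, not the RF-MPC implementation itself.

    import numpy as np

    def hat(w):
        """Skew-symmetric matrix: hat(w) @ v equals np.cross(w, v)."""
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def integrate_rotation(R, omega_body, dt):
        """One step of R' = R @ exp(hat(omega) * dt) via Rodrigues' formula.
        No singularities for any orientation, unlike Euler-angle rates."""
        angle = np.linalg.norm(omega_body) * dt
        if angle < 1e-12:
            return R
        K = hat(omega_body / np.linalg.norm(omega_body))
        expm = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
        return R @ expm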

Title: Hand Modeling and Simulation Using Stabilized Magnetic Resonance Imaging

Speaker: Bohan Wang, USC

Abstract: We demonstrate how to acquire complete human hand bone anatomy (meshes) in multiple poses using magnetic resonance imaging (MRI). Such acquisition was previously difficult because MRI scans must be long for high-precision results (over 10 minutes) and because humans cannot hold the hand perfectly still in non-trivial and badly supported poses. We invent a manufacturing process whereby we use lifecasting materials commonly employed in the film special-effects industry to generate hand molds, personalized to the subject and to each pose. These molds are both ergonomic and encasing, and they stabilize the hand during scanning. We also demonstrate how to efficiently segment the MRI scans into individual bone meshes in all poses, and how to correspond each bone’s mesh to the same mesh connectivity across all poses. Next, we interpolate and extrapolate the MRI-acquired bone meshes to the entire range of motion of the hand, producing an accurate data-driven animation-ready rig for bone meshes. We also demonstrate how to acquire not just bone geometry (using MRI) in each pose, but also matching, highly accurate surface geometry (using optical scanners) in each pose, modeling skin pores and wrinkles. We also give a soft-tissue Finite Element Method simulation “rig”, consisting of novel tet meshing for stability at the joints, spatially varying geometric and material detail, and quality constraints to the acquired skeleton kinematic rig. Given an animation sequence of hand joint angles, our FEM soft-tissue rig produces quality hand surface shapes in arbitrary poses in the hand range of motion. Our results qualitatively reproduce important features seen in photographs of the subject’s hand, such as similar overall organic shape and fold formation.

4/16/21 – Faculty Talks

Title: Compositional Learning for Robot Autonomy via Modularity and Abstraction

Speaker: Dr. Rahul Shome, Rice

Abstract: Endowing robots with the ability to solve real-world problems requires careful design of approaches that can address tasks involving multiple robots, objects, and motions. Recent results have demonstrated a scalable, roadmap-based approach (dRRT*) to effectively decompose the search space to plan motions for multiple articulated robots. Variants of task and motion planning problems that require object rearrangement have been mapped to other efficiently solvable combinatorial problems over high-level structures that can guide effective solution discovery. Recent insights into the theoretical property of asymptotic optimality can inform the design of methods that provide solution quality guarantees in planning for a rich class of tasks and motions.

4/09/21 – Student Talks

Title: Long-Term Pedestrian Trajectory Prediction Using Mutable Intention Filter and Warp LSTM

Speaker: Zhe Huang, UIUC 

Abstract: Trajectory prediction is one of the key capabilities for robots to safely navigate and interact with pedestrians. Critical insights from human intention and behavioral patterns need to be integrated to effectively forecast long-term pedestrian behavior. Thus, we propose a framework incorporating a mutable intention filter and a Warp LSTM (MIF-WLSTM) to simultaneously estimate human intention and perform trajectory prediction. The mutable intention filter is inspired by particle filtering and genetic algorithms, where particles represent intention hypotheses that can mutate throughout the pedestrian’s motion. Instead of predicting sequential displacement over time, our Warp LSTM learns to generate offsets on a full trajectory predicted by a nominal intention-aware linear model, which considers the intention hypotheses during the filtering process. Through experiments on a publicly available dataset, we show that our method outperforms baseline approaches and demonstrates robust performance under abnormal intention-changing scenarios.
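
A toy version of the mutable-intention idea, under the (illustrative) assumption of a nominal straight-to-goal motion model: particles carry goal hypotheses, are reweighted by how well the nominal model explains each observed step, and occasionally mutate to a different goal, which lets the filter track intention changes. The Warp LSTM refinement from the talk is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    def intention_filter(goals, trajectory, n_particles=100, mutate_p=0.05):
        """goals: (G, 2) candidate goal positions; trajectory: (T, 2) observed
        pedestrian positions. Returns an approximate posterior over goals."""
        particles = rng.integers(len(goals), size=n_particles)  # goal indices
        weights = np.full(n_particles, 1.0 / n_particles)
        for pos, nxt in zip(trajectory[:-1], trajectory[1:]):
            obs = (nxt - pos) / (np.linalg.norm(nxt - pos) + 1e-9)
            for i in range(n_particles):
                if rng.random() < mutate_p:                     # mutation step
                    particles[i] = rng.integers(len(goals))
                pred = goals[particles[i]] - pos
                pred = pred / (np.linalg.norm(pred) + 1e-9)     # nominal heading
                weights[i] *= np.exp(5.0 * float(pred @ obs))   # agreement score
            weights /= weights.sum()
            idx = rng.choice(n_particles, n_particles, p=weights)  # resample
            particles = particles[idx]
            weights[:] = 1.0 / n_particles
        return np.bincount(particles, minlength=len(goals)) / n_particles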

Title: Robot Learning through Interactions with Humans

Speaker: Shuijing Liu, UIUC

Abstract: As robots become prevalent in people’s daily lives, it is important for them to learn to make intelligent decisions in interactive environments with humans. In this talk, I will present our recent work on learning-based robot decision making through different types of human-robot interaction. In one line of work, we study robot navigation in human crowds and propose a novel deep neural network that enables the robot to reason about its spatial and temporal relationships with humans. In addition, we seek to improve human-robot collaboration in crowd navigation through active human intent estimation. In another line of work, we explore the interpretation of sound for robot decision making, inspired by human speech comprehension. Similar to how humans map a sound to meaning, we propose an end-to-end deep neural network that directly interprets sound commands for vision-based decision making. We continue this work by developing robot sensorimotor contingency with sound, sight, and motors through self-supervised learning.

4/02/21 – Faculty Talks

Title: Compositional Learning for Robot Autonomy via Modularity and Abstraction

Speaker: Assistant Professor Yuke Zhu, UT Austin

Abstract: Building robot intelligence for long-term autonomy demands robust perception and decision-making algorithms at scale. Recent advances in deep learning have achieved impressive results on end-to-end learning of robot behaviors from pixels to torques. However, the prohibitive costs of training for sophisticated behaviors have told us: “There is no ladder to the moon.” I argue that the functional decomposition of the pixels-to-torques problem via modularity and abstraction is the key to scaling up robot learning methods. In this talk, I will present our recent work on compositional modeling of robot autonomy. I will discuss our algorithms for developing state and action abstractions from raw signals. With these abstractions, I will introduce our work on neuro-symbolic planners that achieve compositional generalization in long-horizon manipulation tasks.

3/26/21 – Student Talks

Title: Control, Estimation and Planning for Coordinated Transport of a Slung Load By a Team of Aerial Robots

Speaker: Junyi Geng, UIUC 

Abstract: This talk will discuss the development of a self-contained transportation system that uses multiple autonomous aerial robots to cooperatively transport a single slung load. A “load-leading” concept is proposed and developed for this cooperative transportation problem. Different from existing approaches, which usually fly a formation and treat the external slung load as a disturbance, ignoring the payload dynamics, this approach attaches sensors onboard the payload so that the payload can sense itself and lead the whole fleet. This unique design leads to a hierarchical load-leading control strategy, which is scalable and allows human-in-the-loop operation in addition to fully autonomous operation. It also enables a strategy for estimating payload parameters so as to improve model accuracy: by manipulating the payload through the cables driven by the drones, the payload inertial parameters can be estimated, improving closed-loop performance. The payload design also leads to convenient cooperative trajectory planning, which reduces to a simpler planning problem for the payload. Lastly, a load-distribution-based trajectory planning and control approach is developed to achieve near-equal load distribution among the aerial vehicles for energy efficiency. This whole load-leading design enables the cooperative transportation team to fly longer, farther, and smarter. Components of this system are tested in simulation and in indoor and outdoor flight experiments, demonstrating the effectiveness of the developed slung-load transportation system.

Title: REDMAX: Efficient & Flexible Approach for Articulated Dynamics

Speaker: Ying Wang, Texas A&M

Abstract: Industrial manipulators do not collapse under their own weight when powered off due to the friction in their joints. Although these mechanisms are effective for stiff position control in pick-and-place tasks, they are inappropriate for legged robots, which must rapidly regulate compliant interactions with the environment. However, no metric exists to quantify the robot’s performance degradation due to mechanical losses in the actuators. We provide a novel formulation that describes how the efficiency of individual actuators propagates to the equations of motion of the whole robot. We quantitatively demonstrate the intuitive fact that the apparent inertia of a robot increases in the presence of joint friction. We also reproduce the empirical result that robots which employ high gearing and low-efficiency actuators can statically sustain more substantial external loads. We expect that this framework will provide the foundation for designing the next generation of legged robots that can effectively interact with the world.
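
The “apparent inertia grows with gearing and friction” intuition can be seen with textbook reflected-inertia arithmetic. The sketch below is a standard approximation, not the speaker’s formulation: a rotor of inertia J_m behind a gear ratio N contributes N^2 * J_m at the joint, and low transmission efficiency lets a joint statically resist loads beyond what motor torque alone suggests.

    def apparent_joint_inertia(J_link, J_motor, N):
        """Reflected-inertia approximation: the rotor contributes N^2 * J_motor
        at the joint, which dominates at high gear ratios N."""
        return J_link + N**2 * J_motor

    def holdable_static_load(tau_motor_max, N, eta):
        """Rough static bound: low transmission efficiency eta (high internal
        friction) lets the joint resist loads beyond N * tau with no power."""
        return N * tau_motor_max / eta

    print(apparent_joint_inertia(J_link=0.02, J_motor=5e-5, N=50))  # 0.145 kg m^2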


3/12/21 – Student Talks

Title: Creating Practical Magnetic Indoor Positioning Systems

Speaker: David Hanley, UIUC

Abstract: Steel studs, HVAC systems, rebar, and many other building components produce spatially varying magnetic fields. Magnetometers can measure these fields and can be used in combination with inertial sensors for indoor navigation of robots and handheld devices like smartphones. This talk takes an empirical approach to improving the performance of magnetic field-based navigation systems in practice. In support of this goal, I will describe a dataset intended to improve empirical studies of these systems within the research community. I will then present the impact that a commonly used “planar assumption” has on the accuracy of current magnetic field-based navigation systems. The lack of robustness shown in this evaluation motivates both new algorithms for this type of navigation and new hardware, and I will discuss progress on both.

Title: Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis

Speaker: Youngwoo Sim, UIUC

Abstract: Industrial manipulators do not collapse under their own weight when powered off due to the friction in their joints. Although these mechanisms are effective for stiff position control in pick-and-place tasks, they are inappropriate for legged robots, which must rapidly regulate compliant interactions with the environment. However, no metric exists to quantify the robot’s performance degradation due to mechanical losses in the actuators. We provide a novel formulation that describes how the efficiency of individual actuators propagates to the equations of motion of the whole robot. We quantitatively demonstrate the intuitive fact that the apparent inertia of a robot increases in the presence of joint friction. We also reproduce the empirical result that robots which employ high gearing and low-efficiency actuators can statically sustain more substantial external loads. We expect that this framework will provide the foundation for designing the next generation of legged robots that can effectively interact with the world.


3/05/21 – Faculty Talks

Title: Collaborative Construction and Communication with Minecraft

Speaker: Associate Professor Julia Hockenmaier, UIUC

Abstract: Virtual gaming platforms such as Minecraft allow us to study situated natural language generation and understanding tasks for agents that operate in complex 3D environments. In this talk, I will present work done by my group on defining a collaborative Blocks World construction task in Minecraft. In this task, one player (the Architect) needs to instruct another (the Builder) via a chat interface to construct a given target structure that only the Architect is shown. Although humans easily complete this task (often after lengthy back-and-forth dialogue), creating agents for each of these roles poses a number of challenges for current NLP technologies. To understand these challenges, I will describe the dataset we have collected for this task, as well as the models that we have developed for both roles. I look forward to a discussion of how to adapt this work to natural language communication with actual robots rather than simulated agents.

Bio: Julia Hockenmaier is an associate professor at the University of Illinois at Urbana-Champaign. She has received a CAREER award for her work on CCG-based grammar induction and an IJCAI-JAIR Best Paper Prize for her work on image description. She has served as member and chair of the NAACL board, president of SIGNLL, and as program chair of CoNLL 2013 and EMNLP 2018.

2/19/21 – Student Talks

Title: Optimization-Based Visuotactile Deformable Object Capture 

Speaker: Zherong Pan, UIUC 

Abstract: Robots interact with deformable objects all the time and rely on perception systems to estimate their state. While the shape of an object can be captured visually, its physical properties must be estimated through tactile interactions. We propose an optimization-based formulation that reconstructs a simulation-ready deformable object from multiple drape shapes under gravity. Starting from a trivial initial guess, our method optimizes both the rest shape and the material parameters to register the mesh with observed multi-view point cloud data, deriving analytic gradients from the implicit function theorem. We further interleave the optimization with remeshing operators to ensure high mesh quality. Experiments on beam-recovery problems show that our optimizer can infer internal anisotropic material distributions and a large variation of rest shapes.
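
The implicit-function-theorem gradient the abstract alludes to has a compact form: if x*(p) minimizes an energy E(x, p), differentiating the optimality condition dE/dx = 0 gives dx*/dp = -(d2E/dx2)^{-1} (d2E/dx dp). A tiny numeric check on a quadratic energy, purely for illustration:

    import numpy as np

    # For E(x, p) = 0.5 * x^T A x - p * b^T x: the optimality condition is
    # A x - p b = 0, so x*(p) = p * inv(A) b, and d2E/(dx dp) = -b.
    A = np.array([[3.0, 0.5], [0.5, 2.0]])   # Hessian d2E/dx2
    b = np.array([1.0, 0.0])
    mixed = -b                               # mixed derivative d2E/(dx dp)

    dxdp_ift = -np.linalg.solve(A, mixed)    # IFT: -(Hessian)^{-1} @ mixed
    print(dxdp_ift, np.linalg.solve(A, b))   # the two agree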

Title: Residual Force Control for Agile Human Behavior Imitation and Extended Motion Synthesis

Speaker: Ye Yuan, CMU

Abstract: Reinforcement learning has shown great promise for synthesizing realistic human behaviors by learning humanoid control policies from motion capture data. However, it is still very challenging to reproduce sophisticated human skills like ballet dance, or to stably imitate long-term human behaviors with complex transitions. The main difficulty lies in the dynamics mismatch between the humanoid model and real humans. That is, motions of real humans may not be physically possible for the humanoid model. To overcome the dynamics mismatch, we propose a novel approach, residual force control (RFC), that augments a humanoid control policy by adding external residual forces into the action space. During training, the RFC-based policy learns to apply residual forces to the humanoid to compensate for the dynamics mismatch and better imitate the reference motion. Experiments on a wide range of dynamic motions demonstrate that our approach outperforms state-of-the-art methods in terms of convergence speed and the quality of learned motions. Notably, we showcase a physics-based virtual character empowered by RFC that can perform highly agile ballet dance moves such as pirouette, arabesque and jeté. Furthermore, we propose a dual-policy control framework, where a kinematic policy and an RFC-based policy work in tandem to synthesize multi-modal infinite-horizon human motions without any task guidance or user input. Our approach is the first humanoid control method that successfully learns from a large-scale human motion dataset (Human3.6M) and generates diverse long-term motions. 
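
The core action-space augmentation is easy to state in code. The sketch below assumes a hypothetical simulator interface (apply_joint_torques, apply_root_wrench, and step are stand-ins, not a real API), with a penalty discouraging the policy from leaning on the residual channel; it is schematic, not the authors’ implementation.

    import numpy as np

    def rfc_step(env, policy, obs, w_residual=1e-3):
        """One control step with a widened action space: joint torques plus a
        6-D residual wrench on the humanoid's root."""
        action = policy(obs)                    # shape: n_joints + 6
        torques, residual = action[:-6], action[-6:]
        env.apply_joint_torques(torques)
        env.apply_root_wrench(residual)         # the residual force channel
        next_obs, imitation_reward = env.step()
        # Penalize the residual so it compensates for the dynamics mismatch
        # rather than carrying the character outright.
        reward = imitation_reward - w_residual * float(np.sum(residual**2))
        return next_obs, reward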

2/12/21 – Opening Panel Discussion

How Does COVID-19 Affect Your Research?

Abstract: The COVID-19 pandemic has had an unprecedented impact on US academia and research enterprises. On the downside, university revenues have declined due to undergraduate enrollment drops and unstable external funding sources. Many traditional research activities have been suspended since last spring, especially in the STEM fields. This has also revealed limitations in collaboration and communication facilities and services. On the upside, however, the pandemic has served as a catalyst for increased use of and innovation in robotics, including autonomous devices for infection control, temperature taking, and movement tracking. In addition, social robotics and virtual avatars help people stay connected and reduce anxiety during quarantine. In this seminar, we invite four faculty members, Negar Mehr, Joao Ramos, Geir Dullerud, and Kris Hauser, to share their experiences with the challenges and opportunities brought by the pandemic.


Talks from Inaugural Semester:

The Robots are Coming – to your Farm!  Autonomous and Intelligent Robots in Unstructured Field Environments

Girish Chowdhary
Assistant Professor
Agricultural & Biological Engineering
Distributed Autonomous Systems Lab
November 22nd, 2019

Abstract: What if a team of collaborative autonomous robots grew your food for you? In this talk, I will discuss some key advances in robotics, machine learning, and autonomy that will one day enable teams of small robots to grow food for you in your backyard in a fundamentally more sustainable way than modern mega-farms. Teams of small aerial and ground robots could be a potential solution to many of the problems that modern agriculture faces. However, fully autonomous robots that operate without supervision for weeks, months, or entire growing seasons are not yet practical. I will discuss my group’s theoretical and practical work on the underlying challenging problems in autonomy, sensing, and learning. I will begin with our lightweight, compact, and highly autonomous field robot TerraSentia and its recent successes in high-throughput phenotyping for agriculture. I will also discuss new algorithms for enabling a team of robots to weed large agricultural farms autonomously under partial observability. These direct applications will then lead up to my group’s more fundamental work in reinforcement learning and adaptive control that we believe is necessary to usher in the next generation of autonomous field robots that operate in harsh, changing, and dynamic environments.

Bio: Girish Chowdhary is an assistant professor at the University of Illinois at Urbana-Champaign with the Coordinated Science Laboratory, and the director of the Distributed Autonomous Systems Laboratory at UIUC. At UIUC, Girish is affiliated with Agricultural and Biological Engineering, Aerospace Engineering, Computer Science, and Electrical Engineering. He holds a PhD (2010) from the Georgia Institute of Technology in Aerospace Engineering. He was a postdoc at the Laboratory for Information and Decision Systems (LIDS) of the Massachusetts Institute of Technology (2011-2013), and an assistant professor at Oklahoma State University’s Mechanical and Aerospace Engineering department (2013-2016). He also worked with the German Aerospace Center’s (DLR’s) Institute of Flight Systems for around three years (2003-2006). Girish’s ongoing research interest is in theoretical insights and practical algorithms for adaptive autonomy, with a particular focus on field robotics. He has authored over 90 peer-reviewed publications in various areas of adaptive control, robotics, and autonomy. On the practical side, Girish has led the development and flight-testing of over 10 research UAS platforms. UAS autopilots based on Girish’s work have been designed and flight-tested on six UASs, including by independent international institutions. Girish is an investigator on NSF, AFOSR, NASA, ARPA-E, and DOE grants. He is the winner of the Air Force Young Investigator Award, the Aerospace Guidance and Controls Systems Committee Dave Ward Memorial award, and several best paper awards, including a best systems paper award at RSS 2018 for his work on the agricultural robot TerraSentia. He is the co-founder of EarthSense Inc., working to make ultralight outdoor robotics a reality.

Student Talks:

Design and Control of a Quadrupedal Robot for Dynamic Locomotion

Yanran Ding
November 15th, 2019

Abstract: Legged animals have shown versatile mobility to traverse challenging terrains via a variety of well-coordinated dynamic motions. This remarkable mobility has inspired the development of many legged robots and associated research seeking dynamic legged locomotion in robots. This talk explores the design and control of a small-scale quadrupedal robot prototype for dynamic motions. Here we present a hardware-software co-design scheme for the proprioceptive actuator and a model predictive control (MPC) framework for a wide variety of dynamic motions.

Bio: Yanran Ding is a 4th-year Ph.D. student in the Mechanical Science and Engineering Department at the University of Illinois at Urbana-Champaign. He received his B.S. degree in Mechanical Engineering from the UM-SJTU Joint Institute, Shanghai Jiao Tong University, Shanghai, China in 2015. His research interests include the design of agile robotic systems and optimization-based control for legged robots to achieve dynamic motions. He was a Best Student Paper finalist at the International Conference on Intelligent Robots and Systems (IROS) 2017.

Adapt-to-Learn:  Policy Transfer in Reinforcement Learning

Girish Joshi
November 15th, 2019

Abstract: Efficient and robust policy transfer remains a key challenge in reinforcement learning. Policy transfer through warm initialization, imitation, or interaction over a large set of agents with randomized instances has been commonly applied to solve a variety of Reinforcement Learning (RL) tasks. However, this is far from how behavior transfer happens in the biological world: humans and animals are able to quickly adapt learned behaviors between similar tasks and learn new skills when presented with new situations. We introduce a principled mechanism that can “Adapt-to-Learn”, that is, adapt the source policy to learn to solve a target task with significant transition differences and uncertainties. We show through theory and experiments that our method leads to significantly reduced sample complexity when transferring policies between tasks.

Bio: Girish Joshi is a graduate student at the DASLab at UIUC working under Dr. Girish Chowdhary. Prior to joining UIUC, he completed his master’s at the Indian Institute of Science, Bangalore. His research interests are in sample-efficient policy transfer in RL, cross-domain skill transfer in RL, information-enabled adaptive control for cyber-physical systems, and Bayesian nonparametric approaches to adaptive control and decision making in non-stationary environments.

Designing Robots to Support Successful Aging: Potential and Challenges

Wendy Rogers
Professor, Kinesiology and Community Health
Human Factors and Aging Laboratory
November 8th, 2019

Abstract: There is much potential for robots to support older adults in their goal of successful aging with high quality of life.  However, for human-robot interactions to be successful, the robots must be designed with user needs, preferences, and attitudes in mind.  The Human Factors and Aging Laboratory (www.hfaging.org) is specifically oriented toward developing a fundamental understanding of aging and bringing that knowledge to bear on design issues important to the enjoyment, quality, and safety of everyday activities of older adults.  In this presentation, I will provide an overview of our research with robots: personal, social, telepresence.  We focus on the human side of human-robot interaction, answering questions such as, are older adults willing to interact with a robot?  What do they want the robot to do?  To look like?  How do they want to communicate with a robot?  Through research examples, I will illustrate the potential for robots to support successful aging as well as the challenges that remain for the design and widespread deployment of robots in this context.

Bio: Wendy A. Rogers, Ph.D. – Shahid and Ann Carlson Khan Professor of Applied Health Sciences at the University of Illinois Urbana-Champaign.  Her primary appointment is in the Department of Kinesiology and Community Health.  She also has an appointment in the Educational Psychology Department and is an affiliate faculty member of the Beckman Institute and the Illinois Informatics Institute. She received her B.A. from the University of Massachusetts – Dartmouth, and her M.S. (1989) and Ph.D. (1991) from the Georgia Institute of Technology.  She is a Certified Human Factors Professional (BCPE Certificate #1539). Her research interests include design for aging; technology acceptance; human-automation interaction; aging-in-place; human-robot interaction; aging with disabilities; cognitive aging; and skill acquisition and training.  She is the Director of the Health Technology Graduate Program; Program Director of CHART (Collaborations in Health, Aging, Research, and Technology); and Director of the Human Factors and Aging Laboratory (www.hfaging.org). Her research is funded by: the National Institutes of Health (National Institute on Aging) as part of the Center for Research and Education on Aging and Technology Enhancement (www.create-center.org); and the Department of Health and Human Services (National Institute on Disability, Independent Living, and Rehabilitation Research; NIDILRR) Rehabilitation Engineering Research Center on Technologies to Support Aging-in-Place for People with Long-term Disabilities (www.rercTechSAge.org).  She is a fellow of the American Psychological Association, the Gerontological Society of America, and the Human Factors and Ergonomics Society.

Student Talks:

Towards Soft Continuum Arms for Real World Applications

Naveen Kumar Uppalapati
November 1st, 2019

Abstract: Soft robots are gaining significant attention from the robotics community due to their adaptability, safety, lightweight construction, and cost-effective manufacturing. They have found use in manipulation, locomotion, and wearable devices. In manipulation, Soft Continuum Arms (SCAs) are used to explore uneven terrains, handle objects of different sizes, and interact safely with the environment. Current SCAs use a serial combination of multiple segments to achieve higher dexterity and a larger workspace. However, serial architectures lead to an increase in overall weight, hardware, and power requirements, thus limiting their use in real-world applications. In this talk, I will give insight into the design of compact and lightweight SCAs. The SCAs use pneumatically actuated Fiber Reinforced Elastomeric Enclosures (FREEs) as their building blocks. A single-section BR2 SCA design is shown to have greater dexterity and workspace than current state-of-the-art SCAs. I will present a hybrid between the soft arm and rigid links, known as the Variable Length Nested Soft (VaLeNS) arm, which was designed to obtain the attributes of stiffness modulation and force transfer. Finally, I will present a mobile robot prototype for a berry-picking application.

Bio: Naveen Kumar Uppalapati is a 6th-year Ph.D. student in the Dept. of Industrial and Enterprise Systems Engineering at the University of Illinois. He received his bachelor’s degree in Instrumentation and Control Engineering in 2013 from the National Institute of Technology, Tiruchirappalli, and his Master’s degree in Systems Engineering in 2016 from the University of Illinois. His research interests are the design and modeling of soft robots, sensor design, and controls.

Toward Human-like Teleoperated Robot Motion: Performance and Perception of a Choreography-inspired Method in Static and Dynamic Tasks for Rapid Pose Selection of Articulated Robots

Allison Bushman
November 1st, 2019

Abstract: In some applications, operators may want to create fluid, human-like motion on a remotely-operated robot, for example, a device used for remote telepresence in hostage negotiation. This paper examines two methods of controlling the pose of a Baxter robot via an Xbox One controller. The first method is a joint-by-joint (JBJ) method in which one joint of each limb is specified in sequence. The second method of control, named Robot Choreography Center (RCC), utilizes choreographic abstractions in order to simultaneously move multiple joints of the limb of the robot in a predictable manner. Thirty-eight users were asked to perform four tasks with each method. Success rate and duration of successfully completed tasks were used to analyze the performances of the participants. Analysis of the preferences of the users found that the joint-by-joint (JBJ) method was considered to be more precise, easier to use, safer, and more articulate, while the choreography-inspired (RCC) method of control was perceived as faster, more fluid, and more expressive. Moreover, performance data found that while both methods of control were over 80% successful for the two static tasks, the RCC method was an average of 11.85% more successful for the two more difficult, dynamic tasks. Future work will leverage this framework to investigate ideas of fluidity, expressivity, and human-likeness in robotic motion through online user studies with larger participant pools.

Bio: Allison Bushman is a second-year master’s student in the Dept. of Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign. She received her bachelor’s degree in mechanical engineering in 2014 from Yale University. She currently works in the RAD Lab to understand what parameters are necessary in deeming a movement as natural or fluid, particularly as it pertains to designing movement in robots.

Dynamic Synchronization of Human Operator and Humanoid Robot via Bilateral Feedback Teleoperation

Joao Ramos
Assistant Professor
Mechanical Science and Engineering
[MORE INFO]

Abstract:

Autonomous humanoid robots are still far from matching the sophistication and adaptability of human perception and motor control. To address this issue, I investigate the utilization of human whole-body motion to command a remote humanoid robot in real-time, while providing the operator with physical feedback from the robot’s actions. In this talk, I will present the challenges of virtually connecting the human operator with a remote machine in a way that allows the operator to utilize innate motor intelligence to control the robot’s interaction with the environment. I present pilot experiments in which an operator controls a humanoid robot to perform power manipulation tasks, such as swinging a firefighter axe to break a wall, and dynamic locomotion behaviors, such as walking and jumping.

Bio:

Joao Ramos is an Assistant Professor at the University of Illinois at Urbana-Champaign. He was previously a Postdoctoral Associate at the Biomimetic Robotics Laboratory at the Massachusetts Institute of Technology. He received a PhD from the Department of Mechanical Engineering at MIT in 2018. During his doctoral research, he developed teleoperation systems and strategies to dynamically control a humanoid robot utilizing human whole-body motion via bilateral feedback. His research focuses on the design and control of robotic systems that experience large forces and impacts, such as the MIT HERMES humanoid, a prototype platform for disaster response. Additionally, his research interests include human-machine interfaces, legged locomotion dynamics, and actuation systems.

Some Thoughts on Learning Reward Functions

Bradly Stadie
Post-Doctoral Researcher
Vector Institute in Toronto
[MORE INFO]

Abstract:

In reinforcement learning (RL), agents typically optimize a reward function to learn a desired behavior. In practice, crafting reward functions that produce intended behaviors is fiendishly difficult. Due to the curse of dimensionality, sparse rewards are typically too difficult to optimize without carefully chosen curricula. Meanwhile, dense reward functions often encourage unintended behaviors or present overly cumbersome optimization landscapes. To handle these problems, a vast body of work on reward function design has emerged. In this talk, we will recast the reward function design problem into a learning problem. Specifically, we will consider two new algorithms for automatically learning reward functions. First, in Evolved Policy Gradients (EPG), we will carefully consider the problem of meta-learning reward functions. Given a distribution of tasks, can we meta-learn a parameterized reward function that generalizes to new tasks? Does this learned reward allow the agent to solve new tasks more efficiently than our original hand-designed rewards? Second, in Learning Intrinsic Rewards as a Bi-Level Optimization Problem, we consider the problem of learning a more effective reward function in the single-task setting. By using Self-Tuning Networks and tricks from the hyper-parameter optimization literature, we develop an algorithm that produces a better optimization landscape for the agent to learn against. This better optimization landscape ultimately allows the agent to achieve superior performance on a variety of challenging locomotion tasks, when compared to simply learning against the original hand-designed reward.
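
The bi-level structure can be seen in a scalar toy problem: a learned-reward parameter phi is tuned so that a policy trained against the learned reward performs well on the true reward. Everything below is an illustrative caricature of the idea under those assumptions, not either paper’s algorithm.

    def bilevel_toy(n_outer=50, n_inner=20, lr=0.1):
        """True reward R(a) = -(a - 2)^2; learned reward r_phi(a) = -(a - phi)^2."""
        phi = 0.0                                  # learned-reward parameter
        theta = 0.0                                # policy parameter (action mean)
        for _ in range(n_outer):
            theta = 0.0                            # inner: re-train on r_phi
            for _ in range(n_inner):
                theta += lr * (-2.0) * (theta - phi)   # gradient ascent on r_phi
            # Outer: hypergradient of the true return through the trained
            # policy; here theta tracks phi, so dR/dphi ~ -2 * (theta - 2).
            phi += lr * (-2.0) * (theta - 2.0)
        return phi, theta

    print(bilevel_toy())   # both values approach the true optimum, 2.0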

Bio:

Bradly Stadie is a postdoctoral researcher at the Vector Institute in Toronto, where he works with Jimmy Ba’s group. Bradly’s overarching research goal is to develop algorithms that allow machines to learn as quickly and flexibly as humans do. At Toronto, Bradly has worked on a variety of topics including reward function learning, causal inference, neural network compression, and one-shot imitation learning. Earlier in his career, he provided one of the first algorithms for efficient exploration in deep reinforcement learning. Bradly completed his PhD under Pieter Abbeel at UC Berkeley. He received his undergraduate degree in mathematics from the University of Chicago.

Human-like Robots and Robotic Humans: Who Engineers Who?

Ben Grosser
Associate Professor, School of Art + Design / NCSA
[MORE INFO]

Abstract:

For a while now we’ve watched robots regularly take on new human tasks, especially those that can be made algorithmic such as vacuuming the floor. But the same time frame has also seen growing numbers of experiments with artistic robots, machines made by artists that take on aesthetic tasks of production in art or music. This talk will focus on the complicated relationship between humans and machines by looking at a number of artworks by the author. These will include not only art making robots that many perceive as increasingly human, but also code-based manipulations of popular software systems that reveal how humans are becoming increasingly robotic. In an era when machines act like humans and humans act like machines, who is engineering who?

Bio:

Artist Ben Grosser creates interactive experiences, machines, and systems that examine the cultural, social, and political effects of software. Recent exhibition venues include the Barbican Centre in London, Museum Kesselhaus in Berlin, Museu das Comunicações in Lisbon, and Galerie Charlot in Paris. His works have been featured in The New Yorker, Wired, The Atlantic, The Guardian, The Washington Post, El País, Libération, Süddeutsche Zeitung, and Der Spiegel. The Chicago Tribune called him the “unrivaled king of ominous gibberish.” Slate referred to his work as “creative civil disobedience in the digital age.” His artworks are regularly cited in books investigating the cultural effects of technology, including The Age of Surveillance Capitalism, The Metainterface, Facebook Society, and Technologies of Vision, as well as volumes centered on computational art practices such as Electronic Literature, The New Aesthetic and Art, and Digital Art. Grosser is an associate professor in the School of Art + Design, co-founder of the Critical Technology Studies Lab at NCSA, and a faculty affiliate with the Unit for Criticism and the School of Information Sciences. https://bengrosser.com


Student Talks:

Multi-Contact Humanoid Fall Mitigation In Cluttered Environment

Shihao Wang
October 4, 2019

Abstract:

Humanoid robots are expected to take on critical roles in the real world in the future. However, this dream cannot be achieved until a reliable fall mitigation strategy has been proposed and validated. Due to their high center of mass, humanoid robots have a high risk of falling to the ground. In this case, we would like the robot to utilize nearby environmental objects for fall protection. This presentation discusses my past work on robot fall protection, planning multi-contact strategies for fall recovery in cluttered environments. We believe that the capability of making use of the robot’s contact(s) provides an effective solution for fall protection.

Bio:

Shihao Wang is a 4th-year Ph.D. student in the Department of Mechanical Engineering and Materials Science at Duke University. Originally from China, he received his Bachelor’s degree in Mechanical Engineering from Beihang University in June 2014 and his Master’s degree in Mechanical Engineering from Cornell University in June 2015. After one year of research at Penn State University, he joined Duke in Fall 2016 for Ph.D. research focused on robotics, legged locomotion, dynamic walking, and controls.

Optimal Control Learning by Mixture of Experts

Gao Tang
October 4, 2019

Abstract:

Optimal control problems are critical to solve for task efficiency, but their nonconvexity limits their application, especially in time-critical tasks. Practical applications often require solving a parametric optimization problem, which is essentially a mapping from problem parameters to problem solutions. We study how to learn this mapping from offline precomputation. Due to the existence of local optima, the mapping may be discontinuous. This presentation discusses how to use a mixture-of-experts model to learn this discontinuous function accurately and achieve high reliability in robotic applications.
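
A minimal mixture-of-experts regressor of the kind described: a gate classifies which local solution branch a query parameter belongs to, and a per-branch expert regresses the solution, so the learned map never averages across a discontinuity. The sketch assumes training data already labeled by branch (e.g., by clustering the precomputed solutions); the class name and the scikit-learn model choices are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression, Ridge

    class MixtureOfExperts:
        """Gate picks a solution branch; one regressor per branch."""
        def fit(self, X, y, branch):            # branch: int label per sample
            branch = np.asarray(branch)
            self.gate = LogisticRegression(max_iter=1000).fit(X, branch)
            self.experts = {b: Ridge().fit(X[branch == b], y[branch == b])
                            for b in np.unique(branch)}
            return self

        def predict(self, X):
            b = self.gate.predict(X)            # hard gating: one expert per query
            y = np.empty(len(X))
            for label in np.unique(b):
                y[b == label] = self.experts[label].predict(X[b == label])
            return y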

Bio:

Gao Tang is a 4th-year Ph.D. student in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Before coming to UIUC, he spent 3 years as a Ph.D. student at Duke University. He received his Bachelor’s and Master’s degrees in Aerospace Engineering from Tsinghua University. His research is focused on numerical optimization and motion planning.

Bioinspired Aerial and Terrestrial Locomotion Strategies

[MORE INFO]

Aimy Wissa
Assistant Professor, Mechanical Science and Engineering
Bio-inspired Adaptive Morphology (BAM) Lab
Sept. 27th, 2019

Abstract:

Nature has evolved various locomotion (self-propulsion) and shape adaptation (morphing) strategies to survive and thrive in diverse and uncertain environments. Both in the air and on the ground, natural organisms continue to surpass engineered unmanned aerial and ground vehicles. Key strategies that Nature often exploits include local elasticity and adaptiveness to simplify global actuation and control. Unlike engineered systems, which rely heavily on active control, natural structures tend to also rely on reflexive and passive control. This diversity of control strategies yields multifunctional structures. Two examples of multifunctional structures will be presented in this talk, namely avian-inspired deployable structures and a click-beetle-inspired legless jumping mechanism.

The concept of wings as multifunctional adaptive structures will be discussed, and several flight devices found on birds’ wings will be introduced as a pathway towards revolutionizing the current design of small unmanned air vehicles. Experimental, analytical, and numerical results will be presented to discuss the efficacy of such devices. The discussion of avian-inspired devices will be followed by an introduction of a click-beetle-inspired jumping mechanism that exploits distributed springs to circumvent muscle limitations; such a mechanism can bypass the shortcomings of smart actuators, especially in small-scale robotics applications.

Student Talks:

CyPhyHouse: A programming, simulation, and deployment toolchain for heterogeneous distributed coordination

[MORE INFO]

Ritwika Ghosh
September 20th, 2019

Abstract:

Programming languages, libraries, and development tools have transformed the application development processes for mobile computing and machine learning. CyPhyHouse is a toolchain that aims to provide similar programming, debugging, and deployment benefits for distributed mobile robotic applications. Users can develop hardware-agnostic, distributed applications using the high-level, event-driven Koord programming language, without requiring expertise in controller design or distributed network protocols. I will talk about the CyPhyHouse toolchain: its design, its implementation, the challenges faced, and the lessons learned in the process.

Bio:

Ritwika Ghosh is a 6th-year PhD student in the Dept. of Computer Science at the University of Illinois. She received her Bachelor’s degree in Math and Computer Science from Chennai Mathematical Institute in India in 2013. Her research interests are Formal Methods, Programming Languages, and Distributed Systems.

Controller Synthesis Made Real: Reach-Avoid Specifications and Linear Dynamics

Chuchu Fan
September 20th, 2019
CSL Studio Rm 1232

Abstract:

The controller synthesis question asks whether an input can be generated for a given system (or a plant) so that it achieves a given specification. Algorithms for answering this question hold the promise of automating controller design. They have the potential to yield high-assurance systems that are correct-by-construction, and even negative answers to the question can convey insights about the unrealizability of specifications. There has been a resurgence of interest in controller synthesis, with the rise of powerful tools and compelling applications such as vehicle path planning, motion control, circuits design, and various other engineering areas. In this talk, I will introduce a novel approach relying on symbolic sensitivity analysis to synthesize provably correct controllers efficiently for large linear systems with reach-avoid specifications. Our solution uses a combination of an open-loop controller and a tracking controller, thereby reducing the problem to smaller tractable problems such as satisfiability over quantifier-free linear real arithmetic. I will also present RealSyn, a tool implementing the synthesis algorithm, which has been shown to scale to several high-dimensional systems with complex reach-avoid specifications.
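
In outline, the decomposition works as follows: the open-loop input drives a nominal trajectory, the precomputed tracking controller keeps the true state inside a tube around it, and the reach-avoid check is done on the nominal trajectory with obstacles bloated, and the goal shrunk, by the tube radius. A schematic sketch under those assumptions, not RealSyn itself:

    import numpy as np

    def check_reach_avoid(A, B, x0, u_openloop, goal_box, obstacles, tube_r):
        """Roll out the nominal trajectory x' = A x + B u under the open-loop
        inputs; the tracking controller (synthesized offline) keeps the true
        state within tube_r of it, so safety margins are adjusted by tube_r."""
        x = np.asarray(x0, dtype=float)
        for u in u_openloop:
            x = A @ x + B @ u
            for lo, hi in obstacles:            # avoid: boxes bloated by tube_r
                if np.all(x >= lo - tube_r) and np.all(x <= hi + tube_r):
                    return False
        lo, hi = goal_box                       # reach: goal box shrunk by tube_r
        return bool(np.all(x >= lo + tube_r) and np.all(x <= hi - tube_r))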

Bio:

Chuchu Fan is finishing up her Ph.D. in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. She will join the AeroAstro Department at MIT as an assistant professor in 2020. She received her Bachelor’s degree from Tsinghua University’s Department of Automation in 2013. Her research interests are in the areas of formal methods and control for safe autonomy.

Dancing With Robots: Questions About Composition with Natural and Artificial Bodies

[MORE INFO]

Amy LaViers – Robotics, Automation, and Dance (RAD) Lab
September 13th, 2019

Abstract:

The formulation of questions is a central yet non-specific activity: an answer can be sought through many modes of investigation, such as, scientific inquiry, research and development, or the creation of art.  This talk will outline guiding questions for the Robotics, Automation, and Dance (RAD) Lab, which are explored via artistic creation alongside research in robotics, each vein of inquiry informing the other, and, then, will focus on a few initial answers in the form of robot-augmented dances, digital spaces that track the motion of participants, artistic extensions to student engineering theses, and participatory performances that employ the audience’s own personal machines.  For example, guiding questions include: By what measure do robots outperform humans? By what measures do humans outperform robots?  How many ways can a human walk?  Is movement a continuous phenomenon?  Does it convey information?  What is the utility of dance?  What biases do new technologies hold?  What structures can reasonably be named “leg”, “arm”, “hand”, “wing” and the like? Why does dancing feel so different than programming?   What does it mean for two distinct bodies to move in unison?  How does it feel to move alongside a robot?  In order to frame these questions in an engineering context, this talk also presents an information-theoretic model of expression through motion, where artificial systems are modeled as a source communicating across a channel to a human receiver.

Bio:

Amy LaViers is an assistant professor in the Mechanical Science and Engineering Department at the University of Illinois at Urbana-Champaign (UIUC) and director of the Robotics, Automation, and Dance (RAD) Lab.  She is a recipient of a 2015 DARPA Young Faculty Award (YFA) and 2017 Director’s Fellowship.  Her teaching has been recognized on UIUC’s list of Teachers Ranked as Excellent By Their Students, with Outstanding distinction.  Her choreography has been presented internationally, including at Merce Cunningham’s studios, Joe’s Pub at the Public Theater, the Ferst Center for the Arts, and the Ammerman Center for Arts and Technology.  She is a co-founder of two startup companies:  AE Machines, Inc, an automation software company that won Product Design of the Year at the 4th Revolution Awards in Chicago in 2017 and was a finalist for Robot of the Year at Station F in Paris in 2018, and caali, LLC, an embodied media company that is developing an interactive installation at the Microsoft Technology Center in Chicago.  She completed a two-year Certification in Movement Analysis (CMA) in 2016 at the Laban/Bartenieff Institute of Movement Studies (LIMS).  Prior to UIUC she held a position as an assistant professor in systems and information engineering at the University of Virginia.  She completed her Ph.D. in electrical and computer engineering at Georgia Tech with a dissertation that included a live performance exploring stylized motion.  Her research began in her undergraduate thesis at Princeton University where she earned a certificate in dance and a degree in mechanical and aerospace engineering.