The Illinois Robotics Group is proud to host the Robotics@Illinois Seminar Series. These seminars provide a diverse lineup of speakers reflecting the interdisciplinary nature of the field of robotics.
We host speakers from departments across campus conducting robotics research at Illinois. Talks alternate between professors and students each week, with occasional demonstrations afterwards in the Intelligent Robotics Lab.
Talks are held in the CSL Studio conference room (1232) just west of the Intelligent Robotics Lab facilities.
Talks Have Been Postponed for the Semester.
Please Feel Free to Recommend Speakers for Future Talks.
If you have comments or questions on the IRL Seminar Series, please feel free to contact John M. Hart, email@example.com, Manager and Laboratory Coordinator of the CSL Shared Robotics Spaces.
See [MORE INFO] for links to speaker webpages:
The Robots are Coming – to your Farm! Autonomous and Intelligent Robots in Unstructured Field Environments
Agricultural & Biological Engineering
Distributed Autonomous Systems Lab
November 22nd, 2019
Abstract: What if a team of collaborative autonomous robots grew your food for you? In this talk, I will discuss some key advances in robotics, machine learning, and autonomy that will one day enable teams of small robots to grow food for you in your backyard in a fundamentally more sustainable way than modern mega-farms. Teams of small aerial and ground robots could be a potential solution to many of the problems that modern agriculture faces. However, fully autonomous robots that operate without supervision for weeks, months, or entire growing seasons are not yet practical. I will discuss my group’s theoretical and practical work on the underlying challenging problems in autonomy, sensing, and learning. I will begin with our lightweight, compact, and highly autonomous field robot TerraSentia and its recent successes in high-throughput phenotyping for agriculture. I will also discuss new algorithms for enabling a team of robots to weed large agricultural farms autonomously under partial observability. These direct applications will then lead up to my group’s more fundamental work in reinforcement learning and adaptive control, which we believe is necessary to usher in the next generation of autonomous field robots that operate in harsh, changing, and dynamic environments.
Bio: Girish Chowdhary is an assistant professor at the University of Illinois at Urbana-Champaign with the Coordinated Science Laboratory, and the director of the Distributed Autonomous Systems laboratory at UIUC. At UIUC, Girish is affiliated with Agricultural and Biological Engineering, Aerospace Engineering, Computer Science, and Electrical Engineering. He holds a PhD (2010) from Georgia Institute of Technology in Aerospace Engineering. He was a postdoc at the Laboratory for Information and Decision Systems (LIDS) of the Massachusetts Institute of Technology (2011-2013), and an assistant professor at Oklahoma State University’s Mechanical and Aerospace Engineering department (2013-2016). He also worked with the German Aerospace Center’s (DLR’s) Institute of Flight Systems for around three years (2003-2006). Girish’s ongoing research interest is in theoretical insights and practical algorithms for adaptive autonomy, with a particular focus on field robotics. He has authored over 90 peer-reviewed publications in various areas of adaptive control, robotics, and autonomy. On the practical side, Girish has led the development and flight-testing of over 10 research UAS platforms. UAS autopilots based on Girish’s work have been designed and flight-tested on six UASs, including by independent international institutions. Girish is an investigator on NSF, AFOSR, NASA, ARPA-E, and DOE grants. He is the winner of the Air Force Young Investigator Award, the Aerospace Guidance and Controls Systems Committee Dave Ward Memorial award, and several best paper awards, including a best systems paper award at RSS 2018 for his work on the agricultural robot TerraSentia. He is the co-founder of EarthSense Inc., working to make ultralight outdoor robotics a reality.
Design and Control of a Quadrupedal Robot for Dynamic Locomotion
November 15th, 2019
Abstract: Legged animals have shown versatile mobility, traversing challenging terrains via a variety of well-coordinated dynamic motions. This remarkable mobility has inspired the development of many legged robots and associated research seeking dynamic legged locomotion. This talk explores the design and control of a small-scale quadrupedal robot prototype for dynamic motions. We present a hardware-software co-design scheme for the proprioceptive actuator and a model predictive control (MPC) framework for a wide variety of dynamic motions.
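To make the MPC idea in the abstract concrete, here is a deliberately simplified sketch (my own illustration, not the quadruped's actual whole-body controller): one MPC step for a double-integrator model of the body's position and velocity, with the unconstrained finite-horizon quadratic program solved in closed form.

```python
import numpy as np

# One step of a toy MPC problem for a double integrator (position,
# velocity). A deliberately simplified sketch, not the robot's
# actual whole-body MPC.
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])      # discrete-time dynamics
B = np.array([[0.0],
              [dt]])            # input enters through acceleration
N = 10                          # prediction horizon
x0 = np.zeros(2)                # current state
p_ref = 1.0                     # desired position over the horizon

# Stack predicted positions as a linear map of the input sequence u,
# then solve the unconstrained finite-horizon QP in closed form.
Phi = np.zeros((N, N))          # Phi[k, j] = d(pos at step k+1)/d(u_j)
free = np.zeros(N)              # position response to x0 with u = 0
for k in range(N):
    free[k] = (np.linalg.matrix_power(A, k + 1) @ x0)[0]
    for j in range(k + 1):
        Phi[k, j] = (np.linalg.matrix_power(A, k - j) @ B)[0, 0]

R = 0.01                        # control-effort weight
target = p_ref - free
u = np.linalg.solve(Phi.T @ Phi + R * np.eye(N), Phi.T @ target)
u_now = u[0]                    # apply the first input, then re-plan
```

In a receding-horizon loop, only `u_now` is applied before the problem is re-solved from the new state; real controllers add constraints (e.g. friction cones, torque limits) and therefore use a QP solver rather than this closed form.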
Bio: Yanran Ding is a fourth-year Ph.D. student in the Mechanical Science and Engineering Department at the University of Illinois at Urbana-Champaign. He received his B.S. degree in Mechanical Engineering from the UM-SJTU Joint Institute, Shanghai Jiao Tong University, Shanghai, China in 2015. His research interests include the design of agile robotic systems and optimization-based control for legged robots to achieve dynamic motions. He was a Best Student Paper finalist at the International Conference on Intelligent Robots and Systems (IROS) 2017.
Adapt-to-Learn: Policy Transfer in Reinforcement Learning
November 15th, 2019
Abstract: Efficient and robust policy transfer remains a key challenge in reinforcement learning. Policy transfer through warm initialization, imitation, or interaction over a large set of agents with randomized instances has been commonly applied to solve a variety of Reinforcement Learning (RL) tasks. However, this is far from how behavior transfer happens in the biological world: humans and animals are able to quickly adapt learned behaviors between similar tasks and learn new skills when presented with new situations. We introduce a principled mechanism that can “Adapt-to-Learn”, that is, adapt the source policy to learn to solve a target task with significant transition differences and uncertainties. We show through theory and experiments that our method leads to significantly reduced sample complexity when transferring policies between tasks.
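The core intuition of transfer, adapting a source policy rather than learning from scratch, can be illustrated with a deliberately tiny example. The quadratic objective below is a stand-in I am using for an RL return; this is a caricature of warm-started optimization, not the Adapt-to-Learn algorithm itself.

```python
# Toy illustration of policy transfer: fine-tuning a parameter learned
# on a source task converges on a nearby target task in fewer gradient
# steps than starting from scratch.

def steps_to_converge(theta, target, lr=0.1, tol=1e-3):
    """Gradient descent on the stand-in objective J(theta) = (theta - target)**2;
    returns the number of steps until |theta - target| <= tol."""
    steps = 0
    while abs(theta - target) > tol:
        grad = 2.0 * (theta - target)
        theta -= lr * grad
        steps += 1
    return steps

source_optimum = 4.0   # parameter learned on the source task
target_optimum = 5.0   # optimum of a nearby target task

from_scratch = steps_to_converge(0.0, target_optimum)
transferred = steps_to_converge(source_optimum, target_optimum)
```

Because the source and target optima are close, the warm start begins with a much smaller error and `transferred` comes out well below `from_scratch`; the contribution of the talk is doing this adaptation in a principled way for policies with real transition differences.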
Bio: Girish Joshi is a graduate student in the Distributed Autonomous Systems Lab at UIUC, working under Dr. Girish Chowdhary. Prior to joining UIUC, he earned his master’s degree at the Indian Institute of Science, Bangalore. His research interests are in sample-efficient policy transfer in RL, cross-domain skill transfer in RL, information-enabled adaptive control for cyber-physical systems, and Bayesian nonparametric approaches to adaptive control and decision making in non-stationary environments.
Designing Robots to Support Successful Aging: Potential and Challenges
Professor, Kinesiology and Community Health
Human Factors and Aging Laboratory
November 8th, 2019
Abstract: There is much potential for robots to support older adults in their goal of successful aging with high quality of life. However, for human-robot interactions to be successful, the robots must be designed with user needs, preferences, and attitudes in mind. The Human Factors and Aging Laboratory (www.hfaging.org) is specifically oriented toward developing a fundamental understanding of aging and bringing that knowledge to bear on design issues important to the enjoyment, quality, and safety of everyday activities of older adults. In this presentation, I will provide an overview of our research with personal, social, and telepresence robots. We focus on the human side of human-robot interaction, answering questions such as: Are older adults willing to interact with a robot? What do they want the robot to do? To look like? How do they want to communicate with a robot? Through research examples, I will illustrate the potential for robots to support successful aging as well as the challenges that remain for the design and widespread deployment of robots in this context.
Bio: Wendy A. Rogers, Ph.D. – Shahid and Ann Carlson Khan Professor of Applied Health Sciences at the University of Illinois Urbana-Champaign. Her primary appointment is in the Department of Kinesiology and Community Health. She also has an appointment in the Educational Psychology Department and is an affiliate faculty member of the Beckman Institute and the Illinois Informatics Institute. She received her B.A. from the University of Massachusetts – Dartmouth, and her M.S. (1989) and Ph.D. (1991) from the Georgia Institute of Technology. She is a Certified Human Factors Professional (BCPE Certificate #1539). Her research interests include design for aging; technology acceptance; human-automation interaction; aging-in-place; human-robot interaction; aging with disabilities; cognitive aging; and skill acquisition and training. She is the Director of the Health Technology Graduate Program; Program Director of CHART (Collaborations in Health, Aging, Research, and Technology); and Director of the Human Factors and Aging Laboratory (www.hfaging.org). Her research is funded by: the National Institutes of Health (National Institute on Aging) as part of the Center for Research and Education on Aging and Technology Enhancement (www.create-center.org); and the Department of Health and Human Services (National Institute on Disability, Independent Living, and Rehabilitation Research; NIDILRR) Rehabilitation Engineering Research Center on Technologies to Support Aging-in-Place for People with Long-term Disabilities (www.rercTechSAge.org). She is a fellow of the American Psychological Association, the Gerontological Society of America, and the Human Factors and Ergonomics Society.
Towards Soft Continuum Arms for Real World Applications
Naveen Kumar Uppalapati
November 1st, 2019
Abstract: Soft robots are gaining significant attention from the robotics community due to their adaptability, safety, lightweight construction, and cost-effective manufacturing. They have found use in manipulation, locomotion, and wearable devices. In manipulation, Soft Continuum Arms (SCAs) are used to explore uneven terrains, handle objects of different sizes, and interact safely with the environment. Current SCAs use a serial combination of multiple segments to achieve higher dexterity and a larger workspace. However, the serial architecture increases overall weight, hardware, and power requirements, limiting their use in real-world applications. In this talk, I will give insight into the design of compact and lightweight SCAs. The SCAs use pneumatically actuated Fiber Reinforced Elastomeric Enclosures (FREEs) as their building blocks. A single-section BR2 SCA design is shown to have greater dexterity and workspace than current state-of-the-art SCAs. I will present a hybrid between the soft arm and rigid links known as the Variable Length Nested Soft (VaLeNS) arm, which was designed to obtain the attributes of stiffness modulation and force transfer. Finally, I will present a mobile robot prototype for a berry-picking application.
Bio: Naveen Kumar Uppalapati is a 6th-year Ph.D. student in the Dept. of Industrial and Enterprise Systems Engineering at the University of Illinois. He received his bachelor’s degree in Instrumentation and Control Engineering in 2013 from the National Institute of Technology, Tiruchirappalli, and his master’s degree in Systems Engineering in 2016 from the University of Illinois. His research interests are the design and modeling of soft robots, sensor design, and controls.
Toward Human-like Teleoperated Robot Motion: Performance and Perception of a Choreography-inspired Method in Static and Dynamic Tasks for Rapid Pose Selection of Articulated Robots
November 1st, 2019
Abstract: In some applications, operators may want to create fluid, human-like motion on a remotely-operated robot, for example, a device used for remote telepresence in hostage negotiation. This paper examines two methods of controlling the pose of a Baxter robot via an Xbox One controller. The first method is a joint-by-joint (JBJ) method in which one joint of each limb is specified in sequence. The second method of control, named Robot Choreography Center (RCC), utilizes choreographic abstractions in order to simultaneously move multiple joints of the limb of the robot in a predictable manner. Thirty-eight users were asked to perform four tasks with each method. Success rate and duration of successfully completed tasks were used to analyze the performances of the participants. Analysis of the preferences of the users found that the joint-by-joint (JBJ) method was considered to be more precise, easier to use, safer, and more articulate, while the choreography-inspired (RCC) method of control was perceived as faster, more fluid, and more expressive. Moreover, performance data found that while both methods of control were over 80% successful for the two static tasks, the RCC method was an average of 11.85% more successful for the two more difficult, dynamic tasks. Future work will leverage this framework to investigate ideas of fluidity, expressivity, and human-likeness in robotic motion through online user studies with larger participant pools.
Bio: Allison Bushman is a second-year master’s student in the Dept. of Mechanical Science and Engineering at the University of Illinois at Urbana-Champaign. She received her bachelor’s degree in mechanical engineering in 2014 from Yale University. She currently works in the RAD Lab to understand what parameters are necessary in deeming a movement natural or fluid, particularly as it pertains to designing movement in robots.
Dynamic Synchronization of Human Operator and Humanoid Robot via Bilateral Feedback Teleoperation
Autonomous humanoid robots are still far from matching the sophistication and adaptability of human perception and motor control. To address this issue, I investigate the use of human whole-body motion to command a remote humanoid robot in real time, while providing the operator with physical feedback from the robot’s actions. In this talk, I will present the challenges of virtually connecting the human operator with a remote machine in a way that allows the operator to use innate motor intelligence to control the robot’s interaction with the environment. I will present pilot experiments in which an operator controls a humanoid robot to perform power manipulation tasks, such as swinging a firefighter axe to break a wall, and dynamic locomotion behaviors, such as walking and jumping.
Joao Ramos is an Assistant Professor at the University of Illinois at Urbana-Champaign. He was previously a Postdoctoral Associate at the Biomimetic Robotics Laboratory at the Massachusetts Institute of Technology. He received a PhD from the Department of Mechanical Engineering at MIT in 2018. During his doctoral research, he developed teleoperation systems and strategies to dynamically control a humanoid robot using human whole-body motion via bilateral feedback. His research focuses on the design and control of robotic systems that experience large forces and impacts, such as the MIT HERMES humanoid, a prototype platform for disaster response. Additionally, his research interests include human-machine interfaces, legged locomotion dynamics, and actuation systems.
Some Thoughts on Learning Reward Functions
Vector Institute in Toronto
In reinforcement learning (RL), agents typically optimize a reward function to learn a desired behavior. In practice, crafting reward functions that produce intended behaviors is fiendishly difficult. Due to the curse of dimensionality, sparse rewards are typically too difficult to optimize without carefully chosen curricula. Meanwhile, dense reward functions often encourage unintended behaviors or present overly cumbersome optimization landscapes. To handle these problems, a vast body of work on reward function design has emerged. In this talk, we will recast the reward function design problem into a learning problem. Specifically, we will consider two new algorithms for automatically learning reward functions. First, in Evolved Policy Gradients (EPG), we will carefully consider the problem of meta-learning reward functions. Given a distribution of tasks, can we meta-learn a parameterized reward function that generalizes to new tasks? Does this learned reward allow the agent to solve new tasks more efficiently than our original hand-designed rewards? Second, in Learning Intrinsic Rewards as a Bi-Level Optimization Problem, we consider the problem of learning a more effective reward function in the single-task setting. By using Self-Tuning Networks and tricks from the hyper-parameter optimization literature, we develop an algorithm that produces a better optimization landscape for the agent to learn against. This better optimization landscape ultimately allows the agent to achieve superior performance on a variety of challenging locomotion tasks, when compared to simply learning against the original hand-designed reward.
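The bi-level framing can be caricatured in a few lines: an outer loop scores candidate reward-shaping parameters by how well an inner, budget-limited learner does on the true objective. The grid search and quadratic task below are my own illustrative stand-ins, not either of the algorithms from the talk.

```python
# Toy rendering of reward design as a learning problem: an outer loop
# searches over a shaping coefficient, scoring each candidate by how
# well the *inner* learner does on the true task reward.

def inner_learn(shaping, steps=3, lr=0.3):
    """Budget-limited inner learner: a few gradient-ascent steps on the
    shaped reward r(a) = -(a - 3)**2 + shaping * a, starting from a = 0."""
    a = 0.0
    for _ in range(steps):
        grad = -2.0 * (a - 3.0) + shaping
        a += lr * grad
    return a

def true_return(a):
    # The task reward we actually care about (optimum at a = 3).
    return -(a - 3.0) ** 2

# Outer loop: pick the shaping coefficient whose inner learner earns
# the best true return within its step budget.
candidates = [-1.0, -0.5, 0.0, 0.5, 1.0]
best = max(candidates, key=lambda s: true_return(inner_learn(s)))
```

Because the inner learner only gets three gradient steps, a nonzero shaping term that pulls it toward the optimum faster wins the outer search, even though the unshaped reward has the same optimum; that is the sense in which a learned reward can yield a better optimization landscape than the hand-designed one.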
Bradly Stadie is a postdoctoral researcher at the Vector Institute in Toronto, where he works with Jimmy Ba’s group. Bradly’s overarching research goal is to develop algorithms that allow machines to learn as quickly and flexibly as humans do. At Toronto, Bradly has worked on a variety of topics including reward function learning, causal inference, neural network compression, and one-shot imitation learning. Earlier in his career, he provided one of the first algorithms for efficient exploration in deep reinforcement learning. Bradly completed his PhD under Pieter Abbeel at UC Berkeley. He received his undergraduate degree in mathematics from the University of Chicago.
Human-like Robots and Robotic Humans: Who Engineers Who?
Associate Professor, School of Art + Design / NCSA
For a while now we’ve watched robots regularly take on new human tasks, especially those that can be made algorithmic such as vacuuming the floor. But the same time frame has also seen growing numbers of experiments with artistic robots, machines made by artists that take on aesthetic tasks of production in art or music. This talk will focus on the complicated relationship between humans and machines by looking at a number of artworks by the author. These will include not only art making robots that many perceive as increasingly human, but also code-based manipulations of popular software systems that reveal how humans are becoming increasingly robotic. In an era when machines act like humans and humans act like machines, who is engineering who?
Artist Ben Grosser creates interactive experiences, machines, and systems that examine the cultural, social, and political effects of software. Recent exhibition venues include the Barbican Centre in London, Museum Kesselhaus in Berlin, Museu das Comunicações in Lisbon, and Galerie Charlot in Paris. His works have been featured in The New Yorker, Wired, The Atlantic, The Guardian, The Washington Post, El País, Libération, Süddeutsche Zeitung, and Der Spiegel. The Chicago Tribune called him the “unrivaled king of ominous gibberish.” Slate referred to his work as “creative civil disobedience in the digital age.” His artworks are regularly cited in books investigating the cultural effects of technology, including The Age of Surveillance Capitalism, The Metainterface, Facebook Society, and Technologies of Vision, as well as volumes centered on computational art practices such as Electronic Literature, The New Aesthetic and Art, and Digital Art. Grosser is an associate professor in the School of Art + Design, co-founder of the Critical Technology Studies Lab at NCSA, and a faculty affiliate with the Unit for Criticism and the School of Information Sciences. https://bengrosser.com
Multi-Contact Humanoid Fall Mitigation In Cluttered Environment
October 4th, 2019
Humanoid robots are expected to take on critical roles in the real world in the future. However, that vision cannot be realized until reliable fall-mitigation strategies have been proposed and validated. Because of their high center of mass, humanoid robots are at constant risk of falling to the ground. In such cases, we would like the robot to use nearby objects in the environment for fall protection. This presentation discusses my past work on planning multi-contact strategies for fall recovery in cluttered environments. We believe the ability to exploit the robot’s contacts offers an effective solution for fall protection.
Shihao Wang is a fourth-year Ph.D. student in the Department of Mechanical Engineering and Materials Science at Duke University. Originally from China, he received his bachelor’s degree in Mechanical Engineering from Beihang University in June 2014 and his master’s degree in Mechanical Engineering from Cornell University in June 2015. After a year of research at Penn State University, he joined Duke in Fall 2016 for Ph.D. research focused on robotics, legged locomotion, dynamic walking, and controls.
Optimal Control Learning by Mixture of Experts
October 4th, 2019
Optimal control problems are critical to solve for task efficiency. However, their nonconvexity limits their application, especially in time-critical tasks. Practical applications often require solving a parametric optimization problem, which is essentially a mapping from problem parameters to problem solutions. We study how to learn this mapping from offline precomputation. Because of local optima, the mapping may be discontinuous. This presentation discusses how to use a mixture-of-experts model to learn this discontinuous function accurately and achieve high reliability in robotic applications.
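Why a mixture of experts helps with a discontinuous parameter-to-solution mapping can be seen in a toy regression. In this sketch (my own illustration; the gate is hand-set rather than trained, unlike the talk's method), a single linear model must smooth over a jump that two region-specific experts fit exactly.

```python
import numpy as np

# Toy discontinuous mapping: two linear branches with a jump at x = 0,
# mimicking a parametric optimizer that switches between local optima.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = np.where(x < 0.0, -1.0 + 0.1 * x, 1.0 + 0.1 * x)

# Single global linear fit: forced to smooth over the jump.
A = np.stack([x, np.ones_like(x)], axis=1)
w_global, *_ = np.linalg.lstsq(A, y, rcond=None)
err_global = np.max(np.abs(A @ w_global - y))

# Two experts, one per region, selected by a (here, known) gate.
def fit(mask):
    w, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
    return w

w_left, w_right = fit(x < 0.0), fit(x >= 0.0)
pred = np.where(x < 0.0, A @ w_left, A @ w_right)
err_moe = np.max(np.abs(pred - y))
```

Each branch is exactly linear, so the per-region experts recover it to machine precision while the global fit is stuck with a large worst-case error near the jump; learning the gate from data is the part the talk addresses.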
Gao Tang is a fourth-year Ph.D. student in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Before coming to UIUC, he spent three years as a Ph.D. student at Duke University. He received his bachelor’s and master’s degrees in Aerospace Engineering from Tsinghua University. His research is focused on numerical optimization and motion planning.
Bioinspired Aerial and Terrestrial Locomotion Strategies
Assistant Professor, Mechanical Science and Engineering
Bio-inspired Adaptive Morphology (BAM) Lab
September 27th, 2019
Nature has evolved various locomotion (self-propulsion) and shape adaptation (morphing) strategies to survive and thrive in diverse and uncertain environments. Both in air and on the ground, natural organisms continue to surpass engineered unmanned aerial and ground vehicles. Key strategies that Nature often exploits include local elasticity and adaptiveness to simplify global actuation and control. Unlike engineered systems, which rely heavily on active control, natural structures tend to rely also on reflexive and passive control. This diversity of control strategies yields multifunctional structures. Two examples of multifunctional structures will be presented in this talk, namely avian-inspired deployable structures and a click beetle-inspired legless jumping mechanism.
The concept of wings as multifunctional adaptive structures will be discussed, and several flight devices found on birds’ wings will be introduced as a pathway towards revolutionizing the current design of small unmanned air vehicles. Experimental, analytical, and numerical results will be presented to discuss the efficacy of such devices. The discussion of avian-inspired devices will be followed by an introduction of a click beetle-inspired jumping mechanism that exploits distributed springs to circumvent muscle limitations; such a mechanism can bypass shortcomings of smart actuators, especially in small-scale robotics applications.
CyPhyHouse: A programming, simulation, and deployment toolchain for heterogeneous distributed coordination
September 20th, 2019
Programming languages, libraries, and development tools have transformed the application development processes for mobile computing and machine learning. CyPhyHouse is a toolchain that aims to provide similar programming, debugging, and deployment benefits for distributed mobile robotic applications. Users can develop hardware-agnostic, distributed applications using the high-level, event-driven Koord programming language, without requiring expertise in controller design or distributed network protocols. I will talk about the CyPhyHouse toolchain: its design, implementation, the challenges faced, and the lessons learned in the process.
Ritwika Ghosh is a 6th-year PhD student in the Department of Computer Science at the University of Illinois. She received her Bachelor’s degree in Math and Computer Science from Chennai Mathematical Institute in India in 2013. Her research interests are formal methods, programming languages, and distributed systems.
Controller Synthesis Made Real: Reach-Avoid Specifications and Linear Dynamics
September 20th, 2019
CSL Studio Rm 1232
The controller synthesis question asks whether an input can be generated for a given system (or a plant) so that it achieves a given specification. Algorithms for answering this question hold the promise of automating controller design. They have the potential to yield high-assurance systems that are correct-by-construction, and even negative answers to the question can convey insights about the unrealizability of specifications. There has been a resurgence of interest in controller synthesis, with the rise of powerful tools and compelling applications such as vehicle path planning, motion control, circuits design, and various other engineering areas. In this talk, I will introduce a novel approach relying on symbolic sensitivity analysis to synthesize provably correct controllers efficiently for large linear systems with reach-avoid specifications. Our solution uses a combination of an open-loop controller and a tracking controller, thereby reducing the problem to smaller tractable problems such as satisfiability over quantifier-free linear real arithmetic. I will also present RealSyn, a tool implementing the synthesis algorithm, which has been shown to scale to several high-dimensional systems with complex reach-avoid specifications.
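The open-loop-plus-tracking decomposition can be sketched on a double integrator: a nominal input sequence drives the reference trajectory, while a linear feedback term keeps the perturbed true state inside a shrinking tube around it. The gains below are hand-picked for illustration; they are not the synthesized, certified controllers from the talk.

```python
import numpy as np

# Toy open-loop + tracking decomposition on a double integrator
# (position, velocity), with an illustrative hand-picked gain.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
b = np.array([0.5 * dt * dt, dt])   # input vector (exact discretization)
k = np.array([8.0, 4.0])            # tracking gain, assumed stabilizing

T = 60
u_ol = np.zeros(T)                  # open-loop plan: accelerate, brake
u_ol[:20], u_ol[20:40] = 1.0, -1.0

x_ref = np.zeros(2)                 # nominal (reference) state
x = np.array([0.3, -0.2])           # true state starts perturbed
for t in range(T):
    u = u_ol[t] - k @ (x - x_ref)   # feedforward + feedback
    x = A @ x + b * u
    x_ref = A @ x_ref + b * u_ol[t]
```

The tracking error evolves as e_{t+1} = (A - b kᵀ) e_t, whose eigenvalues here have modulus 0.8, so the tube around the reference contracts geometrically; the synthesis algorithm in the talk computes such tubes rigorously and checks the reach-avoid specification against the inflated reference trajectory.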
Chuchu Fan is finishing up her Ph.D. in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. She will join the AeroAstro Department at MIT as an assistant professor in 2020. She received her Bachelor’s degree from the Department of Automation at Tsinghua University in 2013. Her research interests are in the areas of formal methods and control for safe autonomy.
Dancing With Robots: Questions About Composition with Natural and Artificial Bodies
Amy LaViers – Robotics, Automation, and Dance (RAD) Lab
September 13th, 2019
The formulation of questions is a central yet non-specific activity: an answer can be sought through many modes of investigation, such as scientific inquiry, research and development, or the creation of art. This talk will outline guiding questions for the Robotics, Automation, and Dance (RAD) Lab, which are explored via artistic creation alongside research in robotics, each vein of inquiry informing the other, and, then, will focus on a few initial answers in the form of robot-augmented dances, digital spaces that track the motion of participants, artistic extensions to student engineering theses, and participatory performances that employ the audience’s own personal machines. For example, guiding questions include: By what measure do robots outperform humans? By what measures do humans outperform robots? How many ways can a human walk? Is movement a continuous phenomenon? Does it convey information? What is the utility of dance? What biases do new technologies hold? What structures can reasonably be named “leg”, “arm”, “hand”, “wing” and the like? Why does dancing feel so different than programming? What does it mean for two distinct bodies to move in unison? How does it feel to move alongside a robot? In order to frame these questions in an engineering context, this talk also presents an information-theoretic model of expression through motion, where artificial systems are modeled as a source communicating across a channel to a human receiver.
Amy LaViers is an assistant professor in the Mechanical Science and Engineering Department at the University of Illinois at Urbana-Champaign (UIUC) and director of the Robotics, Automation, and Dance (RAD) Lab. She is a recipient of a 2015 DARPA Young Faculty Award (YFA) and a 2017 Director’s Fellowship. Her teaching has been recognized on UIUC’s list of Teachers Ranked as Excellent By Their Students, with Outstanding distinction. Her choreography has been presented internationally, including at Merce Cunningham’s studios, Joe’s Pub at the Public Theater, the Ferst Center for the Arts, and the Ammerman Center for Arts and Technology. She is a co-founder of two startup companies: AE Machines, Inc, an automation software company that won Product Design of the Year at the 4th Revolution Awards in Chicago in 2017 and was a finalist for Robot of the Year at Station F in Paris in 2018, and caali, LLC, an embodied media company that is developing an interactive installation at the Microsoft Technology Center in Chicago. She completed a two-year Certification in Movement Analysis (CMA) in 2016 at the Laban/Bartenieff Institute of Movement Studies (LIMS). Prior to UIUC she held a position as an assistant professor in systems and information engineering at the University of Virginia. She completed her Ph.D. in electrical and computer engineering at Georgia Tech with a dissertation that included a live performance exploring stylized motion. Her research began in her undergraduate thesis at Princeton University where she earned a certificate in dance and a degree in mechanical and aerospace engineering.