Events

Photo provided by Firefly Drone Shows

“When the department was founded in 1944, unmanned aerial vehicles, or drones, were the stuff of science fiction. Today, they’re used in all sorts of applications and represent one of the cutting-edge technologies our researchers and students are working on in the department,” said Greg Elliott, professor and interim head of the Department of Aerospace Engineering. “As we kick off a year-long celebration of aerospace at Illinois, we are particularly excited to be able to share this professional drone show with the community.”

The show is produced by Firefly Drone Shows and will be customized with a nod to the department’s 75th anniversary.

Much like the Champaign-Urbana fireworks display, the 15-minute light show can be viewed from nearby areas. Special permission has been granted for the public to park in designated lots south of the iHotel – specifically, the Caterpillar, Yahoo, and TekMill parking lots. A map of the parking lots is available online and on the Department of Aerospace Engineering’s Facebook page.

Previous Talks Below – Please see Info Links to learn more

Let’s be Flexible: Soft Haptics and Soft Robotics

Allison Okamura

[LINK]

Professor

Stanford University

 

Tuesday, September 17, 2019, 3:00 pm

190 Engineering Sciences Building

 

Abstract:

While traditional robotic manipulators are constructed from rigid links and localized joints, a new generation of robotic devices is soft, built from flexible, deformable materials. In this talk, I will describe several new systems that leverage softness to achieve novel shape control, provide a compliant interface to the human body, and access hard-to-reach locations. First, soft haptic devices change their shape and mechanical properties to allow medical simulation and new paradigms for human-computer interfaces. They can be worn by people or mounted on objects in the environment, as needed to assist human users. Second, superelastic materials and 3D-printed soft plastics enable surgical robots that can steer within the human body to reach targets inaccessible via the straight-line paths of traditional instruments. These surgical robots are designed on a patient- and procedure-specific basis to minimize invasiveness and facilitate low-cost interventions in special patient populations. Third, everting pneumatic tubes are used to create robots that can grow hundreds of times in length, steer around obstacles, and squeeze through tight spaces. These plant-inspired growing robots can achieve simple remote manipulation tasks, deliver payloads such as water or sensors in search and rescue scenarios, and shape themselves into useful structures.

 

Robot Motion Planning: From Algorithms to Systems

 

[LINK]

Kris Hauser

Associate Professor

University of Illinois at Urbana-Champaign

 

Aug 28, 2019   3:30 – 4:30 pm

2405 Siebel Center

 

Abstract:

The ability of a robot to plan its own motions is a critical component of intelligent behavior. The algorithmic questions that underpin motion planning have fascinated computer scientists for decades, due to their inherent computational complexity and the innumerable ways in which optimal decisions can be approximated. But recently, the application of robots to complex real-world scenarios, such as autonomous driving, is challenging the assumptions of classical theory. Where do models come from? What is the value of planning when plans invariably go wrong? Can optimality be adequately defined? Can we just throw machine learning at the problem and hope it disappears? This talk will outline a modern systems perspective of the role of planning, and how research in motion planning is evolving to suit this new role. Examples will be demonstrated on a diverse range of systems including warehouse automation, legged robot locomotion, and human-controlled robotic avatars.
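(For readers new to the area, the following is a minimal sketch of the sampling-based planners the abstract alludes to: a generic 2-D rapidly-exploring random tree, or RRT. It is illustrative only, not material from the talk; the workspace bounds, obstacle, step size, and goal bias are assumptions.)

import math
import random

STEP = 0.5        # maximum extension distance per iteration (assumed)
GOAL_TOL = 0.5    # distance at which the goal counts as reached (assumed)

def collision_free(p):
    # Placeholder obstacle: a unit-radius disk centered at (5, 5).
    return math.hypot(p[0] - 5.0, p[1] - 5.0) > 1.0

def rrt(start, goal, iters=5000):
    parent = {start: None}  # tree stored as child -> parent
    for _ in range(iters):
        # Sample a random point, biased 5% of the time toward the goal.
        sample = goal if random.random() < 0.05 else (
            random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        nearest = min(parent, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0.0:
            continue
        t = min(1.0, STEP / d)  # step from the nearest node toward the sample
        new = (nearest[0] + t * (sample[0] - nearest[0]),
               nearest[1] + t * (sample[1] - nearest[1]))
        if not collision_free(new):
            continue
        parent[new] = nearest
        if math.dist(new, goal) < GOAL_TOL:
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]  # start-to-goal sequence of waypoints
    return None  # no path found within the iteration budget

print(rrt((0.0, 0.0), (9.0, 9.0)))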

 

About the Speaker

Kris Hauser is an Associate Professor in the Department of Computer Science and the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. He received his PhD in Computer Science from Stanford University in 2008 and bachelor’s degrees in Computer Science and Mathematics from UC Berkeley in 2003, and he worked as a postdoctoral fellow at UC Berkeley. He was on the faculty at Indiana University from 2009 to 2014, where he started the Intelligent Motion Lab, and at Duke University from 2014 to 2019. Prof. Hauser is a recipient of a Stanford Graduate Fellowship, a Siebel Scholar Fellowship, a Best Paper Award at the IEEE International Conference on Humanoid Robots 2015, the NSF CAREER award, and two Amazon Research Awards. He also works as a consultant for the autonomous driving company Waymo.

 

Intelligent Perception for Dynamic, Tactical Unmanned System Operations

Dr. Anup Parikh

Sandia National Laboratories R&D

Coordinated Science Laboratory

Thursday, June 6th, 2019

11am – noon

CSL Auditorium B01

 

Abstract:

Dynamic, tactical, remote operations, in which unmanned systems must manage multiple, changing objectives in the presence of an uncertain, evolving, and potentially adversarial environment without the benefit of prior scripting, present an extreme challenge that necessitates a high degree of autonomy. While the ability to geometrically map and autonomously navigate environments is relatively mature, achieving operational goals autonomously requires the considerable additional technical leap of understanding surroundings abstractly and semantically. As biological systems have discovered, gaining this understanding efficiently requires not only that sensor observations be intelligently processed over time, but also that sensors be actively controlled to acquire the knowledge that best reduces the most important uncertainties. The challenges and results of several ongoing projects related to this “active perception” activity will be discussed. These include work in which interior environments are rapidly mapped with both geometric and semantic information, as well as work in which threats are detected, localized, distinguished from false alarms, and identified via autonomous sensor control and real-time classification. Future possibilities for these capabilities will be discussed. The talk will also touch on other R&D in unmanned system autonomy, advanced controls, and novel robotic systems underway at Sandia.
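(To make “actively controlling sensors to reduce uncertainty” concrete, here is a toy greedy rule that picks the sensing action expected to leave the least posterior entropy. All numbers are invented for illustration; a real system would derive the posteriors from a sensor model via Bayes’ rule.)

import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Belief over three hypotheses (e.g., candidate threat locations), and, for
# each sensing action, the possible posterior beliefs with their probabilities.
belief = [0.5, 0.3, 0.2]
actions = {
    "look_left":  [(0.6, [0.80, 0.10, 0.10]), (0.4, [0.05, 0.60, 0.35])],
    "look_right": [(0.5, [0.55, 0.25, 0.20]), (0.5, [0.45, 0.35, 0.20])],
}

def expected_posterior_entropy(outcomes):
    return sum(prob * entropy(post) for prob, post in outcomes)

best = min(actions, key=lambda a: expected_posterior_entropy(actions[a]))
print("entropy before sensing:", round(entropy(belief), 3))
print("best action:", best)  # the action expected to resolve the most uncertainty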

 

About the Speaker

Anup Parikh, Ph.D., is a Senior Member of R&D Staff in the High Consequence Automation and Robotics group at Sandia National Laboratories. His research interests include autonomy and robotics, with an emphasis on machine vision, probabilistic estimation, and numerical optimal control. He has been a technical lead on numerous projects, including the development of unmanned aerial swarms, coordinated mobile ground sensor platforms, and autonomous perception systems. Dr. Parikh received B.S. degrees in Mechanical and Aerospace Engineering (2012), an M.S. degree in Mechanical Engineering (2014), and a Ph.D. in Aerospace Engineering (2016) from the University of Florida, where his research focused on using Lyapunov methods to develop image-based estimation algorithms, as well as developing control-theoretic tools for switched dynamical systems. He has authored more than 20 publications.

[INFO LINK]

Making Aerial Robotics Safer in the Face of External Disturbances

Mark Mueller

Assistant Professor, University of California, Berkeley

Department of Aerospace Engineering

Apr 29, 2019   4:00 pm

103 Talbot Lab

 

Abstract:

We present some of our recent results on high-performance aerial robots. First, we present two novel mechanical vehicle configurations: the first is aimed at creating an aerial robot capable of withstanding external disturbances, and the second exploits unactuated internal degrees of freedom for passive shape-shifting, resulting in a simple, agile vehicle capable of squeezing through very narrow spatial gaps. Next, we will discuss results on vibration-based fault detection, exploiting only an onboard IMU to detect and isolate motor faults through vibrations, even if the rotation frequency of the motors is above the Nyquist frequency of the sampling. Finally, two results pertaining to energy efficiency are presented: one a mechanical modification, and the other an algorithm for online adaptation of a vehicle’s cruise speed.
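(A back-of-the-envelope illustration of the sub-Nyquist point, not the speaker’s algorithm: a vibration tone above half the sampling rate does not vanish, it folds to a predictable alias frequency, so a known motor-speed-to-alias mapping can still reveal which motor is faulty. The sampling rate and motor speeds below are assumed for illustration.)

# A tone at f_hz sampled at fs_hz appears at a folded "alias" frequency.
def alias_frequency(f_hz, fs_hz):
    f_folded = f_hz % fs_hz
    return min(f_folded, fs_hz - f_folded)

fs = 1000.0                    # assumed IMU sampling rate (Hz)
for rpm in (42000, 45000):     # assumed motor speeds, above Nyquist (500 Hz)
    f = rpm / 60.0             # vibration fundamental (Hz)
    print(f"{rpm} rpm -> {alias_frequency(f, fs):.0f} Hz in the IMU spectrum")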

Safety-Critical Control of Dynamic Robotic Systems

Professor Aaron Ames

California Institute of Technology

Center for Autonomy Distinguished Lecture Series

Apr 11, 2019   4:00 pm

CSL Auditorium (B02)

 

Abstract:

Science fiction has long promised a world of robotic possibilities: from humanoid robots in the home, to wearable robotic devices that restore and augment human capabilities, to swarms of autonomous robotic systems forming the backbone of the cities of the future, to robots enabling exploration of the cosmos. With the goal of ultimately achieving these capabilities on robotic systems, this talk will present a unified optimization-based control framework for realizing dynamic behaviors in an efficient, provably correct and safety-critical fashion. The application of these ideas will be demonstrated experimentally on a wide variety of robotic systems, including swarms of rolling and flying robots with guaranteed collision-free behavior, bipedal and humanoid robots capable of achieving dynamic walking and running behaviors that display the hallmarks of natural human locomotion, and robotic assistive devices aimed at restoring mobility. The ideas presented will be framed in the broader context of seeking autonomy on robotic systems with the goal of getting robots into the real world—a vision centered on combining able robotic bodies with intelligent artificial minds.
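(To give a concrete flavor of what an optimization-based, provably safe controller can look like, here is one common textbook formulation, a safety-filter quadratic program built on a control barrier function; this is an illustrative sketch, not necessarily the exact framework presented in the talk. For control-affine dynamics \(\dot{x} = f(x) + g(x)u\), a safe set \(C = \{x : h(x) \ge 0\}\), and a nominal controller \(k(x)\), solve at each state:

\[
u^*(x) = \arg\min_{u \in U} \tfrac{1}{2}\,\|u - k(x)\|^2
\quad \text{s.t.} \quad
\nabla h(x)^\top \big(f(x) + g(x)u\big) \ge -\alpha\big(h(x)\big),
\]

where \(\alpha\) is an extended class-\(\mathcal{K}\) function. The constraint renders \(C\) forward invariant, so the system stays safe, while the objective keeps the filtered input as close as possible to the nominal command.)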

Trustworthy Autonomy: Algorithms for Human-Robot Systems

[INFO LINK]

Katie Driggs-Campbell, PhD

Assistant Professor
Electrical and Computer Engineering Department
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign

Thursday, April 4th, 2019

3pm – 4pm

Siebel Center (Rm 2405)

 

Abstract:

Autonomous systems, such as self-driving cars, are becoming tangible technologies that will soon impact the human experience. However, the desirable impacts of autonomy are only achievable if the underlying algorithms can handle the unique challenges humans present: people tend to defy expected behaviors and do not conform to many of the standard assumptions made in robotics. To design safe, trustworthy autonomy, we must transform how intelligent systems interact with, influence, and predict human agents. In this talk, I’ll present robust driver-modeling methodologies for decision making and control in semi- and fully autonomous vehicles, along with new methods for validating stochastic systems.

 

About the Speaker

Katie Driggs-Campbell has been a member of the ECE Department at Illinois since January 2019. Her research focuses on exploring and uncovering structure in complex human-robot systems to create more intelligent, interactive autonomy. She draws from the fields of optimization, learning & AI, and control theory, applied to human-robot interaction and autonomous vehicles. Before arriving on campus, she was a postdoctoral research scholar in the Stanford Intelligent Systems Lab in the Department of Aeronautics and Astronautics. Katie received her MS and PhD in electrical engineering and computer science from the University of California, Berkeley.

Towards Human-Friendly Robots

Joohyung Kim

Research Scientist at Disney Research

Electrical and Computer Engineering

Thursday, March 28, 2019

10am – 11am

CSL Auditorium (B02)

 

Abstract:

The demand for robots that can work closely and interact physically with humans has been growing. Such robots can already be found guiding and entertaining people in places such as stores, airports, and amusement parks. However, despite advances in many robotic technologies, there are very few robotic applications that meet public expectations. To make robots helpful to humans in daily life, we need a better understanding of human motions and behaviors, better methods to express them through robots, and better designs for interacting with humans naturally and safely.

In this talk, I will present my work toward human-friendly robots by means of motion control, robot design, and human-robot interaction. In the first part of my talk, I will start with my study of biped walking, one of humans’ important and unique behaviors. The talk will cover various mechanisms and methods I developed for better walking, including the virtual gravity control method developed at Samsung Electronics. I will then describe my work at Disney on interactive robots that capture interesting features of animated characters. In the latter part, I will present my recent work using learning methods for robot control and discuss how it relates to my research direction.

 

About the Speaker

Joohyung Kim is currently a Research Scientist at Disney Research. His research focuses on design and control of humanoid robots, systems for motion learning in robot hardware, and safe human-robot interaction. He received his BSE and Ph.D. degrees in Electrical Engineering and Computer Science (EECS) from Seoul National University, Korea, in 2001 and 2012, respectively. Prior to joining Disney Research, he was a postdoctoral fellow in the Robotics Institute at Carnegie Mellon University for the DARPA Robotics Challenge in 2013. From 2009 to 2012, he was a Research Staff Member at Samsung Electronics, Korea, developing biped walking controllers for humanoid robots.

Human-in-the-Loop Deep Learning and Beyond

Dr. Sam Ghosal

Iowa State University

Coordinated Science Laboratory

CSL Rm 141

Monday, March 18, 2019   2:30 pm

Learning and Reasoning with Visual Correspondence in Time

Xiaolong Wang

Carnegie Mellon University

Coordinated Science Laboratory

CSL Auditorium (B02)

Mar 13, 2019   10:00 am

 

Abstract:

There is a famous tale in computer vision: a graduate student once asked the famous computer vision scientist Takeo Kanade, “What are the three most important problems in computer vision?” Takeo replied, “Correspondence, correspondence, correspondence!” Indeed, even the most commonly applied Convolutional Neural Networks (ConvNets) in computer vision are, internally, learning to find correspondence across objects or object parts. The way these networks learn this correspondence is via human annotations on millions of static images (e.g., humans will label images as dog, car, etc.). However, this is not how we humans learn. The visual system of an infant develops in a dynamic and continuous environment without using semantics until much later in life.

In this talk, I will argue that we need to go beyond images and exploit the massive amount of correspondence in videos. In videos, we have millions of pixels linked to each other by time. I will discuss how to learn correspondence from continuous observations in videos without any human supervision. Once the correspondence is given, it can be utilized as supervision in training ConvNets, eliminating the need for manual labels. Beyond supervision, I will show that capturing long-range correspondence is also key to video understanding. The effectiveness of my approaches will be demonstrated on tasks including object recognition, tracking, and action recognition.
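(As a schematic of how temporal correspondence can supervise itself, consider tracking a patch forward one frame and then backward: landing back where you started is a label-free training signal. This generic cycle-consistency sketch uses random stand-in embeddings and a nearest-neighbor matcher; it is not the speaker’s method.)

import numpy as np

def match(query, candidates):
    """Index of the candidate embedding closest to the query."""
    return int(np.linalg.norm(candidates - query, axis=1).argmin())

# Stand-in embeddings for 8 image patches in two consecutive frames.
rng = np.random.default_rng(0)
frame1 = rng.normal(size=(8, 16))
frame2 = rng.normal(size=(8, 16))

i = 3                          # start from patch i in frame 1
j = match(frame1[i], frame2)   # track forward into frame 2
k = match(frame2[j], frame1)   # track backward into frame 1

# With learned embeddings, any broken cycle (k != i) would incur a loss,
# pushing the representation toward reliable correspondence.
print("cycle closed" if k == i else "cycle broken -> training signal")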

 

About the Speaker

Xiaolong Wang is a final-year Ph.D. candidate at the Robotics Institute at Carnegie Mellon University, advised by Abhinav Gupta. His research interests focus on computer vision and machine learning. He has collaborated with research labs including Berkeley AI Research, Facebook AI Research, and the Allen Institute for Artificial Intelligence. He is a recipient of the Facebook Fellowship, the Nvidia Fellowship, and the Baidu Fellowship.

Service Robots for All

[INFO LINK]

Elaine Short

Postdoctoral Fellow

University of Texas at Austin

Department of Computer Science

2405 Siebel Center

Mar 5, 2019   10:00 am

 

Abstract:

Robots have the unique potential to help people, especially people with disabilities, in their daily lives.  However, providing continuous physical and social support in human environments requires new algorithmic approaches that are fast, adaptable, robust to real-world noise, and can handle unconstrained behavior from diverse users.

This talk will describe my work developing and studying algorithms that enable service robots to make effective use of computation to address the most critical elements of interaction with people, while being flexible enough to support the full richness of human behavior.  This includes developing fast, data-efficient algorithms for group interaction in noisy real-world environments, algorithms for temporal integration of task and social behavior, and understanding how algorithmic choices affect perceptions of robot agency.

Ultimately, these components can come together to create robots that people want to have around, not because they perfectly imitate human behavior, but because they seamlessly blend into the background while making people’s lives easier.  These robots will be capable of improving the lives of many people, but will be a life-changing benefit for people with disabilities for whom human assistance comes at a significant cost to privacy and autonomy.

 

About the Speaker

Elaine Schaertl Short is a postdoctoral fellow in the Socially Intelligent Machines Lab at the University of Texas at Austin.  She completed her PhD under the supervision of Prof. Maja Matarić in the Department of Computer Science at the University of Southern California (USC).  She received her MS in Computer Science from USC in 2012 and her BS in Computer Science from Yale University in 2010.   Elaine is a recipient of a National Science Foundation Graduate Research Fellowship, USC Provost’s Fellowship, and a Google Anita Borg Scholarship.  At USC, she was recognized for excellence in research, teaching, and service: she was awarded the Viterbi School of Engineering Merit Award and the Women in Science and Engineering (WiSE) Merit Award for Current Doctoral Students, as well as the Best Research Assistant Award, Best Teaching Assistant Award, and Service Award from the Department of Computer Science.  At Yale she was the recipient of the Saybrook College Mary Casner Prize.  Her research focuses on building algorithms that enable fast and robust assistive human-robot interaction in schools, homes, crowds and other natural environments.

Automation vs. Augmentation – Socially Assistive Robots and the Future of Work

 

[INFO LINK]

Maja Matarić

Professor of Computer Science, Neuroscience, and Pediatrics

University of Southern California

 

R.T. Chien Lecture

CSL Auditorium (B02)

Feb 27, 2019   3:00 pm

 

Abstract

Robotics has been driven by the desire to automate work, but automation raises concerns about the future of work. No less important are the implications for human health, as the science on longevity and resilience indicates that having the drive to work is key to health and wellness. However, robots are also becoming helpful without doing any physical work at all, by motivating and coaching us to do our own work, based on evidence from neuroscience and behavioral science demonstrating that human behavior is most strongly influenced by physically embodied social agents, including robots. The field of socially assistive robotics (SAR) focuses on developing intelligent socially interactive machines that provide assistance through social rather than physical means. This talk will describe research into embodiment, modeling and steering social dynamics, and long-term adaptation and learning for SAR. SAR systems have been validated with a variety of user populations, including stroke patients, children with autism spectrum disorders, and elderly individuals with Alzheimer’s and other forms of dementia. This talk will cover the short-, medium-, and long-term commercial applications, as well as the frontiers of SAR research.

What’s Hard in Self-Driving?

 

[Info Link]

Brandon Basso

Director of Autonomy

Uber Advanced Technologies Group (ATG)

 

Center for Autonomy Distinguished Lecture Series

Coordinated Science Laboratory

NCSA Auditorium

Feb 20, 2019   4:00 pm

 

Abstract

Self-driving cars have the potential to bring efficient, safe, and low-cost mobility to people around the world. At Uber ATG we’ve been working diligently to make this future a reality. We have collected millions of miles of autonomous driving data and have completed tens of thousands of passenger trips in multiple cities in the US. So, what’s the state of self-driving cars today?

 

This talk will explore several practical challenges in bridging the gap between the current state of the technology and viable self-driving cars of the future. We will discuss several example challenges in perception, motion prediction, planning, and control that we are solving at Uber ATG, challenges we believe must be addressed before self-driving cars become a reality.

 

About the Speaker

Brandon Basso is a Director of Autonomy at Uber Advanced Technologies Group (ATG). He leads autonomous vehicle capabilities development across the perception, motion planning, and control software engineering groups. He previously led software engineering on ATG’s autonomous trucking project. In addition to developing new vehicle capabilities, Brandon focuses on software safety and testing across the entire onboard and off-board stack. Brandon’s previous experience is in aviation and the drone industry. He was the VP of Engineering at 3D Robotics and led the development of several successful consumer and commercial drone projects. He also worked at Honeybee Robotics, where he was a member of the Mars Exploration Rover engineering team, designing and operating tooling still in use today on several Mars missions, including Spirit, Opportunity, Curiosity, and Phoenix. Brandon received his bachelor’s degree from Columbia University and PhD from the University of California, Berkeley, both in mechanical engineering. While at Berkeley, Brandon studied control theory and was a member of the Center for Collaborative Control of Unmanned Vehicles (C3UV) under Karl Hedrick. His research focused on representing the vehicle routing problem as a semi-Markov decision process and on learned solutions to routing, scheduling, and dispatch problems.

Choosing a Planner for My Self-driving Vehicle

[Info Link]

Alex Ansari

Autonomy Capabilities Lead

Uber ATG

 

Center for Autonomy

Coordinated Science Laboratory (CSL)

CSL Auditorium (B02)

Feb 20, 2019   11:00 am

 

Abstract

There are plenty of options and algorithms when it comes to motion planning. As of yet, there is no clear best approach in self-driving. This talk aims to provide a practical, high-level view of the trade-offs, challenges, and design considerations that should be accounted for in selecting a planning strategy. Following a system overview of Uber ATG’s self-driving vehicles, we walk through examples that illustrate the types of scenarios that our current and future planning algorithms need to address.

 

About the Speaker

Alex Ansari leads the Autonomy Capabilities Team at Uber ATG. With members representing all autonomy disciplines, his team focuses on developing larger, cross-functional capabilities, e.g., navigating construction sites. Prior to Uber, Alex developed his background in model-based methods for robotic motion control at Northwestern University, where he received his M.S. and Ph.D. degrees. He went on to a postdoc at Carnegie Mellon University’s Biorobotics Lab to explore model-free, ML/AI-based alternatives for decision making. He is passionate about combining these two worlds and finding ways to leverage structure for sample efficiency and improved performance in real-world AI problems.

Free as a Bird: Bird-Inspired Structures to Improve Agility in Engineered Unmanned Aerial Vehicles

[Info Link]

Professor Aimy Wissa

Mechanical Science & Engineering

University of Illinois Urbana-Champaign

 

Saturday Engineering for Everyone

1002 Grainger Auditorium, ECE Building

Feb 16, 2019   10:15 am

Sponsor: Electrical & Computer Engineering

 

Abstract

For thousands of years, bird flight has inspired and challenged our imaginations and dreams. Humans have aspired to build machines to help them fly like birds. Even today, there are still significant efforts underway focused on understanding the physics of avian flight. There is an increasing need for small aerial robots to conduct a variety of civilian and military mission scenarios. This talk starts by showing that avian-inspired flight has the potential to combine the desired capabilities of hovering, maneuverability, agility, safety, and stealth. The concept of wings as multifunctional adaptive structures will be discussed, and several flight devices found on birds’ wings will be introduced as a pathway toward revolutionizing the current design of small unmanned air vehicles.

The Annual CSL Student Conference

Robotics Demonstration Session

February 8, 2019

Make sure to register!

 

The Annual CSL Student Conference invites grad students and post-docs to exhibit their robots in an event similar to a poster session. This session provides a platform for students across the campus to gain exposure for their research, and to disseminate and exchange ideas. The aim is to stimulate interaction among researchers and, hopefully, to foster a close bond within the robotics community of our university.

This half-day session consists of multiple robot demonstrations. The content can range from showing simple functionality of a robot to demonstrating an algorithm implemented on a robot platform. Presenters will also have the opportunity to use the Flight Arena with the Vicon system, should their demonstration require a large open space.

Lunch will also be provided!

 

2019 Participants

  • Package Delivery System for Last Mile Problems, Gabriel Barsi Haberfeld, Arun Lakshmanan

  • Soft untethered origami crawling robot, Oyuna Angatkina, Sumanyu Singh

  • Robust decentralized control with UWB based P2P communication, Joao Paulo Jansch-Porto

  • Real Time Haptic Feedback for Robotic Surgery, Xiao Li, Shankar Deka

  • Taxonomy-based, Task-specific Teleoperation on Rethink Robotics Baxter, Austin Hamlett

  • Haptics-based Rehabilitation of ADL Skills, Shrey Pareek

  • Upper-limb robotic training simulator for spasticity assessment, Yinan Pei, Kevin Gim

  • B2: A Biologically Inspired Robotic bat, Usman Syed

  • A Distributed and Scalable Electromechanical Actuator for Bio-inspired Robots, Bonhyun Ku, Sunyu Wang

  • Design and Control of a Small Quadruped, Yanran Ding

  • Urban Swarm Navigation Using UAVs, Matt Peretic, Sriramya Bhamidipati, Shubh Gupta

  • Robots in Agriculture, DAS Lab

Fish Robotics: Inspiration from Fishes for the Design of Mechanical Devices

 

Professor George V. Lauder
Henry Bryant Bigelow Professor
Harvard College Professor
Professor of Organismic and Evolutionary Biology
Harvard University

 

MechSE Seminar

February 8, 2019  12:00 p.m.
2005 Mechanical Engineering Lab (Deere)

 

Abstract
There are over 35,000 species of fishes, and a key feature of this remarkable evolutionary diversity is the variety of propulsive systems used by fishes for swimming in the aquatic environment. Fishes have numerous control surfaces which act to transfer momentum to the surrounding fluid. In this presentation I will discuss the results of recent experimental kinematic and hydrodynamic studies of fish locomotor function, and the implications for the construction of robotic models of fishes. Recent high-resolution video analyses of fish fin movements during locomotion show that fins undergo much greater deformations than previously suspected and that fish fins possess a clever active surface control mechanism. Fish body and fin motion results in the formation of vortex rings of various conformations, and quantification of the vortex rings shed into the wake by freely-swimming fishes has proven useful for understanding the mechanisms of propulsion. Experimental analyses of propulsion in freely-swimming fishes have led to the development of a variety of self-propelling robotic models. Data from these devices will be presented and discussed in terms of the utility of robotic models for understanding fish locomotor dynamics and for studying the function of specialized fish surface structures like shark skin.

 

About the Speaker
George V. Lauder received the A.B. and Ph.D. degrees in biology from Harvard University in 1976 and 1979, respectively. From 1979 to 1981 he was a Junior Fellow in the Society of Fellows at Harvard. Since 1999 he has been Professor of Organismic and Evolutionary Biology at Harvard University. His research interests focus on the biomechanics and evolution of fishes, with a special focus on laboratory analyses of the kinematics, muscle function, and hydrodynamics of freely-swimming fishes. Current work involves applying analyses of fish locomotor function to the design of fish-like biorobotic test platforms.

Specification-Driven Design for Modular and Safe Robotics

 

Dr. Petter Nilsson
California Institute of Technology
Pasadena, CA

 

MechSE Seminar

Wednesday, February 6, 2019, at 4:00 p.m.
2005 Mechanical Engineering Lab (Deere)

 

Abstract
Robotic systems of tomorrow will be increasingly interconnected and will operate amongst us, which implies a two-fold engineering challenge: great complexity, and no tolerance for mistakes. In this talk I will argue that formally written specifications should be a core component of system design, and I will discuss how specifications can promote modularity and resilience in robotic systems.

The first part of the talk will be centered on safety-critical control via invariance: I will show how invariance specifications in the form of assume-guarantee contracts can be leveraged to decompose problems and thus enable modular design, and how certificates for invariance can be used to formally relate low-level dynamics to a high-level abstract roadmap for planning. The second part of the talk will cover specification-guided methods for multi-robot systems, and how problem structure can be leveraged to overcome scalability challenges. Two types of structure will be considered: permutation symmetries in counting problems with applications in multi-robot coordination, and planning in sparsely connected networks of MDPs with applications to cooperative Mars exploration. The talk will be concluded with a few words about current research topics and directions for the future.
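(To sketch the flavor of an assume-guarantee invariance contract, in generic notation rather than the talk’s own: for interconnected subsystems \(\dot{x}_i = f_i(x_i, x_{-i})\), subsystem \(i\) guarantees invariance of its set \(X_i\) under an assumption on its neighbors,

\[
x_j(t) \in X_j \ \text{for all } j \ne i \text{ and } t \ge 0
\quad \Longrightarrow \quad
\big( x_i(0) \in X_i \Rightarrow x_i(t) \in X_i \ \text{for all } t \ge 0 \big).
\]

If each subsystem discharges its own contract, the composition certifies invariance of \(X_1 \times \cdots \times X_N\) without analyzing the full coupled dynamics.)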

 

About the Speaker
Petter Nilsson received his B.S. in Engineering Physics in 2011, and his M.S. in Optimization and Systems Theory in 2013, both from KTH Royal Institute of Technology in Stockholm, Sweden, and his Ph.D. in Electrical Engineering in 2017 from the University of Michigan. In addition to his technical degrees, he holds a B.S. in Business and Economics from the Stockholm School of Economics.

He is currently a postdoctoral scholar at the California Institute of Technology where he conducts research on specification-driven control and autonomy for safety-critical cyber-physical systems, with applications in autonomous driving, space exploration, and multi-agent coordination.

Responding to Physical Human-Robot Interactions

 

Dr. Dylan Losey
Artificial Intelligence Lab
Stanford University, Stanford, CA

 

MechSE Seminar

Monday, February 4, 2019, at 4:00 p.m.
2005 Mechanical Engineering Lab (Deere)

 

Abstract
For robots to successfully transition from isolated factory floors into everyday human environments, these robots must be able to facilitate meaningful physical interactions with their human end-users. Physical interaction—i.e., pushing, pulling, and guiding—provides a natural way for humans and robots to collaborate and communicate; however, it is not yet clear how robots should respond to these physical exchanges. In this talk, I will explore ways we can leverage control, interaction, learning, and teaching to develop intelligent and autonomous responses to physical human-robot interaction. First, I will discuss how robots share control during physical interactions, and how we should select control strategies using Lyapunov stability analysis and system passivity to ensure that both the human and robot participate in collaborative tasks. Next, I will introduce robots that learn by interpreting physical interactions as corrections, instead of disturbances. Within this formulation, robots learn from physical corrections online—via inverse reinforcement learning—and respond to these human corrections by adjusting their behavior for the rest of the current task. I will conclude by describing some of the practical limitations of this ongoing work, as well as highlighting my future research directions. Throughout the talk, I apply our theoretical developments to settings where autonomous systems intersect physical interaction: robotic rehabilitation, compliant actuation, and personal robots.
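(The following is a minimal sketch of the general idea of treating a physical correction as evidence about the human’s objective, assuming a reward that is linear in hand-designed trajectory features; the features, learning rate, and trajectories are illustrative assumptions, not the speaker’s implementation.)

import numpy as np

def features(traj):
    """Toy features of an (N, 3) trajectory: mean height and path length."""
    traj = np.asarray(traj, dtype=float)
    height = traj[:, 2].mean()
    length = np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()
    return np.array([height, length])

def update_from_correction(theta, planned, corrected, lr=0.1):
    """Shift reward weights toward what the human's correction implies."""
    return theta + lr * (features(corrected) - features(planned))

theta = np.zeros(2)                                      # reward weights
planned   = [[0, 0, 0.8], [0.5, 0, 0.8], [1, 0, 0.8]]    # robot's plan
corrected = [[0, 0, 0.8], [0.5, 0, 0.4], [1, 0, 0.4]]    # human pushed it down
theta = update_from_correction(theta, planned, corrected)
print(theta)  # height weight turns negative: prefer lower paths next time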

 

About the Speaker
Dr. Dylan Losey is a postdoctoral scholar at Stanford University in the Artificial Intelligence Lab, where he studies human-robot interaction. Dylan received his Ph.D. in Mechanical Engineering from Rice University in December 2018, his M.S. in Mechanical Engineering from Rice University in May 2016, and his B.E. in Mechanical Engineering from Vanderbilt University in May 2014. Between May and August 2017 he was also a visiting scholar at the University of California, Berkeley, where he worked in the Berkeley Artificial Intelligence Research Lab. Dylan researches robotics at the intersection of human-robot interaction, machine learning, and optimal control. He won the 2017 IEEE/ASME Transactions on Mechatronics Best Paper Award and was a National Science Foundation Graduate Research Fellow.

Manipulating Interfacial Physics for Novel Multimodal and Multiphase Insect-Scale Robots

 

Dr. Kevin Chen
Microrobotics Laboratory and the Materials Discovery & Applications Group
Harvard University, Cambridge, MA

 

MechSE Seminar

Monday, January 28, 2019, at 4:00 p.m.
2005 Mechanical Engineering Laboratory

 

Abstract
Several insect species, such as diving flies and diving beetles, exhibit remarkable locomotive capabilities in aerial, aquatic, and terrestrial environments, inspiring the development of similar capabilities in robots at the centimeter scale. In this talk I will present two insect-scale robots capable of multimodal and multiphase locomotion. I will start by presenting a 175 mg flapping-wing robot that can hover in air, swim underwater, and impulsively jump out of the water surface through combustion. I will also introduce a 1.6 g quadrupedal robot capable of locomotion on land, on the surface of water, underwater, and between these environments. These results demonstrate that microrobots can achieve novel functions that are absent in larger, traditional robots, thereby showing the unique potential of microrobots in applications such as inspection and environmental exploration in cluttered spaces. I will further discuss major challenges and future directions in microrobotic research. Toward developing a swarm of robust and multifunctional insect-scale robots, I am working on creating a new class of microrobots – ones that are powered by high-bandwidth soft actuators and equipped with rigid appendages for interactions with their environments.

 

About the Speaker
Kevin Chen is a postdoctoral fellow at the Microrobotics Laboratory and the Materials Discovery and Applications Group at Harvard University. He received his PhD in Mechanical Engineering at Harvard University under the supervision of Professor Robert J. Wood. His work focuses on developing insect-scale robots capable of locomotion and transition between air, land, and water. His research interests also include developing high bandwidth and robust soft actuators for microrobot manipulation and locomotion. Kevin received his bachelor’s degree in Applied and Engineering Physics from Cornell University. He is a recipient of the best student paper award at the International Conference on Intelligent Robots and Systems (IROS) 2015, a Harvard Teaching Excellence Award, and he was recently named to the “Forbes 30 Under 30” list in the category of Science.

Autonomous Systems and Recent Progress in Adaptive Control Applications

 

Dr. Kevin A. Wise
Senior Technical Fellow

The Boeing Company
St. Louis, Missouri

 

1st Annual MechSE Distinguished Alumnus Seminar

January 15, 2019  3:00 p.m.

190 Engineering Sciences Building

 

Abstract
This presentation will discuss autonomous systems in aerospace and recent progress in the development and fielding of adaptive control systems. Autonomous systems have secured a unique and expanding role in commercial and defense applications. They are changing our world. The talk will discuss some of the history that has led us to where we are today, and some of the new systems and challenges we face going forward. Aerospace and the oil and gas industries offer some of the greatest control challenges faced by the control community: unstable, non-minimum-phase system dynamics, and models that are highly uncertain and expensive to learn or measure. This talk will also present some recent progress in developing and using robust and adaptive control methods for flight and drilling operations.
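(For readers unfamiliar with the term, a canonical model-reference adaptive control law gives the flavor of what “adaptive control” means here; this is a textbook sketch, not Boeing’s implementation. For a plant \(\dot{x} = Ax + B\big(u + \Theta^\top \phi(x)\big)\) with unknown \(\Theta\), one takes

\[
u = k_x^\top x + k_r^\top r - \hat{\Theta}^\top \phi(x),
\qquad
\dot{\hat{\Theta}} = \Gamma\, \phi(x)\, e^\top P B,
\]

where \(e = x - x_{\mathrm{ref}}\) is the error relative to a reference model, \(P\) solves that model’s Lyapunov equation, and \(\Gamma > 0\) sets the adaptation rate; Lyapunov arguments show the tracking error converges despite the uncertainty.)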

 

About the Speaker
Kevin A. Wise is a Senior Technical Fellow, Advanced Flight Controls, in the Phantom Works division of The Boeing Company; President and CEO of Innovative Control Technologies, LLC; and a Chief Advisor at Kelda Drilling Controls in Norway. He received his BS, MS, and Ph.D. in Mechanical Engineering from the University of Illinois in 1980, 1982, and 1987, respectively. Since joining Boeing in 1982, he has developed vehicle management systems, flight control systems, and control system design tools and processes for advanced manned and unmanned aircraft and weapon systems. Some recent programs include the KC-46 Tanker boom, Dominator UAS, Phantom Eye hydrogen-powered UAS, QF-16 Full Scale Aerial Target, X-45 J-UCAS, X-36, and JDAM. His research interests include intelligent autonomy and battle management, aircraft dynamics and control, robust adaptive control, optimal control, robustness theory, and intelligent drilling solutions. He has authored more than 100 technical articles and seven book chapters, has published the textbook Robust and Adaptive Control with Aerospace Applications, and teaches control theory at Washington University in St. Louis. Dr. Wise is a member of the National Academy of Engineering, an IEEE Fellow, and a Fellow of the AIAA.

DCL Seminar Series: Goldie Nejat – “Can I Be of Assistance?”: Socially Assistive Robots and the Future of Healthy Aging

 

Wednesday, November 28, 2018

3:00 p.m. to 4:00 p.m.

1232 CSL Studio


Goldie Nejat, PhD, PEng

Canada Research Chair in Robots for Society

Associate Professor and Director of the Autonomous Systems and Biomechatronics Laboratory (ASBLab)

Director of the Institute for Robotics and Mechatronics (IRM)

University of Toronto, Canada

 

Abstract:

The world is experiencing a silver tsunami: rapid population aging. As the world’s elderly population significantly increases, dementia is becoming one of the fastest growing diseases, with no cure in sight. Robots are seen as a unique strategic technology that will become an important part of society, aiding people in everyday life, in order to meet the urgent and immediate needs of an aging population. This talk will present some of my group’s recent research efforts in developing intelligent assistive robots to improve quality of life and promote independence (aging-in-place) of older adults, including those living with dementia. In particular, I will discuss our Brian, Casper, Tangy, Blueberry, and Leia socially assistive robots, which have been designed to autonomously provide cognitive and social interventions, help with activities of daily living, and lead group recreational activities in human-centered environments. These robots can serve as assistants to individuals as well as groups of users, while learning to personalize their interactions to the needs of the users. Numerous user studies conducted with older adults in care settings will also be discussed to highlight how these robots can be effectively integrated into people’s everyday lives. Lastly, I will also discuss some of our research efforts in developing learning-based semi-autonomous rescue robots for exploring unknown, cluttered, and dangerous disaster environments and finding potential victims in order to aid rescue workers. These controllers uniquely allow robots to learn and make optimal decisions regarding which tasks to perform and when during search missions, as well as to determine when human intervention is required.

 

Bio:

Goldie Nejat, PhD, P.Eng., is the Canada Research Chair in Robots for Society and the Director of the Institute for Robotics and Mechatronics (IRM) at the University of Toronto. She is an Associate Professor in the Department of Mechanical & Industrial Engineering and the Founder and Director of the Autonomous Systems and Biomechatronics Laboratory (asblab.mie.utoronto.ca). She is also an Adjunct Scientist at the Toronto Rehabilitation Institute. Prof. Nejat received both her B.A.Sc. and Ph.D. degrees from the University of Toronto.

Prof. Nejat’s research focuses on developing intelligent service robots for applications in health, elderly care, emergency response, search and rescue, security and surveillance, and manufacturing. She has been invited to speak about her research to scientists, healthcare professionals, policy-makers, governments, and the general public at many events and institutions around the world. She has served on the organizing and program committees of over thirty international conferences on robotics, automation, human-robot interaction, and medical devices. Prof. Nejat is also an Associate Editor for IEEE Robotics and Automation Letters and IEEE Transactions on Automation Science and Engineering. Her team’s work has been featured in over 90 media stories, including in Time magazine, Bloomberg, NBC News, the Telegraph, Reader’s Digest, Zoomer magazine, and the Discovery Channel. In 2013, she received the Engineers Canada Young Engineer Achievement Award and, in 2012, the Professional Engineers of Ontario Young Engineer Medal, both for her exceptional achievements in the field of robotics at a young age.