Robotics Seminar

2024 Seminar Speakers

We have an outstanding lineup of speakers for the semester and we invite the MIT community to attend the seminars and meet the speakers (3-4pm ET, followed by a reception at 4pm ET).

  • Feb 16: Russ Tedrake (MIT)
  • Feb 23: Chien-Ming Huang  (Johns Hopkins)
  • Mar 8: Mark Cutkosky (Stanford)
  • Mar 15: Nikolay Atanasov (UCSD) – Grier Room (34-401)
  • Mar 22: TBD – Grier Room (34-401)
  • Apr 5: Soon-Jo Chung (Caltech) – Grier Room (34-401)
  • Apr 8: Naira Hovakimyan (UIUC) – Grier Room (32-155) – joint seminar with LIDS!
  • Apr 12: Florian Shkurti (Toronto) – Patil/Kiva Conference Room (32-G449)
  • Apr 19: Stefanie Tellex (Brown) – Patil/Kiva Conference Room (32-G449)
  • Apr 26: Aaron Parness (Amazon Robotics) – Grier Room (34-401) 
  • May 3: Ritu Raman (MIT) – Grier Room (34-401)

2023 Seminar Speakers 


2022 Seminar Speakers 

  • Mar 11: Pulkit Agrawal (MIT)
  • Mar 18: Jeff Ichnowski (CMU)
  • Apr 1: Stephanos Nikolaidis (USC)
  • Apr 8: Marco Hutter (ETH)
  • Apr 15: Katherine Kuchenbecker (Max Planck)
  • Apr 22: Harry Asada (MIT)
  • Apr 29: Ayanna Howard (OSU)
  • May 6: Marco Pavone (Stanford)
  • Oct 14: Hao Su (UCSD)
  • Nov 4: Chuchu Fan (MIT)
  • Nov 18: Kostas Bekris (Rutgers)
  • Dec 2: Yuval Tassa (DeepMind)
  • Dec 16: Phillip Isola (MIT EECS) 

December 16, 2022 - Phillip Isola, MIT EECS, "Giving Robots Mental Imagery" (video)

Imagine you are trying to put a small toy through an M-shaped slot. How would you do it? One strategy would be to imagine, in your mind's eye, rotating the toy until it aligns with the slot, then taking the corresponding action. Psychologists call this ability "mental imagery" — seeing pictures in our head — and have argued that it supports many everyday reasoning tasks. In this talk, I will describe our work on giving robots mental imagery, using neural radiance fields (NeRFs). A NeRF allows the robot to imagine what a scene would look like from any position and angle, and under any camera optics. The robot can use this NeRF as a virtual camera, which moves about mentally to help it estimate a shape's pose, collect training data for a vision system, or find the highest affordance angle by which to grip an object. I will talk about each of these use cases and try to convey why I think NeRFs are as important for robotics as they have already proven to be for graphics.
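
To make the "virtual camera" idea above concrete, here is a minimal analysis-by-synthesis sketch: a trained NeRF-style renderer imagines views from candidate poses, and the pose whose rendering best matches an observed image is kept. The functions render_nerf and sample_candidate_poses are hypothetical stand-ins, not the speaker's code.

import numpy as np

def estimate_pose(render_nerf, sample_candidate_poses, observed_image, n_candidates=256):
    """Analysis-by-synthesis pose estimation with a NeRF used as a virtual camera."""
    best_pose, best_err = None, np.inf
    for pose in sample_candidate_poses(n_candidates):    # e.g. random SE(3) hypotheses
        rendered = render_nerf(pose)                      # "mental image" from that pose
        err = np.mean((rendered - observed_image) ** 2)   # photometric error
        if err < best_err:
            best_pose, best_err = pose, err
    return best_pose, best_err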

December 2, 2022 - Yuval Tassa, DeepMind, "Predictive Sampling: Real-time behavior synthesis with MuJoCo" (video)

"The last decade has witnessed the rise of model-free, learning-based methods in robotics. These methods have powerful advantages like representation invariance and ease of use, but are inherently slow. In contrast, model-based predictive control methods can synthesize behavior in real time, but are considered difficult to implement and often depend on elaborate optimization algorithms.
 
In this talk I will present Trajectory Explorer, a fully open-source interactive application and software framework for predictive control, based on MuJoCo physics, that lets the user easily author and solve complex robotics tasks. We currently support derivative-based iLQG and Gradient Descent, but also introduce Predictive Sampling, a simple zero-order, sampling-based algorithm that works surprisingly well and is easy to understand. The interactive simulation can be slowed down asynchronously, effectively speeding up the controller; this enables use on slow machines and helps democratize predictive control tooling. In closing, I will discuss various ways in which model-based and model-free methods can be combined, exploiting the strengths of both approaches.
 
The talk will include a live demo of multiple locomotion and manipulation tasks solved from scratch, in real-time, on a laptop."
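
As a rough illustration of the zero-order idea described above, the sketch below implements a generic predictive-sampling MPC step: keep a nominal action sequence, sample Gaussian perturbations of it, roll each candidate out in a simulator, keep the lowest-cost plan, and apply its first action. The rollout_cost hook stands in for a MuJoCo-style rollout; this is not the speaker's implementation.

import numpy as np

def predictive_sampling_step(state, nominal, rollout_cost, n_samples=32, noise=0.1, rng=None):
    """One control step: perturb the nominal plan, keep the best rollout, act, warm-start."""
    rng = np.random.default_rng() if rng is None else rng
    horizon, action_dim = nominal.shape
    candidates = [nominal]                                # always re-evaluate the old plan
    candidates += [nominal + noise * rng.standard_normal((horizon, action_dim))
                   for _ in range(n_samples)]
    costs = [rollout_cost(state, actions) for actions in candidates]
    best = candidates[int(np.argmin(costs))]
    next_nominal = np.roll(best, -1, axis=0)              # shift the plan forward in time
    next_nominal[-1] = best[-1]                           # repeat the last action
    return best[0], next_nominal                          # apply the first action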

November 18, 2022 - Kostas Bekris, Rutgers, "Towards Closing the Perception-Planning and Sim2Real Gaps in Robotics" (video)

Robotics is at the point where we can deploy complete systems across applications, such as logistics, service and field robotics. There are still critical gaps, however, that limit the adaptability, robustness and safety of robots, which lie at: (a) the interface of domains, such as perception, planning/control and learning, that must be viewed holistically in the context of complete robotic systems, and (b) the sim2real gap, i.e., the deviation between internal models of robots’ AI and the real world.

This talk will first describe efforts in tighter integration of perception and planning for vision-driven robot manipulation. We have developed high-fidelity, high-frequency tracking of rigid bodies' 6D poses, without using CAD models or cumbersome human annotations, by utilizing progress both in deep learning and pose graph optimization. These solutions, together with appropriate shared representations, tighter closed-loop operation and, critically, compliant end-effectors, are unblocking the deployment of full-stack robot manipulation systems. This talk will provide examples from collaborative efforts on robotic packing, assembly under tight tolerances, as well as constrained placement given a single demonstration that generalizes across an object category.

The second part of the talk is motivated by tensegrity robots, which combine rigid and soft elements to achieve safety and adaptability. They also complicate modeling and control, however, given their high dimensionality and complex dynamics. This sim2real gap of analytical models made us look into reinforcement learning (RL) for controlling robot tensegrities, which allowed the development of new skills. RL applicability is challenging in this domain, however, due to data requirements. Training RL in simulation is promising but is blocked again by the sim2real gap. For this reason, we are developing differentiable engines for tensegrity robots that reason about first principles so as to be trained with few example ground-truth trajectories from the real robot. They provide accurate-enough simulations to train a controller that is directly transferable back to the real system. We report our first success in such a real2sim2real transfer for a 3-bar tensegrity robot.

The talk will conclude with a brief discussion on how closing these gaps empowers the next step of developing robots that are socially cognizant and can be safely integrated into our society.

November 4, 2022 - Chuchu Fan, MIT, "Neural certificates in large-scale autonomy design" (video)

Learning-enabled control systems have demonstrated impressive empirical performance on challenging control problems in robotics, but this performance comes at the cost of reduced transparency and lack of guarantees on the safety or stability of the learned controllers. In recent years, new techniques have emerged to provide these guarantees by learning certificates alongside control policies — these certificates provide concise, data-driven proofs that guarantee the safety and stability of the learned control system. These methods allow the user to verify the safety of a learned controller and provide supervision during training, allowing safety and stability requirements to influence the training process. This talk presents an overview of this rapidly developing field of certificate learning. We hope that this talk will serve as an accessible introduction to the theory and practice of certificate learning, both to those who wish to apply these tools to practical robotics problems and to those who wish to dive more deeply into the theory of learning for control.
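
As a rough sketch of what "learning certificates alongside control policies" can look like in code, the snippet below penalizes sampled violations of Lyapunov-style conditions (positivity away from the goal, decrease along closed-loop steps) so that a certificate network and a policy can be trained jointly. The dynamics hook f, the small networks, and the margins are illustrative assumptions, not the speaker's specific formulation.

import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(), nn.Linear(64, out_dim))

def certificate_loss(V, policy, f, states, goal, margin=0.01):
    """Penalize sampled violations of Lyapunov-style certificate conditions."""
    v = V(states)
    v_next = V(f(states, policy(states)))                # certificate value after one closed-loop step
    positivity = torch.relu(margin - v).mean()           # V(x) > 0 away from the goal
    decrease = torch.relu(v_next - v + margin).mean()    # V decreases along trajectories
    anchor = V(goal.unsqueeze(0)).abs().mean()           # V(goal) ~ 0
    return positivity + decrease + anchor

# Example setup (dimensions are arbitrary): V = mlp(4, 1); policy = mlp(4, 2); then
# minimize certificate_loss (plus any task loss) over both networks with an optimizer.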

October 14, 2022 - Hao Su, UCSD Computer Science, "Generic and Generalizable Manipulation Skill Benchmarking and Learning"  (video)

"Manipulation skills, which can be composed to tackle long-horizon and complex daily chores, are one of the cornerstones of embodied AI. To build robots with general task-solving abilities as humans, as a pre-requisite, robots must possess a diverse set of object manipulation skills (generic), and these skills must apply to objects and configurations that are even unseen (generalizable).
 
Due to the broad scope of the problem, object manipulation requires collaborative research of the community. To foster reproducible, low-cost, and fast-cycle research, my group has been pushing the development of open-source task suites as a community service. In particular, I will introduce the latest ManiSkill2 system, which includes 20 tasks (rigid/soft-body, mobile/stationary, single-arm/dual-arm), 2000+ objects, 4 million+ frames of demos, and can support interaction sample collection at 2000+ FPS/GPU with visual input.
 
The second part of the talk introduces algorithmic efforts from my group. I will first show a 3D RL technique, Frame Mining, that was invented to improve the robustness and sample efficiency of RL with 3D point cloud input when solving the ManiSkill tasks. Then, I will briefly introduce a new continuous RL framework that elegantly connects the classical policy gradient and trajectory return optimization by learning a reparameterized policy with the variational method."

May 6, 2022 - Marco Pavone, Stanford, "Towards Safe, Data-driven Autonomy" (video)

AI-powered autonomous vehicles that can learn, reason, and interact with people are no longer science fiction. Self-driving cars, unmanned aerial vehicles, and autonomous spacecraft, among others, are continually increasing in capability and seeing incremental deployment in more and more domains. However, fundamental research questions still need to be addressed in order to achieve full and widespread vehicle autonomy. In this talk, I will discuss our work on addressing key open problems in the field of vehicle autonomy, particularly in pursuit of safe, data-driven autonomy stacks. Specifically, I will discuss (1) robust human prediction models for both simulation and real-time decision making, (2) AI safety frameworks for autonomous systems, and (3) novel, highly integrated autonomy architectures that are amenable to end-to-end training while retaining a modular, interpretable structure. The discussion will be grounded in autonomous driving and aerospace robotics applications.

April 29, 2022 - Ayanna Howard, OSU, "Socially Interactive Robotics for Equitable Outcomes" (video)

It is estimated that 15% of children aged 3 through 17 born in the U.S. have one or more developmental disabilities. For many of these children, proper early intervention is provided as a mechanism to support the child's academic, developmental, and functional goals from birth and beyond. With the recent advances in robotics and artificial intelligence (AI), early intervention protocols using robots are now ideally positioned to make an impact in this domain. There are numerous challenges, though, that still must be addressed to enable successful interaction between patients, clinicians, and robots: developing intelligence methods to enable personalized adaptation to the needs of the child; ensuring equitable outcomes and mitigation of possible healthcare inequities that derive from the use of AI learning methods; and ensuring that the system can provide engaging and emotionally appropriate feedback to the user. In this presentation, I will discuss the role of robotics and AI for pediatric therapy and highlight our methods and preclinical studies that bring us closer to this goal. This talk provides a look at how robots and AI can change the texture of our day-to-day experiences through examples of research focused on robots interacting with humans, with an emphasis on healthcare robotics that can enable a healthier, less stressful quality of life, now and in the future.

April 22, 2022 - Harry Asada, MIT, "Koopman Lifting Linearization for Global, Unified Representation of Hybrid Robot Systems: An Emerging Methodology for Legged Locomotion and Manipulation" (video)

The complexity of robot dynamics often pertains to the hybrid nature of dynamics, where governing dynamic equations are switched depending on the state of the system. Biped locomotion and legged robots, for example, are hybrid, due to the leg dynamics that are switched between open-loop and closed-loop kinematic chains depending on which leg(s) are in contact with the ground. Manipulation and assembly robots experience contact-noncontact discrete transitions, causing discontinuity in state transition and the degrees of freedom. The process can be simulated numerically, but its global dynamic behaviors are difficult to understand. In this talk, an emerging modeling methodology based on Koopman Operator theory will be presented as an alternative to traditional hybrid dynamics modeling. This theory provides us with a global linear model of complex nonlinear systems, and thereby allows us to apply the wealth of powerful linear control methods, including Model Predictive Control (MPC), which can be treated as a convex optimization problem. Furthermore, the method allows us to obtain a global, unified representation of hybrid dynamical systems. Multiple sets of nonlinear dynamic equations can be integrated into a single unified representation and guard functions and switching conditions can be woven into the single linear equation in a lifted space. First, the Koopman operator theory is briefly introduced, and it will be shown that the theory is applicable to hybrid systems with discontinuous state transitions. Second, the method is applied to hybrid robot systems in both locomotion and manipulation. The former includes passive dynamic walkers and human gait analyses. The latter includes multi-cable juggling and ring-to-shaft insertion. Finally, a critical open question in applying the Koopman operator to control systems will be discussed. When lifting a nonlinear system that is driven by exogenous inputs, causality may be violated if inputs are involved in construction of observables. Physical modeling theory based on Bond Graph will be used to analyze how inputs can reach observables through causal paths and how anti-causal state equations can be prevented from being formed. In summary, this talk will demystify the powerful Koopman theory and address how breakthroughs can be made in those challenging robotics problems from the dynamic modeling viewpoint.
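
A data-driven flavor of the lifting linearization described above can be sketched in a few lines in the spirit of extended dynamic mode decomposition (EDMD): lift states through a dictionary of observables and fit a linear model in the lifted space by least squares, after which linear tools such as MPC apply. The dictionary below is an arbitrary illustrative choice, not the construction used in the talk.

import numpy as np

def lift(x):
    """Example dictionary of observables: the state, its squares, and a constant."""
    x = np.atleast_2d(x)
    return np.hstack([x, x ** 2, np.ones((x.shape[0], 1))])

def fit_lifted_linear_model(X, U, X_next):
    """Least-squares fit of  lift(x') ~= A lift(x) + B u  from sampled transitions."""
    Z, Z_next = lift(X), lift(X_next)
    W = np.hstack([Z, U])                                 # regressors [z, u]
    AB, *_ = np.linalg.lstsq(W, Z_next, rcond=None)
    A, B = AB[: Z.shape[1]].T, AB[Z.shape[1]:].T
    return A, B                                           # usable with linear MPC in the lifted space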

April 15, 2022 - Katherine Kuchenbecker, Max Planck Institute for Intelligent Systems, "Tactile Sensing for Robots with Haptic Intelligence" (video)

The sense of touch plays a crucial role in the sensorimotor systems of humans and animals. In contrast, today's robotic systems rarely have any tactile sensing capabilities because artificial skin tends to be complex, bulky, rigid, delicate, unreliable, and/or expensive. To safely complete useful tasks in everyday human environments, robots should be able to feel contacts that occur across all of their body surfaces, not just at their fingertips. Furthermore, tactile sensors need to be soft to cushion contact and transmit loads, and their spatial and temporal resolutions should match the requirements of the task. We are thus working to create tactile sensors that provide useful contact information across different robot body parts. However, good tactile sensing is not enough: robots also need good social skills to work effectively with and around humans. I will elucidate these ideas by showcasing four systems we have created and evaluated in recent years: Insight, ERTac, HERA, and HuggieBot.

April 8, 2022 - Marco Hutter, ETH, "Robots in the Wild" (video)

In recent years, we have seen tremendous progress in the field of legged robotics and the application of quadrupedal systems in real-world scenarios. Besides the massive improvement of hardware systems into rugged and certified products by a number of companies, recent developments in perception, navigation planning, and reinforcement learning for locomotion control have unleashed a new level of robot mobility and autonomy to operate in challenging terrain. In this presentation, I will talk about our work on control and autonomy for legged robots and other mobile machines. I will give insights into the underlying methodologies, present some of the most interesting findings, and talk about real-world deployments in the wild.

April 1, 2022 - Stefanos Nikolaidis, USC, "Towards Robust Human-Robot Interaction: A Quality Diversity Approach" (video)

The growth of scale and complexity of interactions between humans and robots highlights the need for new computational methods to automatically evaluate novel algorithms and applications. Exploring the diverse scenarios of interaction between humans and robots in simulation can improve understanding of complex human-robot interaction systems and avoid potentially costly failures in real-world settings.

In this talk, I propose formulating the problem of automatic scenario generation in human-robot interaction as a quality diversity problem, where the goal is not to find a single global optimum, but a diverse range of failure scenarios that explore both environments and human actions. I show how standard quality diversity algorithms can discover surprising and unexpected failure cases in the shared autonomy domain. I then discuss the development of a new class of quality diversity algorithms that significantly improve the search of the scenario space and the integration of these algorithms with generative models, which enables the generation of complex and realistic scenarios. Finally, I discuss applications in procedural content generation and human preference learning.
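
For readers unfamiliar with quality diversity, the sketch below shows a MAP-Elites-style loop of the general kind referenced above: an archive of scenarios is indexed by behavior descriptors, elites are mutated, and each cell keeps the scenario with the highest failure score. The evaluate hook, descriptor ranges, and mutation scheme are illustrative assumptions rather than the speaker's algorithms.

import numpy as np

def map_elites_scenarios(evaluate, dim, bins=10, iters=2000, sigma=0.1, rng=None):
    """Fill an archive of failure scenarios, one elite per behavior-descriptor cell."""
    rng = np.random.default_rng(0) if rng is None else rng
    archive = {}                                          # cell -> (failure_score, scenario)
    for _ in range(iters):
        if archive and rng.random() < 0.9:                # mutate an existing elite
            parent = archive[list(archive)[rng.integers(len(archive))]][1]
            scenario = np.clip(parent + sigma * rng.standard_normal(dim), 0.0, 1.0)
        else:                                             # or draw a random scenario
            scenario = rng.random(dim)
        score, descriptor = evaluate(scenario)            # failure severity, behavior in [0, 1]^k
        cell = tuple(np.minimum((np.asarray(descriptor) * bins).astype(int), bins - 1))
        if cell not in archive or score > archive[cell][0]:
            archive[cell] = (score, scenario)
    return archive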

March 18, 2022 - Jeff Ichnowski, UC Berkeley, "Dynamic Robot Manipulation: Learned Optimization, Deformable Materials, and the Cloud" (video)

Robots in unstructured environments manipulate objects slowly and intermittently, relying on bursts of computation for planning. This is in stark contrast to humans who routinely use fast dynamic motions to manipulate and move objects or vault power cords over chairs when vacuuming. Dynamic motions can speed task completion, manipulate objects out of reach, and increase reliability, but they require: (1) integrating grasp planning, motion planning, and time-parameterization, (2) lifting quasi-static assumptions, and (3) intermittent access to powerful computing. I will describe how integrating grasp analysis into motion planning can speed up motions, how integrating deep-learning can speed up computation, and how integrating inertial and learned constraints can lift quasi-static assumptions to allow high-speed manipulation. I will also describe how cloud computing can provide on-demand access to immense computing to speed up motion planning and a new cloud-robotics framework that makes it easy.

March 11, 2022 - Pulkit Agrawal, MIT, "Coming of Age of Robot Learning" (video)

Why are today's robots slow and not agile? Why is dexterous manipulation difficult? Why can't we generalize from a few task demonstrations? These questions have intrigued robotics researchers for a long time. In this talk, I will explore answers and solutions to these questions via the following case studies:
  1. a quadruped robot that is substantially more agile than its counterparts (it runs, it spins) on challenging natural terrains.
  2. a simulated dexterous manipulation system capable of re-orienting novel objects.
  3. framework for learning task-sensitive perceptual representations for planning and out-of-distribution generalization.
We built these systems using model-free learning and deployed them in the real world. While a lot of recent progress in robotics is driven by perception, in our work, learned controllers have enabled us to address problems that were previously thought to be hard. I will discuss our findings, the insights we gained, and the road ahead.
 

2021 Speakers

December 17, 2021 - Greg Chirikjian, NUS, "Robot Imagination: Affordance-Based Reasoning about Unknown Objects"  (video)

Today's robots are very brittle in their intelligence. This follows from a legacy of industrial robotics where robots pick and place known parts repetitively. For humanoid robots to function as servants in the home and in hospitals they will need to demonstrate higher intelligence, and must be able to function in ways that go beyond the stiff prescribed programming of their industrial counterparts. A new approach to service robotics is discussed here. The affordances of common objects such as chairs, cups, etc., are defined in advance. When a new object is encountered, it is scanned and a virtual version is put into a simulation wherein the robot "imagines" how the object can be used. In this way, robots can reason about objects that they have not encountered before, and on which they have had no training. Videos of physical demonstrations will illustrate this paradigm, which the presenter has developed with his students Hongtao Wu, Meng Xin, Sipu Ruan, and others.

December 10, 2021 - David Held, CMU, "Perceptual Robot Learning" (video)

Robots today are typically confined to interact with rigid, opaque objects with known object models. However, the objects in our daily lives are often non-rigid, can be transparent or reflective, and are diverse in shape and appearance. One reason for the limitations of current methods is that computer vision and robot planning are often considered separate fields. I argue that, to enhance the capabilities of robots, we should design state representations that consider both the perception and planning algorithms needed for the robotics task. I will show how we can develop novel perception and planning algorithms to assist with the tasks of manipulating cloth, manipulating novel objects, and manipulating transparent and reflective objects. By thinking about the downstream task while jointly developing perception and planning algorithms, we can significantly improve our progress on difficult robot tasks.

December 3, 2021 - Frank Dellaert, Georgia Tech, "Factor Graphs for Perception and Action" (video)

Factor graphs have been very successful in providing a lingua franca in which to phrase robotics perception and navigation problems. In this talk I will revisit some of those successes, also discussed in depth in [a recent review article]. However, I will focus on our more recent work in the talk, centered on using factor graphs for *action*. In particular, I will discuss our efforts in motion planning, trajectory optimization, optimal control, and model-predictive control, highlighting in each how factor graphs provide an intuitive and natural framework in which to think about these problems and generate state-of-the-art solutions.
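
To illustrate the "factor graphs for action" framing in the simplest possible setting, the toy below poses a 1D trajectory problem with a prior factor, a goal factor, and smoothness factors between neighboring states, and solves the resulting sparse least-squares problem; real systems (for example GTSAM-based ones) solve the nonlinear analogue with the same structure. All weights and factors here are illustrative.

import numpy as np

def solve_linear_factor_graph(T=10, start=0.0, goal=1.0, w_prior=100.0, w_goal=100.0, w_smooth=1.0):
    """Stack weighted factor residuals over x_0..x_T and solve the least-squares problem."""
    n = T + 1
    unit = np.eye(n)
    rows, rhs = [], []
    rows.append(w_prior * unit[0]); rhs.append(w_prior * start)          # prior factor on x_0
    rows.append(w_goal * unit[T]); rhs.append(w_goal * goal)             # goal factor on x_T
    for i in range(T):                                                   # smoothness factors
        rows.append(w_smooth * (unit[i + 1] - unit[i])); rhs.append(0.0)
    A, b = np.vstack(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x                                              # approximately a straight line from start to goal

print(solve_linear_factor_graph())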

November 19, 2021 - Adriana Schulz, UW, "Robotics for the Next Manufacturing Revolution" (video)

3D printers are radically transforming the aerospace and automotive industries. Whole-garment knitting machines allow the automated production of complex apparel and shoes. Manufacturing electronics on flexible substrates enables a new range of integrated products for consumer electronics and medical diagnostics. These advances demonstrate the potential for a new economy of on-demand production of objects of unprecedented complexity and functionality.

In my talk, I argue that the next manufacturing revolution will impact Robotics in fundamental ways and that there are exciting open problems and challenges in this emerging research field. I will discuss how novel manufacturing technology will transform what kinds of robots we design and build and showcase opportunities for novel developments in Robotics to advance this revolution in manufacturing. I will conclude my talk by describing key insights from the field of computational design and fabrication, which can be used to address some of these opportunities.

November 5, 2021 - Katerina Fragkiadaki, CMU, "Modular 3D neural scene representations for visuomotor control and language grounding" (video)

Current state-of-the-art perception models localize rare object categories in images, yet often miss basic facts that a two-year-old has mastered: that objects have 3D extent, they persist over time despite changes in the camera view, they do not 3D intersect, and others. We will discuss models that learn to map 2D and 2.5D images and videos into amodal completed 3D feature maps of the scene and the objects in it by predicting views. We will show the proposed models learn object permanence, have objects emerge in 3D without human annotations, can ground language in 3D visual simulations, and learn intuitive physics and controllers that generalize across scene arrangements and camera configurations. In this way, the proposed world-centric scene representations overcome many limitations of image-centric representations for video understanding, model learning and language grounding.

October 29, 2021 - Katie Driggs-Campbell, UIUC, "Fantastic Failures and Where to Find Them: Considering Safety as a Function of Structure"  (video)

Autonomous systems and robots are becoming prevalent in our everyday lives and changing the foundations of our way of life (e.g., self-driving cars, agricultural robots, collaborative manufacturing). However, the desirable impacts of human-robot systems are only achievable if the underlying algorithms are robust to real-world conditions, can handle unexpected human behaviors, and are effective in (near) failure modes. This is often challenging in practice, as the scenarios in which general robots fail are often difficult to identify and characterize. In this talk, we’ll explore the notion of structure across high-impact application domains and consider how such structure impacts the system requirements and modes of human interaction. We will specifically consider how different contexts and tasks lend themselves to different mechanisms for safety assessment. Our approaches vary by context, ranging from highly structured, well-modeled systems where we can explicitly analyze time-of-first-failure; to black-box validation where we rely on efficient failure search in simulation; and to highly unstructured and uncertain settings where we must rely on anomaly detection to identify failures. We'll showcase our failures (and perhaps a few successes) on autonomous vehicles, crowd navigation, and agricultural robots in real-world settings.

 

October 22, 2021 - Dmitry Berenson, University of Michigan, "Learning Where to Trust Unreliable Dynamics Models for Motion Planning and Manipulation" (video)

The world outside our labs seldom conforms to the assumptions of our models. This is especially true for dynamics models used in control and motion planning for complex high-DOF systems like deformable objects. We must develop better models, but we must also accept that, no matter how powerful our simulators or how big our datasets, our models will sometimes be wrong. This talk will present our recent work on using unreliable dynamics models for motion planning and manipulation. Given a dynamics model, our methods learn where that model can be trusted given either batch data or online experience. These approaches allow imperfect dynamics models to be useful for a wide range of tasks in novel scenarios, while requiring much less data than baseline methods. This data-efficiency is a key requirement for scalable and flexible motion planning and manipulation capabilities.
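
One simple way to picture "learning where a model can be trusted" is sketched below: label logged (state, action) pairs by whether the model's one-step prediction error fell under a tolerance, fit a classifier on those labels, and let a planner reject transitions the classifier deems unreliable. The model hook, tolerance, and classifier choice are illustrative assumptions, not the speaker's method.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_trust_classifier(model, states, actions, next_states, tol=0.05):
    """Label transitions by model accuracy and learn where the model is reliable."""
    predicted = np.array([model(s, a) for s, a in zip(states, actions)])
    error = np.linalg.norm(predicted - next_states, axis=1)
    reliable = (error < tol).astype(int)                  # 1 = the model was accurate here
    return LogisticRegression().fit(np.hstack([states, actions]), reliable)

def trusted(classifier, state, action, p_min=0.8):
    """Planner-side check: only expand transitions the classifier considers reliable."""
    features = np.concatenate([state, action])[None, :]
    return classifier.predict_proba(features)[0, 1] >= p_min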

October 8, 2021 - Igor Mordatch, Google Brain, "Reinforcement Learning via Sequence and Energy-Based Modeling" (video)

Can standard sequence modeling frameworks train effective policies for reinforcement learning (RL)? Doing so would allow drawing upon the simplicity and scalability of the Transformer architecture, and associated advances and infrastructure investments in language modeling such as GPT-x and BERT. I will present our work investigating this by casting the problem of RL as optimality-conditioned sequence modeling. Despite the simplicity, such an approach is surprisingly competitive with current model-free offline RL baselines. However, robustness of such an approach remains a challenge in robotics applications. In the second part of the talk, I will discuss the ways in which implicit, energy-based models can address it - particularly with respect to approximating complex, potentially discontinuous and multi-valued functions. Robots with such implicit policies can learn complex and remarkably subtle behaviors on contact-rich tasks from human demonstrations, including tasks with high combinatorial complexity and tasks requiring 1mm precision.
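
The second half of the abstract refers to implicit, energy-based policies; a minimal sketch of how such a policy acts is shown below: a learned energy scores state-action pairs, and the action is chosen by approximately minimizing the energy over sampled candidates rather than by a direct forward pass. The network and the derivative-free sampler here are illustrative, not the speaker's exact method.

import torch
import torch.nn as nn

class EnergyNet(nn.Module):
    """Scores a (state, action) pair; lower energy means a better action."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)

def implicit_policy_act(energy, state, action_low, action_high, n_samples=1024):
    """Derivative-free inference: return the sampled action with the lowest energy."""
    candidates = action_low + (action_high - action_low) * torch.rand(n_samples, action_low.shape[0])
    states = state.unsqueeze(0).expand(n_samples, -1)
    with torch.no_grad():
        return candidates[torch.argmin(energy(states, candidates))]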

September 24, 2021 - Michael Posa, UPenn, "Contact-rich robotics: learning, impact-invariant control, and tactile feedback" (video)

Whether operating in a manufacturing plant or assisting within the home, many robotic tasks require safe and controlled interaction with a complex and changing world. However, state-of-the-art approaches to both learning and control are most effective when this interaction either occurs in highly structured settings or at slow speeds unsuitable for real-world deployment. In this talk, I will focus broadly on our most recent efforts to model and control complex, multi-contact motions. Even given a known model, current approaches to control typically only function if the contact mode can be determined or planned a priori. Our recent work has focused on real-time feedback policies using tactile sensing and ADMM-style algorithms to adaptively react to making and breaking contact or stick-slip transitions. For dynamic impacts, like robotic jumping, tactile sensing may not be practical; instead, I will show how impact-invariant strategies can both be robust to uncertainty during collisions while preserving control authority. In the second half of the talk, I will discuss how such models can be learned from data. While it might be appealing to jump first to standard tools from deep learning, the inductive biases inherent in such methods fundamentally clash with the non-differentiable physics of contact-rich robotics. I will discuss these challenges using both intuitive examples and empirical results. Finally, I will show how carefully reasoning about the role of discontinuity, and integrating implicit, non-smooth structures into the learning framework, can dramatically improve learning performance across an array of metrics. This approach, ContactNets, leverages bilevel optimization to successfully identify the dynamics of a six-sided cube bouncing, sliding, and rolling across a surface from only a handful of sample trajectories.

September 17, 2021 - Tomas Lozano-Perez, MIT. "Generalization in Planning and Learning for Robotic Manipulation" (video)

An enduring goal of AI and robotics has been to build a robot capable of robustly performing a wide variety of tasks in a wide variety of environments; not by sequentially being programmed (or taught) to perform one task in one environment at a time, but rather by intelligently choosing appropriate actions for whatever task and environment it is facing. This goal remains a challenge. In this talk I’ll describe recent work in our lab aimed at the goal of general-purpose robot manipulation by integrating task-and-motion planning with various forms of model learning. In particular, I’ll describe approaches to manipulating objects without prior shape models, to acquiring composable sensorimotor skills, and to exploiting past experience for more efficient planning.


To safeguard against the spread of the 2019 novel coronavirus (COVID-19), the 2020 spring semester MIT Robotics Seminar Series was replaced with the "Robotics Today" online seminar series.

Fall 2019 Campus-wide Robotics Seminar (sponsored by Amazon Robotics, Boston Dynamics, Berkshire Grey, Mitsubishi Electric Research Laboratories, and the Toyota Research Institute) (14:00 - 15:00 in 34-401)

December 13, Christian Rutz, University of St Andrews/Harvard 15 years of studying tool-using New Caledonian crows: insights into human technological evolution, new electronic gadgets for observing wild birds, and some ideas for how to improve robots (video)

New Caledonian crows have attracted intense academic and media interest with their unusually sophisticated tool behaviour. They use at least three distinct tool types for extractive foraging, elaborately craft some of their tools from raw plant materials, and appear to refine their tool designs across generations, leading to ever-more complex technologies. In this talk, I will take stock of my group’s research on the species over the past 15 years, pursuing three complementary goals. First, I will illustrate how these remarkable birds provide an invaluable non-human perspective on the drivers and processes of technological evolution. Second, I will showcase how my team responded to the challenges of studying free-ranging crows in their forested habitats, by developing two innovative tracking technologies – crow-mounted miniature video cameras (to obtain a crow’s-eye view of their tool behaviour) and proximity loggers (to chart crows’ highly-dynamic social networks). Finally, I will explain how insights from tool-using crows may help improve the dexterity of human-made robots, and share some other blue-skies ideas I am currently exploring as a Radcliffe Fellow at Harvard University.

 

Bio: Christian Rutz is Professor of Biology at the University of St Andrews, Scotland, where he leads a research group studying animal tool behaviour. He combines observational, experimental and theoretical approaches, to address a major scientific puzzle: Why do so few animal species use tools, and how have humans become so technology-savvy? Rutz probes the evolutionary origins of tool behaviour with an innovative research strategy. Rather than studying our primate cousins, he investigates tropical crows that have the curious habit of using foraging tools. His principal study species, the renowned New Caledonian crow, fashions complex tool designs from a variety of plant materials, and may even refine its technology over time. Rutz recently discovered that the critically-endangered Hawaiian crow is also a skilled tool user, opening up exciting opportunities for comparative research. Rutz has pioneered cutting-edge wildlife tracking technologies, and currently serves as Founding President of the International Bio-Logging Society. He obtained his doctorate as a Rhodes Scholar from the University of Oxford, was subsequently awarded a prestigious David Phillips Research Fellowship, and held visiting appointments at the Universities of Oxford, Tokyo and New South Wales. His research is regularly published in leading interdisciplinary journals, including five papers to date in Nature and Science, has attracted a string of academic prizes and awards, and was showcased at major public science exhibitions. Rutz is currently a Fellow at the Radcliffe Institute for Advanced Study at Harvard University where he pursues a project examining the basic biological processes that allow rudimentary technologies to arise, advance and diversify.

 

December 6, Ken Goldberg, University of California, Berkeley The New Wave in Robot Grasping (video)

Despite 50 years of research, robots remain remarkably clumsy, limiting their reliability for warehouse order fulfillment, robot-assisted surgery, and home decluttering. The First Wave of grasping research is purely analytical, applying variations of screw theory to exact knowledge of pose, shape, and contact mechanics. The Second Wave is purely empirical: end-to-end hyperparametric function approximation (aka Deep Learning) based on human demonstrations or time-consuming self-exploration. A "New Wave" of research considers hybrid methods that combine analytic models with stochastic sampling and Deep Learning models. I'll present this history with new results from our lab on grasping diverse and previously-unknown objects.

 

Bio: Ken Goldberg is an artist, inventor, and roboticist. He is William S. Floyd Jr Distinguished Chair in Engineering at UC Berkeley and Chief Scientist at Ambidextrous Robotics. Ken is on the Editorial Board of the journal Science Robotics, served as Chair of the Industrial Engineering and Operations Research Department, and co-founded the IEEE Transactions on Automation Science and Engineering. Short documentary films he co-wrote were selected for Sundance and one was nominated for an Emmy Award. Ken and his students have published 300 peer-reviewed papers, 9 US patents, and created award-winning artworks featured in 70 exhibits worldwide.

 

Ken developed the first provably complete algorithm for part feeding and the first robot on the Internet. He was awarded the NSF PECASE (Presidential Faculty Fellowship) from President Bill Clinton in 1995, elected IEEE Fellow in 2005 and selected by the IEEE Robotics and Automation Society for the George Saridis Leadership Award in 2016. Ken founded UC Berkeley's Art, Technology, and Culture (ATC) public lecture series, serves on the Advisory Board of the RoboGlobal ETF, and has presented 500 invited lectures worldwide. He lives in the Bay Area and is madly in love with his wife, filmmaker and Webby Awards founder Tiffany Shlain, and their two daughters. (goldberg.berkeley.edu, @Ken_Goldberg, http://goldberg.berkeley.edu )

 

November 22, Abhinav Gupta, Carnegie Mellon University Towards Self-supervised Curious Robots (video)

In the last decade, we have made significant advances in the field of artificial intelligence thanks to supervised learning. But this passive supervision of our models has now become our biggest bottleneck. In this talk, I will discuss our efforts towards scaling up and empowering visual and robotic learning. First, I will show how the amount of labeled data is a crucial factor in learning. I will then describe how we can overcome the passive supervision bottleneck by self-supervised learning. Next, I will discuss how embodiment is crucial for learning -- our agents live in the physical world and need the ability to interact in the physical world. Towards this goal, I will present our efforts in large-scale learning of embodied agents in robotics. Finally, I will discuss how we can move from passive supervision to active exploration -- the ability of agents to create their own training data.

 

Bio: Abhinav Gupta is an Associate Professor at the Robotics Institute, Carnegie Mellon University and Research Manager at Facebook AI Research (FAIR). Abhinav's research focuses on scaling up learning by building self-supervised, lifelong and interactive learning systems. Specifically, he is interested in how self-supervised systems can effectively use data to learn visual representation, common sense and representation for actions in robots. Abhinav is a recipient of several awards including ONR Young Investigator Award, PAMI Young Researcher Award, Sloan Research Fellowship, Okawa Foundation Grant, Bosch Young Faculty Fellowship, YPO Fellowship, IJCAI Early Career Spotlight, ICRA Best Student Paper award, and the ECCV Best Paper Runner-up Award. His research has also been featured in Newsweek, BBC, Wall Street Journal, Wired and Slashdot.

 

November 1, Ben Recht, University of California, Berkeley Trying to Make Sense of Control from Pixels (video)

A prevalent motivation for merging machine learning and control is enabling decision systems to incorporate feedback from sensors like cameras, microphones, and other high-dimensional sensing modalities. This talk will highlight some of the pressing research challenges impeding such a merger. Grounding the discussion in control of autonomous vehicles from vision alone, I will present a possible approach to designing robust controllers when the sensing modality is learned from rich perceptual data. This proposal will combine first steps towards quantifying uncertainty in perception systems, designing robust controllers with such uncertainty in mind, and guaranteeing performance of these designs. I will close by attempting to illustrate the usefulness of these initial investigations in simulated and small-scale implementations of autonomous cars.

 

Bio: Benjamin Recht is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. Ben's research group studies how to make machine learning systems more robust to interactions with a dynamic and uncertain world. Ben is the recipient of a Presidential Early Career Award for Scientists and Engineers, an Alfred P. Sloan Research Fellowship, the 2012 SIAM/MOS Lagrange Prize in Continuous Optimization, the 2014 Jamon Prize, the 2015 William O. Baker Award for Initiatives in Research, and the 2017 NIPS Test of Time Award.

 

October 25, Ani Majumdar, Princeton University Safety and Generalization Guarantees for Learning-Based Control of Robots (video)

Imagine an unmanned aerial vehicle that successfully navigates a thousand different obstacle environments or a robotic manipulator that successfully grasps a million objects in our dataset. How likely are these systems to succeed on a novel (i.e., previously unseen) environment or object? How can we learn control policies for robotic systems that provably generalize well to environments that our robot has not previously encountered? Unfortunately, current state-of-the-art approaches either do not generally provide such guarantees or do so only under very restrictive assumptions. This is a particularly pressing challenge for robotic systems with rich sensory inputs (e.g., vision) that employ neural network-based control policies.

 

In this talk, I will present approaches for learning control policies for robotic systems that provably generalize well with high probability to novel environments. The key technical idea behind our approach is to leverage tools from generalization theory (e.g., PAC-Bayes theory) and the theory of information bottlenecks. We apply our techniques on examples including navigation and grasping in order to demonstrate the potential to provide strong generalization guarantees on robotic systems with complicated (e.g., nonlinear) dynamics, rich sensory inputs (e.g., RGB-D), and neural network-based control policies.
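
For context, one textbook form of a PAC-Bayes bound (not necessarily the exact bound used in this work) is shown below; here C_D(pi) is the expected cost of policy pi on environments drawn from distribution D, C_S(pi) is its empirical cost on the N training environments S, P is a data-independent prior over policies, Q is the learned posterior, and delta is the allowed failure probability.

\[
\mathbb{E}_{\pi \sim Q}\left[ C_D(\pi) \right]
\;\le\;
\mathbb{E}_{\pi \sim Q}\left[ \hat{C}_S(\pi) \right]
+ \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{N}}{\delta}}{2N}}
\qquad \text{with probability at least } 1-\delta .
\]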

 

Bio: Anirudha Majumdar is an Assistant Professor in the Mechanical and Aerospace Engineering (MAE) department at Princeton University. He also holds a position as a Visiting Researcher at the newly-established Google AI lab at Princeton. Majumdar received a Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2016, and a B.S.E. in Mechanical Engineering and Mathematics from the University of Pennsylvania in 2011. Subsequently, he was a postdoctoral scholar at Stanford University from 2016 to 2017 at the Autonomous Systems Lab in the Aeronautics and Astronautics department. He is a recipient of the Paper of the Year Award from the International Journal of Robotics Research (IJRR), the Google Faculty Research Award, the Amazon Research Award, and the Best Conference Paper Award at the International Conference on Robotics and Automation (ICRA).

 

October 18, Karen Liu, Stanford University Simulating Realistic Human Motion for Robotics (video)

Creating realistic virtual humans has traditionally been considered a research problem in Computer Animation primarily for entertainment applications. With the recent breakthrough in collaborative robots and deep reinforcement learning, accurately modeling human movements and behaviors has become a common challenge also faced by researchers in robotics and artificial intelligence. In this talk, I will first discuss our recent work on developing efficient computational tools for simulating and controlling human movements. By learning differentiable kinematic constraints from real-world human motion data, we enable existing multi-body physics engines to simulate more humanlike motion. In a similar vein, we learn task-agnostic boundary conditions and energy functions from anatomically realistic neuromuscular models, effectively defining a new action space better reflecting the physiological constraints of the human body. The second part of the talk will focus on two different yet highly relevant problems: how to teach robots to move like humans and how to teach robots to interact with humans. While Computer Animation research has shown that it is possible to teach a virtual human to mimic real athletes' movements, the current techniques still struggle to reliably transfer a basic locomotion control policy to robot hardware in the real world. We developed a series of sim-to-real transfer methods to address the intertwined issue of system identification and policy learning for challenging locomotion tasks. Finally, I will showcase our efforts on teaching robots to physically interact with humans in the scenarios of robot-assisted dressing and walking assistance.

 

Bio: C. Karen Liu is an associate professor in the Department of Computer Science at Stanford University. She received her Ph.D. degree in Computer Science from the University of Washington. Liu's research interests are in computer graphics and robotics, including physics-based animation, character animation, optimal control, reinforcement learning, and computational biomechanics. She developed computational approaches to modeling realistic and natural human movements, learning complex control policies for humanoids and assistive robots, and advancing fundamental numerical simulation and optimal control algorithms. The algorithms and software developed in her lab have fostered interdisciplinary collaboration with researchers in robotics, computer graphics, mechanical engineering, biomechanics, neuroscience, and biology. Liu received a National Science Foundation CAREER Award, an Alfred P. Sloan Fellowship, and was named Young Innovators Under 35 by Technology Review. In 2012, Liu received the ACM SIGGRAPH Significant New Researcher Award for her contribution in the field of computer graphics.

 

October 16 (@14:30), Steve Collins, Stanford University Designing exoskeletons and prosthetic limbs that enhance human locomotor performance (video)

Exoskeletons and active prosthetic limbs could improve mobility for tens of millions of people, but two serious challenges must first be overcome: we need ways of identifying what a device should do to benefit an individual user, and we need cheap, efficient hardware that can do it. In this talk, we will describe an approach to the design of wearable robots based on versatile emulator systems and algorithms that automatically customize assistance, which we call human-in-the-loop optimization. We will discuss recent successes of the approach, including large improvements to the energy economy and speed of walking and running through optimized exoskeleton assistance. We will also discuss the design of exoskeletons that use no energy themselves yet reduce the energy cost of human walking, and ultra-efficient electroadhesive actuators that could make wearable robots substantially cheaper and more efficient.
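
As a rough sketch of human-in-the-loop optimization, the loop below treats a handful of assistance parameters (e.g. peak torque and its timing) as a vector and improves them with a simple evolutionary strategy, where each candidate is scored by measuring the wearer's metabolic cost during a walking bout. The published work used a covariance matrix adaptation strategy; a plain (mu, lambda) variant is shown here for brevity, and measure_metabolic_cost is a hypothetical hook for the respirometry measurement.

import numpy as np

def optimize_assistance(measure_metabolic_cost, x0, bounds, generations=10, popsize=8,
                        sigma=0.1, rng=None):
    """Simple (mu, lambda)-style search over assistance parameters, scored on the user."""
    rng = np.random.default_rng() if rng is None else rng
    mean = np.array(x0, dtype=float)
    lo, hi = np.array(bounds, dtype=float).T
    for _ in range(generations):
        pop = np.clip(mean + sigma * rng.standard_normal((popsize, mean.size)), lo, hi)
        costs = np.array([measure_metabolic_cost(p) for p in pop])   # one walking bout each
        elites = pop[np.argsort(costs)[: popsize // 2]]              # keep the best half
        mean = elites.mean(axis=0)                                   # recombine
    return mean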

 

Bio: Steve Collins is an Associate Professor of Mechanical Engineering at Stanford University, where he teaches courses on design and robotics and directs the Stanford Biomechatronics Laboratory. He and his team develop wearable robotic devices to improve the efficiency, speed and balance of walking and running, especially for people with disabilities such as amputation or stroke. Their primary focus is to speed and systematize the design process itself by developing and using versatile prosthesis and exoskeleton emulator hardware and algorithms for human-in-the-loop optimization. They also develop efficient autonomous devices, such as energy-efficient walking robots, ultra-low-power electroadhesive clutches and unpowered exoskeletons that reduce the energy cost of walking. More information at: biomechatronics.stanford.edu.

 

October 4, Jessy Grizzle, University of Michigan Mathematics and Learning for Agile and Dynamic Bipedal Locomotion (video)

Is it great fortune or a curse to do legged robotics on a University campus that has Maya Lin's earthen sculpture, The Wave Field? Come to the talk and find out! Our work on model-based feedback control for highly dynamic locomotion in bipedal robots will be amply illustrated through images, videos, and math. The core technical portion of the presentation is a method to overcome the obstructions imposed by high-dimensional bipedal models by embedding a stable walking motion in an attractive low-dimensional surface of the system's state space. The process begins with trajectory optimization to design an open-loop periodic walking motion of the high-dimensional model, and then adds to this solution a carefully selected set of additional open-loop trajectories of the model that steer toward the nominal motion. A drawback of trajectories is that they provide little information on how to respond to a disturbance. To address this shortcoming, Supervised Machine Learning is used to extract a low-dimensional state-variable realization of the open-loop trajectories. The periodic orbit is now an attractor of a low-dimensional state-variable model but is not attractive in the full-order system. We then use the special structure of mechanical models associated with bipedal robots to embed the low-dimensional model in the original model in such a manner that the desired walking motions are locally exponentially stable. When combined with robot vision, we hope this approach to control design will allow the full complexity of the Wave Field to be conquered. In any case, as Jovanotti points out, "Non c'è scommessa più persa di quella che non giocherò" (there is no bet more lost than the one I will never place). The speaker for one will keep trying!

 

Bio: Jessy W. Grizzle received the Ph.D. in electrical engineering from The University of Texas at Austin in 1983. He is currently a Professor of Electrical Engineering and Computer Science at the University of Michigan, where he holds the titles of the Elmer Gilbert Distinguished University Professor and the Jerry and Carol Levin Professor of Engineering. He jointly holds sixteen patents dealing with emissions reduction in passenger vehicles through improved control system design. Professor Grizzle is a Fellow of the IEEE and IFAC. He received the Paper of the Year Award from the IEEE Vehicular Technology Society in 1993, the George S. Axelby Award in 2002, the Control Systems Technology Award in 2003, the Bode Prize in 2012 and the IEEE Transactions on Control Systems Technology Outstanding Paper Award in 2014. His work on bipedal locomotion has been the object of numerous plenary lectures and has been featured on CNN, ESPN, Discovery Channel, The Economist, Wired Magazine, Discover Magazine, Scientific American and Popular Mechanics.

 

September 27, Radhika Nagpal, Harvard University Collective Intelligence, from Nature to Robots (video)

In nature, groups of thousands of individuals cooperate to create complex structure purely through local interactions -- from cells that form complex organisms, to social insects like termites that build meter-high mounds and army ants that self-assemble into bridges and nests, to the complex and mesmerizing motion of fish schools and bird flocks. What makes these systems so fascinating to scientists and engineers alike, is that even though each individual has limited ability, as a collective they achieve tremendous complexity.

 

What would it take to create our own artificial collectives of the scale and complexity that nature achieves? In this talk I will discuss four different ongoing projects that use inspiration from biological self-assembly to create robotic systems: The Kilobot Swarm, inspired by cells, the Termes robots, inspired by mound-building termites, the Eciton soft robots inspired by army ants, and the BlueSwarm project inspired by fish schools. There are many challenges for both building and programming robot swarms, and we use these systems to explore decentralized algorithms, embodied intelligence, and methods for synthesizing complex global behavior. Our theme is the same: can we create simple robots that cooperate to achieve collective complexity?

 

Bio: Radhika Nagpal is the Kavli Professor of Computer Science at Harvard University and a core faculty member of the Wyss Institute for Biologically Inspired Engineering. At Harvard, she leads the Self-organizing Systems Research Group (SSR) and her research interests span computer science, robotics, and biology. Her awards include the Microsoft New Faculty Fellowship (2005), NSF Career Award (2007), Borg Early Career Award (2010), Radcliffe Fellowship (2012), and the McDonald Mentoring Award (2015). Nagpal was named by Nature magazine as one of the top ten influential scientists and engineers of the year (Nature 10 award, Dec 2014). Nagpal is also the co-founder of an educational robotics company, ROOT Robotics (acquired by iRobot, 2019) and the author of a popular Scientific American blog article, “The Awesomest 7-year Postdoc”, about changing science culture.

 

September 20, Neville Hogan, MIT How do we do it? Studying human performance may inform robotics (video)

Humans are more dexterous than modern robots, yet we have slower actuators, communication and computation. We manage a mechanically complex body and routinely wield tools of even greater complexity. Understanding this accomplishment may point to superior robots. How do we do it? The key is our use of dynamic primitives, dynamic behaviors that emerge from neuro-mechanics without continuous central intervention. Classes include oscillations, submovements and mechanical impedances, the latter to manage physical interaction. Combined in a generalization of Norton's equivalent circuit, they enable a seamless approach to computation (signal processing) and mechanical physics (energy processing). Nonlinear mechanical impedances superimpose linearly. That enables simple 'work-arounds' for complex problems. Inverse kinematics may be eliminated, enabling redundancy management that is indifferent to 'hard' problems such as closed-chain kinematics and operation at singularities. Dynamic primitives also engender surprising limitations. Unimpaired humans cannot sustain the discreteness of rapid actions but 'default' to rhythmic performance. Conversely, rhythmicity of very slow oscillations cannot be sustained; instead performance 'breaks down' into a sequence of submovements. Moving an object with internal oscillatory dynamics (motivated by a cup of coffee), humans adjust hand mechanical impedance to improve predictability; in fact, predictability outweighs effort. Predictability also underlies movement smoothness, which accounts for the widely-reported speed-curvature constraint of human movements. When a circular constraint eliminated curvature variation, an underlying speed-curvature relation reemerged in subjects' force and motion fluctuations. When robot motion deviated from the biological speed-curvature pattern, subjects' performance was compromised. Accommodating these 'quirks' of human performance may enable successful physical human-robot collaboration.

 

Bio: Neville Hogan is Sun Jae Professor of Mechanical Engineering and Professor of Brain and Cognitive Sciences at the Massachusetts Institute of Technology. He earned a Diploma in Engineering (with distinction) from Dublin Institute of Technology and M.S., Mechanical Engineer and Ph.D. degrees from MIT. He joined MIT’s faculty in 1979 and presently Directs the Newman Laboratory for Biomechanics and Human Rehabilitation. He co-founded Interactive Motion Technologies, now part of Bionik Laboratories. His research includes robotics, motor neuroscience, and rehabilitation engineering, emphasizing the control of physical contact and dynamic interaction. Awards include: Honorary Doctorates from Delft University of Technology and Dublin Institute of Technology; the Silver Medal of the Royal Academy of Medicine in Ireland; the Henry M. Paynter Outstanding Investigator Award and the Rufus T. Oldenburger Medal from the American Society of Mechanical Engineers, Dynamic Systems and Control Division; and the Academic Career Achievement Award from the Institute of Electrical and Electronics Engineers, Engineering in Medicine and Biology Society.

 

September 13, Matthew Johnson-Roberson, University of Michigan Lessons from the field: applying machine learning to self-driving cars and underwater robots without massive amounts of human labeling (video)

Mobile robots now deliver vast amounts of sensor data from large unstructured environments. In attempting to process and interpret this data there are many unique challenges in bridging the gap between prerecorded data sets and the field. This talk will present recent work addressing the application of deep learning techniques to robotic perception. We focus on solutions to several pervasive problems when attempting to deploy such techniques on fielded robotic systems. The themes of the talk revolve around alternatives to gathering and curating data sets for training. Are there ways of avoiding the labor-intensive human labeling required for supervised learning? These questions give rise to several lines of research based around self-supervision, adversarial learning, and simulation. We will show how these approaches applied to depth estimation, object classification, motion prediction, and domain transfer problems have great potential to change the way we train, test, and validate machine learning-based systems. Real examples from self-driving car and underwater vehicle deployments will be discussed.

 

Bio: Matthew Johnson-Roberson is Associate Professor of Engineering in the Department of Naval Architecture & Marine Engineering and the Department of Electrical Engineering and Computer Science at the University of Michigan. He received a PhD from the University of Sydney in 2010. He has held prior postdoctoral appointments with the Centre for Autonomous Systems (CAS) at KTH Royal Institute of Technology in Stockholm and the Australian Centre for Field Robotics at the University of Sydney. He is a recipient of the NSF CAREER award (2015). He has worked in robotic perception since the first DARPA Grand Challenge, and his group focuses on enabling robots to better see and understand their environment.

 

 

Spring 2019 Campus-wide Robotics Seminar (sponsored by Aurora Flight Sciences, The MathWorks, and the Russell Sage Foundation)

 

May 17, Kris Hauser, Duke University Exploiting inter-problem structure in motion planning and control (video)

The ability for a robot to plan its own motions is a critical component of intelligent behavior, but it has so far proven challenging to calculate high-quality motions quickly and reliably. This limits the speed at which dynamic systems can react to changing sensor input, and makes systems less robust to uncertainty. Moreover, planning problems involving many sequential interrelated tasks, like walking on rough terrain or cleaning a kitchen, can take minutes or hours to solve. This talk will describe methods that exploit experience to solve motion planning and optimal control problems much faster than de novo methods. Unlike typical machine learning settings, the planning and optimal control setting introduces peculiar inter-problem (codimensional) similarity structures that must be exploited to obtain good generalization. This line of work has seen successful application in several domains over the years, including legged robots, dynamic vehicle navigation, multi-object manipulation, and workcell design.
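
The abstract is high level; purely as an illustrative sketch of the general "reuse experience from similar problems" idea (not Prof. Hauser's algorithms), the snippet below warm-starts a local trajectory optimizer with the solution of the nearest previously solved problem. The problem features, toy cost, and all function names are invented for the example.

```python
# Illustrative sketch only: nearest-neighbor experience retrieval to warm-start
# a local trajectory optimizer. This is NOT the speaker's method; it merely
# shows the "exploit prior solutions on similar problems" idea in the abstract.
import numpy as np
from scipy.optimize import minimize

class ExperienceDatabase:
    """Stores (problem-feature, solution-trajectory) pairs."""
    def __init__(self):
        self.features, self.solutions = [], []

    def add(self, feature, trajectory):
        self.features.append(np.asarray(feature, float))
        self.solutions.append(np.asarray(trajectory, float))

    def nearest(self, feature):
        if not self.features:
            return None
        dists = [np.linalg.norm(f - feature) for f in self.features]
        return self.solutions[int(np.argmin(dists))]

def trajectory_cost(flat_traj, start, goal, n_waypoints, obstacle, radius):
    """Toy cost: path length plus a penalty for entering a circular obstacle."""
    traj = np.vstack([start, flat_traj.reshape(n_waypoints, 2), goal])
    length = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
    penalty = np.sum(np.maximum(0.0, radius - np.linalg.norm(traj - obstacle, axis=1)) ** 2)
    return length + 100.0 * penalty

def plan(start, goal, obstacle, radius, db, n_waypoints=8):
    feature = np.hstack([start, goal, obstacle, radius])
    guess = db.nearest(feature)
    if guess is None:  # de novo: straight-line initial guess
        guess = np.linspace(start, goal, n_waypoints + 2)[1:-1].ravel()
    res = minimize(trajectory_cost, guess,
                   args=(start, goal, n_waypoints, obstacle, radius),
                   method="L-BFGS-B")
    db.add(feature, res.x)  # remember this solution for future, similar problems
    return res.x.reshape(n_waypoints, 2), res.fun

if __name__ == "__main__":
    db = ExperienceDatabase()
    start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
    # Solve a family of similar problems; later ones are warm-started.
    for ox in [5.0, 5.2, 5.4]:
        _, cost = plan(start, goal, np.array([ox, 0.0]), 1.5, db)
        print(f"obstacle x={ox:.1f}  cost={cost:.3f}")
```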

 

Bio: Kris Hauser is an Associate Professor at Duke University with a joint appointment in the Electrical and Computer Engineering Department and the Mechanical Engineering and Materials Science Department. He received his PhD in Computer Science from Stanford University in 2008, bachelor's degrees in Computer Science and Mathematics from UC Berkeley in 2003, and worked as a postdoctoral fellow at UC Berkeley. He then joined the faculty at Indiana University from 2009-2014, moved to Duke in 2014, and will begin at University of Illinois Urbana-Champaign in 2019. He is a recipient of a Stanford Graduate Fellowship, Siebel Scholar Fellowship, Best Paper Award at IEEE Humanoids 2015, and an NSF CAREER award.

 

May 16, Michael Beetz, University of Bremen Digital Twin Knowledge Bases --- Knowledge Representation and Reasoning for Robotic Agents

Robotic agents that can accomplish manipulation tasks with the competence of humans have been one of the grand research challenges of artificial intelligence (AI) and robotics for more than 50 years. However, while these fields have made huge progress over the years, this ultimate goal is still out of reach. I believe that this is the case because the knowledge representation and reasoning methods proposed in AI so far are necessary but too abstract. In this talk I propose to address this problem by endowing robots with the capability to internally emulate and simulate their perception-action loops based on realistic images and faithful physics simulations, which are made machine-understandable by casting them as virtual symbolic knowledge bases. These capabilities allow robots to generate huge collections of machine-understandable manipulation experiences, which robotic agents can generalize into commonsense and intuitive physics knowledge applicable to open varieties of manipulation tasks. The combination of learning, representation, and reasoning will equip robots with an understanding of the relation between their motions and the physical effects they cause at an unprecedented level of realism, depth, and breadth, and enable them to master human-scale manipulation tasks. This breakthrough will be achievable by combining leading-edge simulation and visual rendering technologies with mechanisms to semantically interpret and introspect internal simulation data structures and processes. Robots with such manipulation capabilities can help us to better deal with important societal, humanitarian, and economic challenges of our aging societies.

 

 

Bio: Michael Beetz is a professor of Computer Science in the Faculty of Mathematics & Informatics at the University of Bremen and head of the Institute for Artificial Intelligence (IAI). He received his diploma degree in Computer Science with distinction from the University of Kaiserslautern. His MSc, MPhil, and PhD degrees were awarded by Yale University in 1993, 1994, and 1996, and his Venia Legendi by the University of Bonn in 2000. In February 2019 he received an Honorary Doctorate from Örebro University. He was vice-coordinator of the German cluster of excellence CoTeSys (Cognition for Technical Systems, 2006-2011), coordinator of the European FP7 integrating project RoboHow (web-enabled and experience-based cognitive robots that learn complex everyday manipulation tasks, 2012-2016), and is the coordinator of the German collaborative research centre EASE (Everyday Activity Science and Engineering, since 2017). His research interests include plan-based control of robotic agents, knowledge processing and representation for robots, integrated robot learning, and cognition-enabled perception.

 

 

May 10, Luca Carlone, MIT Certifiably-Robust Spatial Perception for Robots and Autonomous Vehicles (video)

Spatial perception is concerned with the estimation of a world model, describing the state of the robot and the environment, using sensor data and prior knowledge. As such, it includes a broad set of robotics and computer vision problems, ranging from object detection and pose estimation to robot localization and mapping. Most perception algorithms require extensive and application-dependent parameter tuning and often fail in off-nominal conditions (e.g., in the presence of large noise and outliers). While many applications can afford occasional failures (e.g., AR/VR, domestic robotics) or can structure the environment to simplify perception (e.g., industrial robotics), safety-critical applications of robotics in the wild, ranging from self-driving vehicles to search & rescue, demand a new generation of algorithms.
 

In this talk I present recent advances in the design of spatial perception algorithms that are robust to extreme amounts of outliers and afford performance guarantees. I first provide a negative result, showing that a general formulation of outlier rejection is inapproximable: in the worst case, it is impossible to design an algorithm (even one “slightly slower” than polynomial time) that approximately finds the set of outliers. While it is impossible to guarantee that an algorithm will reject outliers in worst-case scenarios, our second contribution is to develop certifiably-robust spatial perception algorithms that are able to assess their performance in every given problem instance. We consider two popular spatial perception problems, Simultaneous Localization and Mapping (SLAM) and 3D registration, and present efficient algorithms that are certifiably robust to extreme amounts of outliers. As a result, we can solve registration problems where 99% of the measurements are outliers and succeed in localizing objects where an average human would fail.
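
The certifiable algorithms themselves are not reproduced here; to make the outlier-contaminated registration problem concrete, the sketch below implements only a generic robust baseline, RANSAC with a closed-form SVD (Kabsch) alignment, on synthetic correspondences. All tolerances and data are invented for the example and the method is not the one described in the talk.

```python
# Generic robust 3D point-cloud registration baseline: RANSAC + closed-form
# SVD (Kabsch) alignment. This is NOT the certifiably-robust method from the
# talk; it only illustrates registration under heavy outlier contamination.
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping P onto Q (least squares)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def ransac_register(P, Q, iters=2000, inlier_tol=0.05, rng=np.random.default_rng(0)):
    """Estimate (R, t) from correspondences P[i] <-> Q[i], many of them outliers."""
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)   # minimal sample
        R, t = kabsch(P[idx], Q[idx])
        residuals = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inliers = residuals < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return kabsch(P[best_inliers], Q[best_inliers]) + (best_inliers,)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    P = rng.uniform(-1, 1, size=(200, 3))
    angle = 0.4
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    t_true = np.array([0.3, -0.2, 0.5])
    Q = P @ R_true.T + t_true + 0.005 * rng.normal(size=P.shape)
    outliers = rng.random(len(P)) < 0.7          # corrupt 70% of correspondences
    Q[outliers] = rng.uniform(-2, 2, size=(outliers.sum(), 3))
    R, t, inliers = ransac_register(P, Q)
    print("rotation error:", np.linalg.norm(R - R_true))
    print("translation error:", np.linalg.norm(t - t_true))
    print("inliers found:", inliers.sum(), "of", (~outliers).sum(), "true inliers")
```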

 

Bio: Luca Carlone is the Charles Stark Draper Assistant Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). He received his PhD from the Polytechnic University of Turin in 2012. He joined LIDS as a postdoctoral associate (2015) and later as a Research Scientist (2016), after spending two years as a postdoctoral fellow at the Georgia Institute of Technology (2013-2015). His research interests include nonlinear estimation, numerical and distributed optimization, and probabilistic inference, applied to sensing, perception, and decision-making in single and multi-robot systems. He is a recipient of the 2017 Transactions on Robotics King-Sun Fu Memorial Best Paper Award, and the best paper award at WAFR 2016.

 

May 3, Hae Won Park, MIT Socio-Emotive Intelligence for Long-term Robot Companions: Why and How? (video)

In this talk, I’d like to engage the Robotics @ MIT community in questioning whether robots need socio-emotive intelligence. To answer this question, though, we first need to think about a new dimension for evaluating the AI algorithms and systems that we build: measuring their impact on people’s lives in real-world contexts. I will highlight a number of provocative research findings from our recent long-term deployments of social robots in schools, homes, and older adult living communities. We employ an affective reinforcement learning approach to personalize the robot’s actions to modulate each user’s engagement and maximize the interaction benefit. The robot observes users’ verbal and nonverbal affective cues to understand the user state and to receive feedback on its actions. Our results show that the interaction with a robot companion influences users’ beliefs, learning, and how they interact with others. The affective personalization boosts these effects and helps sustain long-term engagement. During our deployment studies, we observed that people treat and interact with artificial agents as social partners and catalysts. We also learned that the effect of the interaction strongly correlates with the social relational bonding the user has built with the robot. So, to answer the question “does a robot need socio-emotive intelligence,” I argue that we should only draw conclusions based on the impact it has on the people living with it: is it helping us flourish in the direction that we want to thrive?
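
The affective RL system itself is not spelled out in the abstract; as a loose illustration only, personalization of this general flavor is often framed as a bandit-style loop in which the robot picks an action, observes an engagement signal as reward, and updates per-user action values. The action set and reward function below are invented for the example and do not describe the deployed system.

```python
# Loose illustration only (not the system described in the talk): an
# epsilon-greedy bandit that personalizes a robot's action choice to a user,
# using an observed engagement score as the reward signal.
import random

ACTIONS = ["verbal_praise", "hint", "expressive_gesture", "ask_question"]  # hypothetical

class EngagementBandit:
    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.value = {a: 0.0 for a in actions}   # running mean reward per action
        self.count = {a: 0 for a in actions}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.actions)                   # explore
        return max(self.actions, key=lambda a: self.value[a])    # exploit

    def update(self, action, reward):
        self.count[action] += 1
        n = self.count[action]
        self.value[action] += (reward - self.value[action]) / n  # incremental mean

def simulated_engagement(action):
    """Stand-in for a real engagement estimate derived from affective cues."""
    base = {"verbal_praise": 0.6, "hint": 0.4,
            "expressive_gesture": 0.7, "ask_question": 0.5}[action]
    return base + random.gauss(0.0, 0.1)

if __name__ == "__main__":
    random.seed(0)
    bandit = EngagementBandit(ACTIONS)
    for _ in range(500):
        a = bandit.choose()
        bandit.update(a, simulated_engagement(a))
    print({a: round(v, 2) for a, v in bandit.value.items()})
```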

 

Bio: Hae Won Park is a Research Scientist at the MIT Media Lab and a Principal Investigator of the Social Robot Companions Program. Her research focuses on socio-emotive AI and personalization of social robots that support long-term interaction and relationships between users and their robot companions. Her work spans a range of applications including education for young children and wellbeing benefits for older adults. Her research has been published at top robotics and AI venues and has received awards for best paper (HRI 2017), innovative robot applications (ICRA 2013), and pecha-kucha presentation (ICRA 2014). Hae Won received her PhD from Georgia Tech in 2014, at which time she also co-founded Zyrobotics, an assistive education robotics startup that was recognized as the best 2015 US robotics startup by Robohub and was a finalist for the Intel Innovation Award.

 

April 26,  Sethu Vijayakumar, University of Edinburgh Shared Autonomy for Robots in Dynamic Environments: Advances in Learning Control and Representations (video)

The next generation of robots is going to work much more closely with humans and other robots and to interact significantly with the environment around them. As a result, the key paradigms are shifting from isolated decision-making systems to ones that involve shared control, with significant autonomy devolved to the robot platform and end-users in the loop making only high-level decisions.

 

This talk will briefly introduce powerful machine learning technologies, ranging from robust multi-modal sensing and shared representations to scalable real-time learning and adaptation and compliant actuation, that are enabling us to reap the benefits of increased autonomy while still feeling securely in control. This also raises some fundamental questions: while the robots are ready to share control, what is the optimal trade-off between autonomy and control that we are comfortable with?

 

Domains where this debate is relevant include unmanned space exploration, self-driving cars, offshore asset inspection & maintenance, deep-sea & autonomous mining, shared manufacturing, exoskeletons/prosthetics for rehabilitation, and smart cities, to name a few.

 

Bio: Sethu Vijayakumar is the Professor of Robotics in the School of Informatics at the University of Edinburgh and the Director of the Edinburgh Centre for Robotics. He holds the prestigious Senior Research Fellowship of the Royal Academy of Engineering, co-funded by Microsoft Research, and is also an Adjunct Faculty of the University of Southern California (USC), Los Angeles. Professor Vijayakumar, who has a PhD (1998) from the Tokyo Institute of Technology, is world renowned for the development of large-scale machine learning techniques for the real-time control of several iconic, high-degree-of-freedom anthropomorphic robotic systems including the SARCOS and HONDA ASIMO humanoid robots, the KUKA-DLR robot arm, and the iLIMB prosthetic hand. His latest project (2016) involves a collaboration with NASA Johnson Space Center on the Valkyrie humanoid robot being prepared for unmanned robotic pre-deployment missions to Mars. He is the author of over 180 highly cited publications in robotics and machine learning and the winner of the IEEE Vincent Bendix award, the Japanese Monbusho fellowship, the 2013 IEEE Transactions on Robotics Best Paper Award, and several other paper awards from leading conferences and journals. He has led several UK, EU, and international projects in the field of robotics, attracted over £38M in research grants over the last 8 years, and has been appointed to grant review panels for DFG-Germany, NSF-USA, and the EU. He is a Fellow of the Royal Society of Edinburgh and a keen science communicator with a significant annual outreach agenda. He is the recipient of the 2015 Tam Dalyell Award for excellence in engaging the public with science, serves as a judge on BBC Robot Wars, and was involved with the UK-wide launch of the BBC micro:bit initiative for STEM education. Since September 2018, he has taken on the role of Programme co-Director of The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, driving their Robotics and Autonomous Systems agenda.

 

April 19, Ross Knepper, Cornell University Formalizing Teamwork in Human-Robot Interaction (video)

Robots out in the world today work for people but not with people. Before robots can work closely with ordinary people as part of a human-robot team in a home or office setting, robots need the ability to acquire a new mix of functional and social skills. Working with people requires a shared understanding of the task, capabilities, intentions, and background knowledge. For robots to act jointly as part of a team with people, they must engage in collaborative planning, which involves forming a consensus through an exchange of information about goals, capabilities, and partial plans. Often, much of this information is conveyed through implicit communication.

 

In this talk, I formalize components of teamwork involving collaboration, communication, and representation. I illustrate how these concepts interact in the application of social navigation, which I argue is a first-class example of teamwork. In this setting, participants must avoid collision by legibly conveying intended passing sides via nonverbal cues like path shape. A topological representation using the braid groups enables the robot to reason about a small enumerable set of passing outcomes. I show how implicit communication of topological group plans achieves rapid convergence to a group consensus, and how a robot in the group can deliberately influence the ultimate outcome to maximize joint performance, yielding pedestrian comfort with the robot.

 

Bio: Ross A. Knepper is an Assistant Professor in the Department of Computer Science at Cornell University, where he directs the Robotic Personal Assistants Lab. His research focuses on the theory and algorithms of human-robot interaction in collaborative work. He builds systems to perform complex tasks where partnering a human and robot together is advantageous for both, such as factory assembly or home chores. Knepper has built robot systems that can assemble Ikea furniture, ask for help when something goes wrong, interpret informal speech and gesture commands, and navigate in a socially-competent manner among people. Before Cornell, Knepper was a Research Scientist at MIT. He received his Ph.D. in Robotics from Carnegie Mellon University in 2011.

 

April 12, Angela Schoellig, University of Toronto Machine Learning for Robotics: Safety and Performance Guarantees for Learning-Based Control (video)

The ultimate promise of robotics is to design devices that can physically interact with the world. To date, robots have been primarily deployed in highly structured and predictable environments. However, we envision the next generation of robots (ranging from self-driving and -flying vehicles to robot assistants) operating in unpredictable and generally unknown environments alongside humans. This challenges current robot algorithms, which have been largely based on a priori knowledge about the system and its environment. While research has shown that robots are able to learn new skills from experience and adapt to unknown situations, these results have been limited to learning single tasks and demonstrated in simulation or lab settings. The next challenge is to enable robot learning in real-world application scenarios. This will require versatile, data-efficient, and online learning algorithms that guarantee safety when placed in a closed-loop system architecture. It will also require answering the fundamental question of how to design learning architectures for dynamic and interactive agents. This talk will highlight our recent progress in combining learning methods with formal results from control theory. By combining models with data, our algorithms achieve adaptation to changing conditions during long-term operation, data-efficient multi-robot, multi-task transfer learning, and safe reinforcement learning. We demonstrate our algorithms in vision-based off-road driving and drone flight experiments, as well as on mobile manipulators.
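
One common concrete instance of "combining models with data" is to learn the residual between a nominal dynamics model and observed data with a Gaussian process, so that the predictive uncertainty can inform safety margins downstream. The sketch below shows only that generic idea on an invented one-dimensional system; it is not the speaker's specific algorithms, and the nominal model, "true" system, and kernel choices are all assumptions made for the example.

```python
# Generic sketch (not the speaker's algorithms): learn the residual between a
# nominal dynamics model and observed data with a Gaussian process, so the
# GP's predictive uncertainty can inform safety margins downstream.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def nominal_model(x, u):
    """Simplified prior model x_{k+1} = x + 0.1*u (assumed, for illustration)."""
    return x + 0.1 * u

def true_system(x, u, rng):
    """'Real' system with unmodeled drag and noise (stand-in for experiments)."""
    return x + 0.1 * u - 0.05 * x**3 + 0.01 * rng.normal()

rng = np.random.default_rng(0)

# Collect (state, input) -> residual data from operating the real system.
X, residuals = [], []
x = 0.0
for k in range(200):
    u = np.sin(0.05 * k)
    x_next = true_system(x, u, rng)
    residuals.append(x_next - nominal_model(x, u))
    X.append([x, u])
    x = x_next

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-4),
                              normalize_y=True)
gp.fit(np.array(X), np.array(residuals))

# Corrected prediction with an uncertainty estimate usable for safety margins.
x_query, u_query = 0.8, 0.2
mean, std = gp.predict(np.array([[x_query, u_query]]), return_std=True)
prediction = nominal_model(x_query, u_query) + mean[0]
print(f"predicted next state: {prediction:.4f} +/- {2*std[0]:.4f} (2 sigma)")
```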

 

Bio: Angela Schoellig is an Assistant Professor at the University of Toronto Institute for Aerospace Studies and an Associate Director of the Centre for Aerial Robotics Research and Education. She holds a Canada Research Chair in Machine Learning for Robotics and Control, is a principal investigator of the NSERC Canadian Robotics Network, and is a Faculty Affiliate of the Vector Institute for Artificial Intelligence. She conducts research at the intersection of robotics, controls, and machine learning. Her goal is to enhance the performance, safety, and autonomy of robots by enabling them to learn from past experiments and from each other. She is a recipient of a Sloan Research Fellowship (2017), an Ontario Early Researcher Award (2017), and a Connaught New Researcher Award (2015). She is one of MIT Technology Review’s Innovators Under 35 (2017), a Canada Science Leadership Program Fellow (2014), and one of Robohub’s “25 women in robotics you need to know about” (2013). Her team won the 2018 North-American SAE AutoDrive Challenge sponsored by General Motors. Her PhD at ETH Zurich (2013) was awarded the ETH Medal and the Dimitris N. Chorafas Foundation Award. She holds both an M.Sc. in Engineering Cybernetics from the University of Stuttgart (2008) and an M.Sc. in Engineering Science and Mechanics from the Georgia Institute of Technology (2007). More information can be found at: www.schoellig.name.

 

April 5, Matthew Mason, Carnegie Mellon University / Berkshire Grey Models of Robotic Manipulation (video)

Some of my earliest work focused on robot grasping: on decomposing the grasping process into phases, examining the contacts and motions occurring in each phase, and developing conditions under which a stable grasp might be obtained while addressing initial pose uncertainty. That was the start of my interest in the physics of manipulation, which has continued until now. The early work was inspired partly by robot programming experiences, especially applications in manufacturing automation. It is possible that materials handling is going to play a similar role, as the primary application of research in autonomous manipulation, changing our perceptions of the challenges and opportunities facing our field.

 

Bio: Matt Mason earned his PhD in Computer Science and Artificial Intelligence from MIT in 1982. He has worked in robotics for over forty years. For most of that time Matt has been a Professor of Robotics and Computer Science at Carnegie Mellon University (CMU). He was Director of the Robotics Institute from 2004 to 2014.

 

Since 2014, Matt has split his time between CMU and Berkshire Grey, where he is the Chief Scientist. Berkshire Grey is a Boston-based company that produces innovative materials-handling solutions for eCommerce and logistics.

 

Matt is a Fellow of the AAAI and the IEEE. He won the IEEE R&A Pioneer Award, and the IEEE Technical Field Award in Robotics and Automation.

 

March 22, Dimitra Panagou, University of Michigan Safety and Resilience in Multi-Agent Systems: Theory and Algorithms for Adversarially-Robust Multi-Robot Teams and Human-Robot Collaboration (video)

Planning, decision-making, and control for uncertain multi-agent systems has been a popular topic of research with numerous applications, e.g., in robotic networks operating in dynamic, unknown, or even adversarial environments, with or without the presence of humans. Despite significant progress over the years, challenges such as constraints (in terms of state and time specifications), malicious or faulty information, environmental uncertainty, and scalability are typically not treated well enough by existing methods. In the first part of this talk, I will present some of our recent results and ongoing work on safety and resilience of multi-agent systems in the presence of adversaries. I will discuss (i) our approach to achieving safe, resilient consensus in the presence of malicious information and its application to resilient leader-follower robot teams under bounded inputs, and (ii) our method for safe multi-agent motion planning and de-confliction using finite-time controllers and estimators in the presence of bounded uncertainty. In the second part of the talk, I will present (iii) our results on human-robot collaboration that involve the unsupervised, on-the-fly learning of assistive information (camera views) by teams of co-robots in human multi-tasking environments.
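
The talk's specific resilient-consensus algorithms are not reproduced here; to make "consensus in the presence of malicious information" concrete, the sketch below implements the standard W-MSR update from the resilient-consensus literature, in which each normal agent discards up to F neighbor values above its own and up to F below before averaging. The network, adversary behavior, and F are invented for the example.

```python
# Standard W-MSR resilient-consensus update from the literature (shown only to
# illustrate consensus under malicious information; not necessarily the
# algorithms in the talk). Each normal agent discards up to F neighbor values
# above its own and up to F below, then averages the rest, which tolerates up
# to F malicious neighbors under suitable network-robustness conditions.
import numpy as np

def wmsr_step(values, neighbors, malicious, F):
    """One synchronous W-MSR update; malicious agents keep their injected values."""
    new_values = values.copy()
    for i in range(len(values)):
        if i in malicious:
            continue  # adversaries update arbitrarily; here they simply persist
        nbr = sorted(values[j] for j in neighbors[i])
        smaller = [v for v in nbr if v < values[i]]
        larger = [v for v in nbr if v > values[i]]
        equal = [v for v in nbr if v == values[i]]
        kept = (smaller[min(F, len(smaller)):] + equal +
                larger[:max(len(larger) - F, 0)])
        new_values[i] = np.mean(kept + [values[i]])
    return new_values

if __name__ == "__main__":
    # Complete graph on 6 agents; agent 5 is malicious and always reports 100.
    n, F = 6, 1
    neighbors = {i: [j for j in range(n) if j != i] for i in range(n)}
    malicious = {5}
    values = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 100.0])
    for _ in range(30):
        values[5] = 100.0           # adversary keeps injecting an extreme value
        values = wmsr_step(values, neighbors, malicious, F)
    print("normal agents:", np.round(values[:5], 3))  # converge within the benign range
```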

 

Bio: Dimitra Panagou received the Diploma and PhD degrees in Mechanical Engineering from the National Technical University of Athens, Greece, in 2006 and 2012, respectively. Since September 2014 she has been an Assistant Professor with the Department of Aerospace Engineering, University of Michigan. Prior to joining the University of Michigan, she was a postdoctoral research associate with the Coordinated Science Laboratory, University of Illinois, Urbana-Champaign (2012-2014), a visiting research scholar with the GRASP Lab, University of Pennsylvania (June 2013, fall 2010) and a visiting research scholar with the University of Delaware, Mechanical Engineering Department (spring 2009).

 

Dr. Panagou's research program emphasizes the exploration, development, and implementation of control and estimation methods to address real-world problems via provably correct solutions. Her research spans the areas of nonlinear systems and control; control of multi-agent systems and networks; distributed systems and control; motion and path planning; switched and hybrid systems; constrained decision-making and control; and navigation, guidance, and control of aerospace vehicles. She is particularly interested in the development of provably correct methods for the robustly safe and secure (resilient) operation of autonomous systems in complex missions, with applications in unmanned aerial systems, robot/sensor networks, and multi-vehicle systems (ground, marine, aerial, space). Dr. Panagou is a recipient of a NASA Early Career Faculty Award and an AFOSR Young Investigator Award, and a member of the IEEE and the AIAA. More details: http://www-personal.umich.edu/~dpanagou/research/index.html

March 8, Dieter Fox, University of Washington / NVIDIA Toward robust manipulation in complex scenarios (video)

Over the last few years, advances in deep learning and GPU-based computing have enabled significant progress in several areas of robotics, including visual recognition, real-time tracking, object manipulation, and learning-based control. This progress has turned applications such as autonomous driving and delivery tasks in warehouses, hospitals, or hotels into realistic application scenarios. However, robust manipulation in complex settings is still an open research problem. Various research efforts show promising results on individual pieces of the manipulation puzzle, including manipulator control, touch sensing, object pose detection, task and motion planning, and object pickup. In this talk, I will present our recent efforts in integrating such components into a complete manipulation system. Specifically, I will describe a mobile robot manipulator that moves through a kitchen, can open and close cabinet doors and drawers, detect and pick up objects, and move these objects to desired locations. Our baseline system is designed to be applicable in a wide variety of environments, relying only on 3D articulated models of the kitchen and the relevant objects. I will discuss the design choices behind our approach, the lessons we learned so far, and various research directions toward enabling more robust and general manipulation systems.

 

Bio: Dieter Fox is Senior Director of Robotics Research at NVIDIA. He is also a Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. Dieter obtained his Ph.D. from the University of Bonn, Germany. His research is in robotics and artificial intelligence, with a focus on state estimation and perception applied to problems such as mapping, object detection and tracking, manipulation, and activity recognition. He has published more than 200 technical papers and is the co-author of the textbook “Probabilistic Robotics”. He is a Fellow of the IEEE and the AAAI, and he received several best paper awards at major robotics, AI, and computer vision conferences. He was an editor of the IEEE Transactions on Robotics, program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and program chair of the 2013 Robotics: Science and Systems conference.

 

Fall 2018 Campus-wide Robotics Seminar (sponsored by Aurora Flight Sciences, The MathWorks, and the Russell Sage Foundation)

 

December 4, Kevin Lynch, Northwestern University Motion Planning and Control for Robot and Human Manipulation (video)

In this talk I will describe our progress on motion planning and control for two very different manipulation problems: (1) dexterous manipulation by robots and (2) control of arm neuroprosthetics for humans with spinal cord injuries.

 

The first part of the talk will focus on manipulation modes commonly used by humans but mostly avoided by robots, such as rolling, sliding, pushing, pivoting, tapping, and in-hand manipulation. These manipulation modes exploit controlled motion of the object relative to the manipulator to increase dexterity.

 

In the second part of the talk I will describe control of a functional electrical stimulation neuroprosthetic for the human arm. The goal of the project is to allow people with high spinal cord injury to recover the use of their arms for activities of daily living. Beginning with traditional methods for system identification and control of robot arms, I will describe how we have adapted the approach to identification and control of an electrically stimulated human arm.
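
The identification details are the subject of the talk; as background only, the "traditional" step alluded to above often amounts to a linear least-squares fit of an input-output (ARX-style) model from excitation data. The sketch below illustrates that generic step on synthetic data; the dynamics, coefficients, and signals are invented and are not a model of the stimulated arm.

```python
# Background illustration only (synthetic data, invented dynamics): fitting a
# simple ARX input-output model y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k-1] by
# linear least squares, the kind of "traditional system identification" step
# the abstract alludes to before adapting it to the stimulated human arm.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a second-order response (stand-in for joint angle under stimulation).
a1_true, a2_true, b_true = 1.5, -0.7, 0.4
N = 400
u = rng.normal(size=N)                      # excitation input signal
y = np.zeros(N)
for k in range(2, N):
    y[k] = a1_true*y[k-1] + a2_true*y[k-2] + b_true*u[k-1] + 0.01*rng.normal()

# Stack the regression problem y[k] = [y[k-1], y[k-2], u[k-1]] @ theta.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
target = y[2:]
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
print("estimated [a1, a2, b]:", np.round(theta, 3), " true:", [a1_true, a2_true, b_true])
```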

 

Bio: Kevin Lynch is Professor and Chair of the Mechanical Engineering Department at Northwestern University. He is a member of the Neuroscience and Robotics Lab (nxr.northwestern.edu) and the Northwestern Institute on Complex Systems (nico.northwestern.edu). His research focuses on dynamics, motion planning, and control for robot manipulation and locomotion; self-organizing multi-agent systems; and functional electrical stimulation for restoration of human function. Dr. Lynch is Editor-in-Chief of the IEEE Transactions on Robotics. He is co-author of the textbooks "Modern Robotics: Mechanics, Planning, and Control" (Cambridge University Press, 2017, http://modernrobotics.org), "Embedded Computing and Mechatronics" (Elsevier, 2015, http://nu32.org), and "Principles of Robot Motion" (MIT Press, 2005). He is the recipient of Northwestern's Professorship of Teaching Excellence and the Northwestern Teacher of the Year award in engineering. He earned a BSE in electrical engineering from Princeton University and a PhD in robotics from Carnegie Mellon University.

 

November 27, Christoph Keplinger, University of Colorado HASEL Artificial Muscles—Versatile High-Performance Actuators for a New Generation of Life-like Robots (video, CSAIL only)

Robots today rely on rigid components and electric motors based on metal and magnets, making them heavy, unsafe near humans, expensive and ill-suited for unpredictable environments. Nature, in contrast, makes extensive use of soft materials and has produced organisms that drastically outperform robots in terms of agility, dexterity, and adaptability. The Keplinger Lab aims to fundamentally challenge current limitations of robotic hardware, using an interdisciplinary approach that synergizes concepts from soft matter physics and chemistry with advanced engineering technologies to introduce intelligent materials systems for a new generation of life-like robots. One major theme of research is the development of new classes of actuators – a key component of all robotic systems – that replicate the sweeping success of biological muscle, a masterpiece of evolution featuring astonishing all-around actuation performance, the ability to self-heal after damage, and seamless integration with sensing.

 

This talk is focused on the lab's recently introduced HASEL artificial muscle technology. Hydraulically Amplified Self-healing ELectrostatic (HASEL) transducers are a new class of self-sensing, high-performance muscle-mimetic actuators, which are electrically driven and harness a mechanism that couples electrostatic and hydraulic forces to achieve a wide variety of actuation modes. Current designs of HASEL are capable of exceeding actuation stress of 0.3 MPa, linear strain of 100%, specific power of 600 W/kg, full-cycle electromechanical efficiency of 30%, and bandwidth of over 100 Hz; all these metrics match or exceed the capabilities of biological muscle. Additionally, HASEL actuators can repeatedly and autonomously self-heal after electric breakdown, thereby enabling robust performance. Further, this talk introduces a facile fabrication technique that uses an inexpensive CNC heat-sealing device to rapidly prototype HASELs. New designs of HASEL incorporate mechanisms to greatly reduce operating voltages, enabling the use of lightweight and portable electronics packages to drive untethered soft robotic devices powered by HASELs. Modeling results predict the impact of material parameters and scaling laws of these actuators, laying out a roadmap towards future HASEL actuators with drastically improved performance. These results highlight opportunities to further develop HASEL artificial muscles for wide use in next-generation robots that replicate the vast capabilities of biological systems.

 

Bio: Christoph Keplinger is an Assistant Professor of Mechanical Engineering and a Fellow of the Materials Science and Engineering Program at the University of Colorado Boulder, where he also holds an endowed appointment serving as Mollenkopf Faculty Fellow. Building upon his background in soft matter physics (PhD, JKU Linz), mechanics and chemistry (Postdoc, Harvard University), he leads a highly interdisciplinary research group at Boulder, with a current focus on (I) soft, muscle-mimetic actuators and sensors, (II) energy harvesting and (III) functional polymers. His work has been published in top journals including Science, Science Robotics, PNAS, Advanced Materials and Nature Chemistry, as well as highlighted in popular outlets such as National Geographic. He has received prestigious US awards such as a 2017 Packard Fellowship for Science and Engineering, and international awards such as the 2013 EAP Promising European Researcher Award from the European Scientific Network for Artificial Muscles. He is the principal inventor of HASEL artificial muscles, a new technology that will help enable a next generation of life-like robotic hardware; in 2018 he co-founded Artimus Robotics to commercialize the HASEL technology.

 

November 20, Julie Walker, Stanford University Handheld Kinesthetic Devices and Reinforcement Learning for Haptic Guidance (video)

Screens, headphones, and now virtual and augmented reality headsets can provide people with instructions and guidance through visual and auditory feedback. Yet those senses are often overloaded, motivating the display of information through the sense of touch. Haptic devices, which display forces, vibrations, or other touch cues to a user’s hands or body, can be private, intuitive, and leave the other senses free. In this talk, I will discuss several novel hand-held haptic devices that provide clear, directional, kinesthetic cues while allowing the user to move through a large workspace. Using these devices, we study the anisotropies and variability in human touch perception and movement. Using modeling and reinforcement learning techniques, haptic devices can adapt to the user’s responses and provide effective guidance and intuitive touch interactions. These devices have applications in medical guidance and training, navigation, sports, and entertainment. Such holdable devices could enable haptic interfaces to become as prevalent and impactful in our daily lives as visual or audio interfaces.

 

Bio: Julie Walker is a Ph.D. Candidate in Mechanical Engineering at Stanford University. She is a member of the Collaborative Haptics and Robotics in Medicine Lab, led by Professor Allison Okamura. She received a master's degree from Stanford University and a bachelor's degree in Mechanical Engineering from Rice University. She has worked in haptics and human-robot interaction research since 2012, studying haptic feedback for prosthetic hands, robotic surgery, and teleoperation. Her Ph.D. thesis work focuses on haptic guidance through novel handheld devices, particularly for medical applications. She has received an NSF Graduate Research Fellowship and a Chateaubriand Fellowship.

 

November 13, Noah Jafferis, Harvard University Insect-scale mechanisms: from flying robots to piezoelectric fans (video - CSAIL only)

In recent years, there has been heightened interest in developing sub-gram hovering vehicles, in part for their predicted high maneuverability (based on the relative scaling of torques and inertias). In this regime, the efficiency of electromagnetic motors drops substantially, and piezoelectrics are generally the actuator of choice. These typically operate in an oscillatory mode, which is well matched with flapping wings. However, at such a small size, integrating on-board power and electronics is quite challenging (particularly given the high voltages required for piezoelectrics), and such vehicles have thus been limited to flying tethered to an off-board power supply and control system. In this talk, I will discuss recent advances in the Harvard RoboBee to overcome these challenges, including non-linear resonance modeling, improved manufacturing, and multi-wing designs.

 

I will also discuss fabrication of an alternative mechanism for converting piezoelectric vibration to airflow. This is of interest as a low-profile fan for CPU cooling, a growing issue as electronic devices pack increasing power consumption (and thus heat) into smaller spaces. Additionally, a thruster based on this technology could achieve higher thrust-per-area and speed than flapping wings or propellers (at the expense of efficiency). Its extremely modular nature is also attractive in such an application.

 

When we operate robots near resonance, particularly with very non-linear systems and/or multiple mechanically interacting actuators, control can be extremely challenging. In these scenarios, knowledge of the instantaneous deflections or velocities of each actuator is crucial. Toward this end, I will describe our work on monitoring the actuators’ current to obtain accurate velocity data regardless of external loading, without the need for any additional sensors.
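
The estimation details are not given in the abstract; one common lumped electromechanical model for a piezoelectric actuator splits the measured drive current into a capacitive term and a motion-induced term, i = C dV/dt + k v, so that velocity can be recovered as v = (i - C dV/dt) / k once C and k are calibrated. The sketch below implements only that generic model with made-up parameters; it is not the authors' method.

```python
# Generic self-sensing sketch (NOT the authors' method; made-up parameters):
# assume a lumped piezo model i = C*dV/dt + k*v, so the actuator velocity can
# be estimated from the measured drive current once C and k are calibrated.
import numpy as np

C = 5e-9      # assumed actuator capacitance [F]
k = 2e-4      # assumed electromechanical coupling [C/m] (charge per displacement)
dt = 1e-5     # sample time [s]

t = np.arange(0, 0.01, dt)
V = 100.0 * np.sin(2*np.pi*150*t)                 # drive voltage
v_true = 0.02 * np.sin(2*np.pi*150*t + 0.6)       # "true" tip velocity [m/s]
i_meas = (C*np.gradient(V, dt) + k*v_true
          + 1e-7*np.random.default_rng(0).normal(size=t.size))  # noisy current

# Subtract the capacitive current and divide by the coupling to recover velocity.
v_est = (i_meas - C*np.gradient(V, dt)) / k
print("RMS velocity estimation error [m/s]:", np.sqrt(np.mean((v_est - v_true)**2)))
```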

 

Bio: Noah T. Jafferis obtained his PhD in the Electrical Engineering Department at Princeton University in 2012, and is currently a Postdoctoral Research Associate in Harvard University's Microrobotics Lab. Noah was home-schooled until entering Yale University at the age of 16, where he received his B.S. in Electrical Engineering in 2005. At Princeton, Noah's research included printing silicon from nanoparticle suspensions and the development of a "flying carpet" (traveling wave based propulsion of a thin plastic sheet). His current research at Harvard includes nonlinear resonance modeling, scaling, and system optimization for flapping-wing vehicles; piezoelectric actuators and motors (manufacturing and modeling for optimal power density, efficiency, and lifetime); a fan/thruster using piezoelectrically actuated peristaltic pumping; solar power for autonomous operation of insect-scale robots; and self-sensing actuation. Some of his many research interests include micro/nano-robotics, bio-inspired engineering, 3D integrated circuits, MEMS/NEMS, piezoelectrics, 3D printing, energy harvesting, and large-area/flexible electronics.

 

November 6, Leslie Kaelbling, MIT Doing for our robots what evolution did for us (video)

We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in "the factory" (that is, at engineering time) and in "the wild" (that is, when the robot is delivered to a customer). I will share some general thoughts about the strategies for robot design and then talk in detail about some work I have been involved in, both in the design of an overall architecture for an intelligent robot and in strategies for learning to integrate new skills into the repertoire of an already competent robot.

 

Joint work with: Tomas Lozano-Perez, Zi Wang, Caelan Garrett and a fearless group of summer robot students

 

Bio: Leslie is a Professor at MIT. She has an undergraduate degree in Philosophy and a PhD in Computer Science from Stanford, and was previously on the faculty at Brown University. She was the founding editor-in-chief of the Journal of Machine Learning Research. Her research agenda is to make intelligent robots using methods including estimation, learning, planning, and reasoning. She is not a robot.

 

October 30, Thomas Funkhouser, Princeton/Google 3D Scene Understanding with a RGB-D Camera (video)

Three-dimensional scene understanding is important for computer systems that respond to and/or interact with the physical world, such as robotic manipulation and autonomous navigation. For example, they may need to estimate the 3D geometry of the surrounding space (e.g., in order to navigate without collisions) and/or to recognize the semantic categories of nearby objects (e.g., in order to interact with them appropriately). In this talk, I will describe recent work on 3D scene understanding by the 3D Vision Group at Princeton University. I will focus on three projects that infer 3D structural and semantic models of scenes from partial observations with an RGB-D camera. The first learns to infer depth (D) from color (RGB) in regions where the depth sensor provides no return (e.g., because surfaces are shiny or far away). The second learns to predict the 3D structure and semantics within volumes of space occluded from view (e.g., behind a table). The third learns to infer the 3D structure and semantics of the entire surrounding environment (i.e., inferring an annotated 360-degree panorama from a single image). For each project, I will discuss the problem formulation, scene representation, network architecture, dataset curation, and potential applications.
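
To make the first project's problem setting (regressing dense depth from color plus incomplete depth) concrete, the sketch below trains a toy fully-convolutional network on fake data with a masked L1 loss. The architecture, loss, and data are invented for the example and bear no relation to the Princeton group's actual models.

```python
# Toy depth-completion sketch (architecture and data are invented; this is not
# the Princeton group's model): a small fully-convolutional network that takes
# RGB plus a sparse depth channel and regresses a dense depth map, trained with
# an L1 loss only where ground-truth depth is available.
import torch
import torch.nn as nn

class TinyDepthCompletion(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # input: RGB + sparse depth
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),              # output: dense depth
        )

    def forward(self, rgb, sparse_depth):
        return self.net(torch.cat([rgb, sparse_depth], dim=1))

if __name__ == "__main__":
    model = TinyDepthCompletion()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Fake batch: 2 images, 64x64; ground-truth depth with a random sparse mask.
    rgb = torch.rand(2, 3, 64, 64)
    gt_depth = torch.rand(2, 1, 64, 64) * 5.0
    mask = (torch.rand(2, 1, 64, 64) < 0.05).float()     # ~5% of pixels have depth
    sparse_depth = gt_depth * mask

    for step in range(5):                                 # a few illustrative steps
        pred = model(rgb, sparse_depth)
        valid = (gt_depth > 0).float()
        loss = ((pred - gt_depth).abs() * valid).sum() / valid.sum().clamp(min=1)
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(f"step {step}: L1 loss = {loss.item():.4f}")
```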

 

This is joint work with Angel X. Chang, Angela Dai, Kyle Genova, Maciej Halber, Matthias Niessner, Shuran Song, Fisher Yu, Andy Zeng, and Yinda Zhang.

 

Bio: Thomas Funkhouser is the David M. Siegel Professor of Computer Science at Princeton University. He received a PhD in computer science from UC Berkeley in 1993 and was a member of the technical staff at Bell Labs until 1997 before joining the faculty at Princeton. For most of his career, he focused on research problems in computer graphics, including foundational work on 3D shape retrieval, analysis, and modeling. His most recent research has focused on 3D scene understanding in computer vision and robotics. He has published more than 100 research papers and received several awards, including an ACM SIGGRAPH Computer Graphics Achievement Award, ACM SIGGRAPH Academy membership, an NSF CAREER Award, a Sloan Foundation Fellowship, the Emerson Electric, E. Lawrence Keyes Faculty Advancement Award, and University Council Excellence in Teaching Awards.

 

October 23, Byron Boots, Georgia Tech Machine Learning for Robot Perception, Planning, and Control (video)

The main goal of this talk is to illustrate how machine learning can start to address some of the fundamental perceptual and control challenges involved in building intelligent robots. I’ll discuss how to learn dynamics models for planning and control, how to use imitation to efficiently learn deep policies directly from sensor data, and how policies can be parameterized with task-relevant structure. I’ll show how some of these ideas have been applied to a new high speed autonomous “AutoRally” platform built at Georgia Tech and an off-road racing task that requires impressive sensing, speed, and agility to complete. Along the way, I’ll show how theoretical insights from reinforcement learning, imitation learning, and online learning help us to overcome practical challenges involved in learning on real-world platforms. I will conclude by discussing ongoing work in my lab related to machine learning for robotics.
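
As a minimal sketch of one listed ingredient, "learn a dynamics model and plan with it," the snippet below fits a linear model by least squares from random-exploration data and queries it with a random-shooting planner on a toy double integrator. This is not the AutoRally pipeline; the dynamics, horizon, and cost are all invented for the example.

```python
# Minimal sketch of "learn a dynamics model, then plan with it": a linear
# least-squares model plus random-shooting control on a toy double integrator.
# This is NOT the AutoRally pipeline; it only illustrates the general idea.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1

def true_dynamics(x, u):
    pos, vel = x
    return np.array([pos + dt*vel, vel + dt*u + 0.01*rng.normal()])

# 1) Collect random-exploration data and fit x' = A x + B u by least squares.
X, U, Xn = [], [], []
x = np.zeros(2)
for _ in range(500):
    u = rng.uniform(-1, 1)
    xn = true_dynamics(x, u)
    X.append(x); U.append([u]); Xn.append(xn)
    x = xn if np.all(np.abs(xn) < 10) else np.zeros(2)
Z = np.hstack([np.array(X), np.array(U)])            # [x, u] regressors
W, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None) # Xn ~= Z @ W
A, B = W[:2].T, W[2:].T                              # so x' ~= A x + B u

# 2) Random-shooting planner: sample action sequences, roll out the learned
#    model, and execute the first action of the lowest-cost sequence.
def plan(x0, goal, horizon=15, samples=300):
    seqs = rng.uniform(-1, 1, size=(samples, horizon))
    costs = np.zeros(samples)
    for s in range(samples):
        x = x0.copy()
        for u in seqs[s]:
            x = A @ x + B @ np.array([u])
            costs[s] += np.sum((x - goal)**2) + 0.01*u**2
    return seqs[np.argmin(costs), 0]

goal = np.array([1.0, 0.0])
x = np.zeros(2)
for _ in range(50):
    x = true_dynamics(x, plan(x, goal))
print("final state:", np.round(x, 3), " goal:", goal)
```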

 

Bio: Byron Boots is an Assistant Professor of Interactive Computing in the College of Computing at the Georgia Institute of Technology. He concurrently holds an adjunct appointment in the School of Electrical and Computer Engineering at Georgia Tech and a Visiting Faculty appointment at Nvidia Research. Byron received his M.S. and Ph.D. in Machine Learning from Carnegie Mellon University and was a postdoctoral scholar in Computer Science and Engineering at the University of Washington. He joined Georgia Tech in 2014, where he founded the Georgia Tech Robot Learning Lab, affiliated with the Center for Machine Learning and the Institute for Robotics and Intelligent Machines. Byron is the recipient of several awards including Best Paper at ICML, Best Paper at AISTATS, Best Paper Finalist at ICRA, Best Systems Paper Finalist at RSS, and the NSF CAREER award. His main research interests are in theory and systems that tightly integrate perception, learning, and control.

 

October 16, Rob Wood, Harvard University The Mechanical Side of Artificial Intelligence (video - no audio)

Artificial Intelligence typically focuses on perception, learning, and control methods to enable autonomous robots to make and act on decisions in real environments. In contrast, our research focuses on the design, mechanics, materials, and manufacturing of novel robot platforms that make the perception, control, or action easier or more robust for natural, unstructured, and often unpredictable environments. Key principles in this pursuit include bioinspired designs, smart materials for novel sensors and actuators, and the development of multi-scale, multi-material manufacturing methods. This talk will illustrate this philosophy by highlighting the creation of two unique classes of robots: soft-bodied autonomous robots and highly agile aerial and terrestrial robotic insects.

 

Bio: Robert Wood is the Charles River Professor of Engineering and Applied Sciences in the Harvard John A. Paulson School of Engineering and Applied Sciences, a founding core faculty member of the Wyss Institute for Biologically Inspired Engineering and a National Geographic Explorer. Prof. Wood completed his M.S. and Ph.D. degrees in the Dept. of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He is the winner of multiple awards for his work including the DARPA Young Faculty Award, NSF Career Award, ONR Young Investigator Award, Air Force Young Investigator Award, Technology Review's TR35, and multiple best paper awards. In 2010 Wood received the Presidential Early Career Award for Scientists and Engineers from President Obama for his work in microrobotics. In 2012 he was selected for the Alan T. Waterman award, the National Science Foundation's most prestigious early career award. In 2014 he was named one of National Geographic's "Emerging Explorers". Wood's group is also dedicated to STEM education by using novel robots to motivate young students to pursue careers in science and engineering.

 

September 25, Matei Ciocarlie, Columbia University How to Make, Sense, and Make Sense of Contact in Robotic Manipulation (video - intermittent audio for the first ~20 minutes)

Dexterous manipulation is a key open problem for many new robotic applications, owing in great measure to the difficulty of dealing with transient contact. From an analytical standpoint, intermittent frictional contact (the essence of manipulation) is difficult to model, as it gives rise to non-convex problems with no known efficient solvers. Contact is also difficult to sense, particularly with sensors integrated in a mechanical package that must also be compact, highly articulated, and appropriately actuated (i.e., a robot hand). Articulation and actuation present their own challenges: a dexterous hand comes with a high-dimensional posture space, difficult to design, actuate, and control. In this talk, I will present our work addressing these challenges: analytical models of grasp stability (with realistic energy dissipation constraints), design and use of sensors (tactile and proprioceptive) for manipulation, and hand posture subspaces (for design optimization and teleoperation). These are stepping stones towards achieving versatile robotic manipulation, needed by applications as diverse as logistics, manufacturing, disaster response, and space robots.
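
"Hand posture subspaces" commonly refers to low-dimensional linear subspaces (eigengrasp-style synergies) extracted from recorded joint angles with PCA; the sketch below shows only that generic construction on synthetic data, not the speaker's specific formulation, and the joint count and synergy data are invented.

```python
# Generic illustration of a hand posture subspace (eigengrasp-style): run PCA
# on recorded joint-angle vectors and keep the first few components, so a
# high-DOF hand can be commanded with 2-3 synergy coordinates. Synthetic data;
# not the speaker's specific formulation.
import numpy as np

rng = np.random.default_rng(0)
n_joints, n_samples = 20, 1000

# Fake "recorded grasps": joint angles generated from 2 hidden synergies + noise.
synergies_true = rng.normal(size=(2, n_joints))
coeffs = rng.normal(size=(n_samples, 2))
postures = coeffs @ synergies_true + 0.05 * rng.normal(size=(n_samples, n_joints))

# PCA via SVD of the mean-centered data.
mean_posture = postures.mean(axis=0)
U, S, Vt = np.linalg.svd(postures - mean_posture, full_matrices=False)
explained = S**2 / np.sum(S**2)
print("variance explained by first 3 components:", np.round(explained[:3], 3))

# A new posture can now be commanded with just k synergy coordinates.
k = 2
def synthesize(z):
    """Map k-dimensional synergy coordinates to a full joint-angle vector."""
    return mean_posture + z @ Vt[:k]

print("example posture from z=[1.0, -0.5]:",
      np.round(synthesize(np.array([1.0, -0.5]))[:5], 2), "...")
```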

 

Bio: Matei Ciocarlie is an Associate Professor of Mechanical Engineering at Columbia University. His current work focuses on robot motor control, mechanism and sensor design, planning and learning, all aiming to demonstrate complex motor skills such as dexterous manipulation. Matei completed his Ph.D. at Columbia University in New York; before joining the faculty at Columbia, he was a Research Scientist and Group Manager at Willow Garage, Inc., a privately funded Silicon Valley robotics research lab, and then a Senior Research Scientist at Google, Inc. In recognition of his work, Matei has been awarded the Early Career Award by the IEEE Robotics and Automation Society, a Young Investigator Award by the Office of Naval Research, a CAREER Award by the National Science Foundation, and a Sloan Research Fellowship by the Alfred P. Sloan Foundation.

 

Spring 2018 Campus-wide Robotics Seminar (sponsored by Aurora Flight Sciences, The MathWorks, and the Russell Sage Foundation)

May 29, Kenneth Salisbury, Stanford Who Doesn't Want Another Arm? (video, CSAIL only)

Our laboratory has been developing wearable robot arms. They are designed to augment your capabilities and dexterity through physical cooperation. These "Third Arms" are typically waist-mounted and designed to work in the volume directly in front of you, cooperating with your arms' actions. We are not developing fast or strong robots; rather, we focus on the interaction design issues and the variety of task opportunities that arise. Putting a robot directly in your personal space enables new ways for human-robot cooperation. Ours are designed to contact the environment with all their surfaces. This enables "whole-arm manipulation" as well as end-effector-based actions. How should you communicate with such a robot? How do you teach it physical tasks? Can it learn by observing things you do and anticipate helpful actions? In this talk I will describe our work on wearables and discuss the design process leading to current embodiments. Be prepared to tell me what your favorite 3rd arm task is!!

Bio: Professor Salisbury received his Ph.D. from Stanford in 1982. That fall he arrived at MIT for a one-year post-doc. He says he was having so much fun that he ended up spending the next 16 years at the 'tute. He then spent four years at Intuitive Surgical helping develop the first-gen da Vinci robot. He then returned to Stanford to become a Professor in the Departments of Computer Science and Surgery. He and his students have been responsible for a number of seminal technologies including the Salisbury Hands, the PHANToM Haptic Interface, the MIT WAM/Barrett Arm, the da Vinci Haptic Interface, the Silver Falcon Medical Robot, implicit surface and polyhedral haptic rendering techniques, the JPL Force Reflecting Hand Controller, and other devices. Kenneth is inventor or co-inventor on over 50 patents on robotics, haptics, sensors, rendering, UI, and other topics. His current research interests include arm design, active physical perception, and high-fidelity haptics. In his spare time, he plays the flute a lot and makes things.

May 1, David Lentink, Stanford Avian Inspired Design (video, CSAIL only)

Many organisms fly in order to survive and reproduce. My lab focuses on understanding bird flight to improve flying robots—because birds fly further, longer, and more reliably in complex visual and wind environments. I use this multidisciplinary lens that integrates biomechanics, aerodynamics, and robotics to advance our understanding of the evolution of flight more generally across birds, bats, insects, and autorotating seeds. The development of flying organisms as individuals and their evolution as species are shaped by the physical interaction between organism and surrounding air. The organism’s architecture is tuned for propelling itself and controlling its motion. Flying animals and plants maximize performance by generating and manipulating vortices. These vortices are created close to the body as it is driven by the action of muscles or gravity, then are ‘shed’ to form a wake (a trackway left behind in the fluid). I study how the organism’s architecture is tuned to utilize these and other aeromechanical principles, and compare the function of bird wings to that of bat, insect, and maple seed wings. The experimental approaches range from making robotic models to training birds to fly in a custom-designed wind tunnel as well as in visual flight arenas—and inventing methods to 3D scan birds and measure the aerodynamic force they generate—nonintrusively—with a novel aerodynamic force platform. The studies reveal that animals and plants have converged upon the same solution for generating high lift: a strong vortex that runs parallel to the leading edge of the wing, which sucks it upward. Why this vortex remains stably attached to flapping animal and spinning plant wings is elucidated and linked to kinematics and wing morphology. While wing morphology is quite rigid in insects and maple seeds, it is extremely fluid in birds. I will show how such ‘wing morphing’ significantly expands the performance envelope of birds during flight, and will dissect the mechanisms that enable birds to morph better than any aircraft can. Finally, I will show how these findings have inspired my students to design new flapping and morphing aerial robots.

Bio: Professor Lentink's multidisciplinary lab studies how birds fly to develop better flying robots—integrating biomechanics, fluid mechanics, and robot design. He has a BS and MS in Aerospace Engineering (Aerodynamics, Delft University of Technology) and a PhD in Experimental Zoology cum laude (Wageningen University). During his PhD he visited the California Institute of Technology for 9 months to study insect flight. His postdoctoral training at Harvard was focused on studying bird flight. Publications range from technical journals to cover publications in Nature and Science. He is an alumnus of the Young Academy of the Royal Netherlands Academy of Arts and Sciences, a recipient of the Dutch Academic Year Prize and the NSF CAREER award, he has been recognized in 2013 as one of 40 scientists under 40 by the World Economic Forum, and he is the inaugural winner of the Steven Vogel Young Investigator Award from the journal Bioinspiration & Biomimetics for early career brilliance.

April 24, Mykel Kochenderfer, Stanford Building Trust in Decision Support Systems for Aerospace (video - audio starts ~2min 20sec, CSAIL only)

Starting in the 1970s, decades of effort went into building human-designed rules for providing automatic maneuver guidance to pilots to avoid mid-air collisions. The resulting system was later mandated worldwide on all large aircraft and significantly improved the safety of the airspace. Recent work has investigated the feasibility of using computational techniques to help derive optimized decision logic that better handles various sources of uncertainty and balances competing system objectives. This approach has resulted in a system called Airborne Collision Avoidance System (ACAS) X that significantly reduces the risk of mid-air collision while also reducing the alert rate, and it is in the process of becoming the next international standard. Using ACAS X as a case study, this talk will discuss lessons learned about building trust in advanced decision support systems. This talk will also outline research challenges in facilitating greater levels of automation into safety critical systems.
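
ACAS X derives its logic offline by solving a probabilistic model with dynamic programming; the sketch below is only generic value iteration on a tiny made-up MDP, included to show the flavor of "computing optimized decision logic" rather than anything resembling the actual ACAS X model, whose states, transitions, and rewards are far richer.

```python
# Generic value iteration on a tiny made-up MDP, to show the flavor of deriving
# optimized decision logic offline (this is in no way the ACAS X model).
import numpy as np

n_states, n_actions, gamma = 5, 2, 0.95
rng = np.random.default_rng(0)

# Random transition model T[s, a, s'] and a reward that heavily penalizes
# state 0 ("conflict") and slightly penalizes action 1 ("alert").
T = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = np.zeros((n_states, n_actions))
R[0, :] = -100.0
R[:, 1] -= 1.0

V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * T @ V        # Q[s,a] = R[s,a] + gamma * sum_s' T[s,a,s'] V[s']
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)
print("optimal action per state:", policy)
print("state values:", np.round(V, 2))
```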

 

Bio: Mykel Kochenderfer is Assistant Professor of Aeronautics and Astronautics at Stanford University. He is the director of the Stanford Intelligent Systems Laboratory (SISL), conducting research on advanced algorithms and analytical methods for the design of robust decision making systems. Of particular interest are systems for air traffic control, unmanned aircraft, and other aerospace applications where decisions must be made in uncertain, dynamic environments while maintaining safety and efficiency. Prior to joining the faculty, he was at MIT Lincoln Laboratory where he worked on airspace modeling and aircraft collision avoidance, with his early work leading to the establishment of the ACAS X program. He received a Ph.D. from the University of Edinburgh and B.S. and M.S. degrees in computer science from Stanford University.  He is the author of "Decision Making under Uncertainty: Theory and Application" from MIT Press. He is a third generation pilot.

April 10, Hanu Singh, Northeastern One fish, two fish: The role of Robotics in Fisheries Stock Assessment (video, CSAIL only)

Fisheries stocks have been decimated around the world. In this talk we examine the role of robotics in helping us assess fish stocks. Unlike traditional methods, robotics holds the promise of yielding assessment methods that do not rely on actually catching fish. The challenges of robotic fish counting, however, include those associated with imaging underwater, fish avoidance and attraction, and the role of camouflage in fish predator-prey interactions. This talk looks at how we are coming to grips with these issues and how the insights gained in tackling these problems have spilled over into other areas of robotics, including edge cases in autonomous driving.

 

Bio: Hanumant Singh is a Professor at Northeastern University where he is also the Director of the multidisciplinary Center for Robotics at NU. He received his Ph.D. from the MIT WHOI Joint Program in 1995 after which he worked on the Staff at WHOI until 2016 when he joined Northeastern. His group designed and built the Seabed AUV, as well as the Jetyak Autonomous Surface Vehicle dozens of which are in use for scientific and other purposes across the globe. He has participated in 60 expeditions in all of the world's oceans in support of Marine Geology, Marine Biology, Deep Water Archaeology, Chemical Oceanography, Polar Studies, and Coral Reef Ecology.

 

April 3, Geoff Hollinger, Oregon State Marine Robotics: Planning, Decision Making, and Learning (video)

Underwater gliders, propeller-driven submersibles, and other marine robots are increasingly being tasked with gathering information (e.g., in environmental monitoring, offshore inspection, and coastal surveillance scenarios). However, in most of these scenarios, human operators must carefully plan the mission to ensure completion of the task. Strict human oversight not only makes such deployments expensive and time consuming but also makes some tasks impossible due to the requirement for heavy cognitive loads or reliable communication between the operator and the vehicle. We can mitigate these limitations by making the robotic information gatherers semi-autonomous, where the human provides high-level input to the system and the vehicle fills in the details on how to execute the plan. In this talk, I will show how a general framework that unifies information theoretic optimization and physical motion planning makes semi-autonomous information gathering feasible in marine environments. I will leverage techniques from stochastic motion planning, adaptive decision making, and deep learning to provide scalable solutions in a diverse set of applications such as underwater inspection, ocean search, and ecological monitoring. The techniques discussed here make it possible for autonomous marine robots to “go where no one has gone before,” allowing for information gathering in environments previously outside the reach of human divers.
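
The unifying framework itself is not given in the abstract; the toy sketch below (all names, models, and numbers are illustrative assumptions, not the speaker's method) only conveys the flavor of information-theoretic waypoint selection, greedily picking the measurement locations that most reduce uncertainty over a field of interest.

```python
# Toy sketch of greedy informative waypoint selection (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20)), -1).reshape(-1, 2)
variance = np.ones(len(grid))                    # prior uncertainty at each grid cell
candidates = rng.uniform(0, 1, size=(30, 2))     # candidate measurement locations

def apply_measurement(loc, variance, length=0.15):
    # A measurement shrinks uncertainty of nearby cells (squared-exponential weighting).
    w = np.exp(-np.sum((grid - loc) ** 2, axis=1) / (2 * length ** 2))
    gain = np.sum(variance * w)                  # expected uncertainty reduction
    return gain, variance * (1.0 - 0.9 * w)

plan = []
for _ in range(5):                               # choose 5 waypoints greedily
    gains = [apply_measurement(c, variance)[0] for c in candidates]
    best = int(np.argmax(gains))
    plan.append(candidates[best])
    _, variance = apply_measurement(candidates[best], variance)

print("planned waypoints:\n", np.array(plan))
```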

 

Bio: Geoff Hollinger is an Assistant Professor in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute at Oregon State University. His current research interests are in adaptive information gathering, distributed coordination, and learning for autonomous robotic systems. He has previously held research positions at the University of Southern California, Intel Research Pittsburgh, University of Pennsylvania’s GRASP Laboratory, and NASA's Marshall Space Flight Center. He received his Ph.D. (2010) and M.S. (2007) in Robotics from Carnegie Mellon University and his B.S. in General Engineering along with his B.A. in Philosophy from Swarthmore College (2005). He is a recipient of the 2017 Office of Naval Research Young Investigator Program (YIP) award.

 

March 20, Michael Beetz, University of Bremen Everyday Activity Science and Engineering (EASE) (video)

Recently we have witnessed the first robotic agents performing everyday manipulation activities such as loading a dishwasher and setting a table. While these agents successfully accomplish specific instances of these tasks, they only perform them within the narrow range of conditions for which they have been carefully designed. They are still far from achieving the human ability to autonomously perform a wide range of everyday tasks reliably in a wide range of contexts. In other words, they are far from mastering everyday activities. Mastering everyday activities is an important step for robots to become the competent (co-)workers, assistants, and companions who are widely considered a necessity for dealing with the enormous challenges our aging society is facing.

We propose Everyday Activity Science and Engineering (EASE), a fundamental research endeavour to investigate the cognitive information processing principles employed by humans to master everyday activities and to transfer the obtained insights to models for autonomous control of robotic agents. The aim of EASE is to boost the robustness, efficiency, and flexibility of various information processing subtasks necessary to master everyday activities by uncovering and exploiting the structures within these tasks.

Everyday activities are by definition mundane, mostly stereotypical, and performed regularly. The core research hypothesis of EASE is that robots can achieve mastery by exploiting the nature of everyday activities. We intend to investigate this hypothesis by focusing on two core principles: The first principle is narrative-enabled episodic memories (NEEMs), which are data structures that enable robotic agents to draw knowledge from a large body of observations, experiences, or descriptions of activities. The NEEMs are used to find representations that can exploit the structure of activities by transferring tasks into problem spaces that are computationally easier to handle than the original spaces. These representations are termed pragmatic everyday activity manifolds (PEAMs), analogous to the concept of manifolds as low-dimensional local representations in mathematics. The exploitation of PEAMs should enable agents to achieve the desired task performance while preserving computational feasibility.

The vision behind EASE is a cognition-enabled robot capable of performing human-scale everyday manipulation tasks in the open world based on high-level instructions and mastering them.

 

Bio: Michael Beetz is a professor of Computer Science at the Faculty of Mathematics & Informatics of the University of Bremen and head of the Institute for Artificial Intelligence (IAI). IAI investigates AI-based control methods for robotic agents, with a focus on human-scale everyday manipulation tasks. With openEASE, a web-based knowledge service providing robot and human activity data, Michael Beetz aims to improve interoperability in robotics and lower the barriers to robot programming. To this end, the IAI group provides most of its results as open-source software, primarily in the ROS software library.

Michael Beetz received his diploma degree in Computer Science with distinction from the University of Kaiserslautern. His MSc, MPhil, and PhD degrees were awarded by Yale University in 1993, 1994, and 1996, and his Venia Legendi by the University of Bonn in 2000. Michael Beetz was a member of the steering committee of the European network of excellence in AI planning (PLANET) and coordinated the research area “robot planning”. He is an associate editor of the AI Journal. His research interests include plan-based control of robotic agents, knowledge processing and representation for robots, integrated robot learning, and cognitive perception.

 

March 6, Marco Pavone, Stanford Planning and Decision Making for Autonomous Spacecraft and Space Robots (video)

In this talk I will present planning and decision-making techniques for safely and efficiently maneuvering autonomous aerospace vehicles during proximity operations, manipulation tasks, and surface locomotion. I will first address the "spacecraft motion planning problem," by discussing its unique aspects and presenting recent results on planning under uncertainty via Monte Carlo sampling. I will then turn the discussion to higher-level decision making; in particular, I will discuss an axiomatic theory of risk and how one can leverage such a theory for a principled and tractable inclusion of risk-awareness in robotic decision making, in the context of Markov decision processes and reinforcement learning. Throughout the talk, I will highlight a variety of space-robotic applications my research group is contributing to (including the Mars 2020 and Hedgehog rovers, and the Astrobee free-flying robot), as well as applications to the automotive and UAV domains.

This work is in collaboration with NASA JPL, NASA Ames, NASA Goddard, and MIT.

 

Bio: Dr. Marco Pavone is an Assistant Professor of Aeronautics and Astronautics at Stanford University, where he is the Director of the Autonomous Systems Laboratory and Co-Director of the Center for Automotive Research at Stanford. Before joining Stanford, he was a Research Technologist within the Robotics Section at the NASA Jet Propulsion Laboratory. He received a Ph.D. degree in Aeronautics and Astronautics from the Massachusetts Institute of Technology in 2010. His main research interests are in the development of methodologies for the analysis, design, and control of autonomous systems, with an emphasis on autonomous aerospace vehicles and large-scale robotic networks. He is a recipient of a Presidential Early Career Award for Scientists and Engineers, an ONR YIP Award, an NSF CAREER Award, a NASA Early Career Faculty Award, a Hellman Faculty Scholar Award, and was named NASA NIAC Fellow in 2011. His work has been recognized with best paper nominations or awards at the Field and Service Robotics Conference, at the Robotics: Science and Systems Conference, and at NASA symposia.

 

Fall 2017 Campus-wide Robotics Seminar (sponsored by Aurora Flight Sciences, The MathWorks, and the Russell Sage Foundation) (11am-noon in 32-G449)

December 15, Sanjay Krishnan, UC Berkeley Dirty Data, Robotics, and Artificial Intelligence

Large training datasets have revolutionized AI research, but enabling similar breakthroughs in other fields, such as Robotics, requires a new understanding of how to acquire, clean, and structure emergent forms of large-scale, unstructured sequential data. My talk presents a systematic approach to handling such dirty data in the context of modern AI applications. I start by introducing a statistical formalization of data cleaning in this setting, including research on: (1) how common data cleaning operations affect model training, (2) how data cleaning programs can be expected to generalize to unseen data, and (3) how to prioritize limited human intervention in rapidly growing datasets. Then, using surgical robotics as a motivating example, I present a series of robust Bayesian models for automatically extracting hierarchical structure from highly varied and noisy robot trajectory data, facilitating imitation learning and reinforcement learning on short, consistent subproblems. I present how the combination of clean training data and structured learning tasks enables learning highly accurate control policies in tasks ranging from surgical cutting to debridement.

 

Bio: Sanjay Krishnan is a Computer Science PhD candidate in the RISELab and in the Berkeley Laboratory for Automation Science and Engineering at UC Berkeley. His research studies problems at the intersection of database theory, machine learning, and robotics. Sanjay's work has received a number of awards, including the 2016 SIGMOD Best Demonstration award, the 2015 IEEE GHTC Best Paper award, and the Sage Scholar award.

 

December 5, Cynthia Breazeal, MIT Toward Personal Robots and Daily Life (** POSTPONED TO THE SPRING **)

In this informal talk, I will present an overview of the research program of the Personal Robots Group at the Media Lab. In particular, I will highlight a number of projects where we are developing, fielding, and assessing social robots over repeated encounters with people in real-world environments such as homes, schools, and hospitals. We develop adaptive algorithmic capabilities for the robot to support sustained interpersonal engagement and personalization in support of specific interventions. We then examine the impact of the robot’s social embodiment, non-verbal and emotive expression, and personalization on sustaining engagement, learning, behavior, and attitudes. I will also touch on the commercialization of social robots as a mass consumer product and developer platform, with opportunities to support research in long-term human-robot interaction in the context of daily life. At a time when people are beginning to live with intelligent machines, we have the opportunity to explore, develop, and assess humanistic design principles that support and promote human flourishing at all ages and stages.

 

Bio: Dr. Cynthia Breazeal is an Associate Professor at MIT where she founded and directs the Personal Robots Group at the Media Lab. She is also founder and Chief Scientist of Jibo, Inc. Dr. Breazeal is recognized as a pioneer of Social Robotics and Human-Robot Interaction. She is also internationally recognized as an award-winning innovator, designer, and entrepreneur, with honors including Technology Review’s TR35 Award, TIME magazine’s Best Inventions, and the National Design Awards in Communication.

 

November 28, Gaurav S. Sukhatme, USC Robots at Sea (video, no audio)

Underwater robotics is undergoing a transformation. Advances in AI and machine learning are enabling a new generation of underwater robots to make intelligent decisions (where to sample? how to navigate?) by reasoning about their environment (what is the shipping and water forecast?). At USC, we are engaged in a long-term effort to explore ideas and develop algorithms that will lead to persistent, autonomous underwater robots. In this talk, I will discuss some of our recent results focusing on two problems in adaptive sampling: underwater change detection and biological sampling. Time permitting, I will also present our work on hazard avoidance, allowing underwater robots to operate in regions where there is substantial ship traffic.

 

Bio: Gaurav S. Sukhatme is the Fletcher Jones Professor of Computer Science and Electrical Engineering at the University of Southern California (USC). He currently serves as the Executive Vice Dean of the USC Viterbi School of Engineering. His research is in networked robots with applications to aquatic robots and on-body networks. Sukhatme has published extensively in these areas and served as PI on numerous federal grants. He is a Fellow of the IEEE and a recipient of the NSF CAREER award and the Okawa Foundation research award. He is one of the founders of the RSS conference, serves on the RSS Foundation Board, and has served as program chair of three major robotics conferences (ICRA, IROS, and RSS). He is the Editor-in-Chief of the Springer journal Autonomous Robots.

 

November 14, Rob MacCurdy, MIT Multicellular Machines: A Bio-inspired approach to automated electromechanical design and fabrication (video)

Designing and building robots is a labor-intensive process that requires experts at all stages. This reality is due, in part, to the fact that the robot design-space is unbounded. To address this issue, I have borrowed a simple but powerful design concept from multi-cellular organisms: the regular tiling of a relatively small number of individual cell types yields assemblies with spectacular functional capacity. This capability comes at the cost of substantial complexity in design synthesis and assembly, which nature has addressed via evolutionary search and developmental processes. I will describe my application of these ideas to electromechanical systems, which has led to the development of electro-mechanical “cell” types, automated assembly methods, and design synthesis tools. The inspiration for this work comes from ongoing collaborations with Ecologists and Evolutionary Biologists. As part of this effort I have developed wildlife monitoring tools that provide unprecedented volumes of data, enabling previously intractable scientific studies of small organisms. Sensor mass, which is dominated by energy-storage, is the primary constraint for these applications, and I will discuss a time-of-arrival tracking system that is 3 orders of magnitude more energy-efficient than equivalent position tracking methods.

 

Bio: Dr. Robert MacCurdy is a Postdoctoral Associate with Daniela Rus at MIT and will be an assistant professor at the University of Colorado Boulder in January 2018. He is developing new methods to automatically design and manufacture robots. As part of this work, he developed an additive manufacturing process, Printable Hydraulics, that incorporates liquids into 3D-printed parts as they are built, allowing hydraulically-actuated robots to be automatically fabricated. Rob did his PhD work with Hod Lipson at Cornell University where he developed materials and methods to automatically design and build electromechanical systems using additive manufacturing and digital materials. Funded by an NSF graduate research fellowship and a Liebmann Fund fellowship, this work demonstrated systems capable of automatically assembling functional electromechanical devices, with the goal of printing robots that literally walk out of the printer. Rob is also committed to developing research tools that automate the study and conservation of wildlife, work that he began while working as a research engineer at Cornell’s Lab of Ornithology. He holds a B.A. in Physics from Ithaca College, a B.S. in Electrical Engineering from Cornell University, and an M.S. and PhD in Mechanical Engineering from Cornell University.

 

November 7, Charlie Kemp, Georgia Tech Mobile Manipulators for Intelligent Physical Assistance (video)

Since founding the Healthcare Robotics Lab at Georgia Tech 10 years ago, my research has focused on developing mobile manipulators for intelligent physical assistance. Mobile manipulators are mobile robots with the ability to physically manipulate their surroundings. They offer a number of distinct capabilities compared to other forms of robotic assistance, including being able to operate independently from the user, being appropriate for users with diverse needs, and being able to assist with a wide variety of tasks, such as object retrieval, hygiene, and feeding. My lab has worked with hundreds of representative end users - including older adults, nurses, and people with severe motor impairments - to better understand the challenges and opportunities associated with this technology. In my talk, I will provide evidence for the following assertions: 1) many people will be open to assistance from mobile manipulators; 2) assistive mobile manipulation at home is feasible for people with profound motor impairments using off-the-shelf computer access devices; and 3) permitting contact and intelligently controlling forces increases the effectiveness of mobile manipulators. I will conclude with a brief overview of some of our most recent research.

 

Bio: Charles C. Kemp (Charlie) is an Associate Professor at the Georgia Institute of Technology in the Department of Biomedical Engineering with adjunct appointments in the School of Interactive Computing and the School of Electrical and Computer Engineering. He earned a doctorate in Electrical Engineering and Computer Science (2005), an MEng, and BS from MIT. In 2007, he joined the faculty at Georgia Tech where he directs the Healthcare Robotics Lab ( http://healthcare-robotics.com ). He is an active member of Georgia Tech’s Institute for Robotics & Intelligent Machines (IRIM) and its multidisciplinary Robotics Ph.D. program. He has received a 3M Non-tenured Faculty Award, the Georgia Tech Research Corporation Robotics Award, a Google Faculty Research Award, and an NSF CAREER award. He was a Hesburgh Award Teaching Fellow in 2017. His research has been covered extensively by the popular media, including the New York Times, Technology Review, ABC, and CNN.

 

October 31, Jeff Mahler, UC Berkeley The Dexterity Network: Deep Learning to Plan Robust Robot Grasps using Datasets of Synthetic Point Clouds, Analytic Grasp Metrics, and 3D Object Models (video)

 

Reliable robot grasping across a wide variety of objects is challenging due to imprecision in sensing, which leads to uncertainty about properties such as object shape, pose, mass, and friction. Recent results suggest that deep learning from millions of labeled grasps and images can be used to rapidly plan successful grasps across a diverse set of objects without explicit inference of physical properties, but training typically requires tedious hand-labeling or months of execution time. In this talk I present the Dexterity Network (Dex-Net), a framework to automatically synthesize training datasets containing millions of point clouds and robot grasps labeled with robustness to perturbations by analyzing contact models across thousands of 3D object CAD models. I will describe generative models for datasets of both parallel-jaw and suction-cup grasps. Experiments suggest that Convolutional Neural Networks trained from scratch on Dex-Net datasets can be used to plan grasps for novel objects in clutter with high precision on a physical robot.
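
The abstract describes the pipeline at a high level; the minimal sketch below is a hypothetical stand-in (not the Dex-Net architecture or code) for how a grasp-quality network might score depth-image crops of candidate grasps and pick the most robust one. All layer sizes and inputs are assumptions.

```python
# Hypothetical grasp-quality CNN sketch (inspired by, but not, Dex-Net).
import torch
import torch.nn as nn

class GraspQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 32x32 depth crops centered and aligned on each candidate grasp.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 8 * 8 + 1, 128), nn.ReLU(),
            nn.Linear(128, 1),               # logit of grasp robustness
        )

    def forward(self, depth_crop, grasp_depth):
        f = self.features(depth_crop).flatten(1)
        return self.head(torch.cat([f, grasp_depth], dim=1))

net = GraspQualityNet()
crops = torch.randn(64, 1, 32, 32)   # stand-in for rendered depth-image crops
depths = torch.rand(64, 1)           # stand-in for gripper depth feature
with torch.no_grad():
    robustness = torch.sigmoid(net(crops, depths)).squeeze(1)
best_grasp = int(torch.argmax(robustness))   # execute the highest-ranked candidate
```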

 

Bio: Jeff Mahler is a Ph.D. student at the University of California at Berkeley advised by Prof. Ken Goldberg and a member of the AUTOLAB and the Berkeley Artificial Intelligence Research Lab. His current research is on the Dexterity Network (Dex-Net), a project that aims to train robot grasping policies from massive synthetic datasets of labeled point clouds and grasps generated using stochastic contact analysis across thousands of 3D object CAD models. He has also studied deep learning from demonstration and control for surgical robots. He received the National Defense Science and Engineering Fellowship in 2015 and cofounded the 3D scanning startup Lynx Laboratories in 2012 as an undergraduate at the University of Texas at Austin.

 

October 17, Hadas Kress-Gazit, Cornell University Synthesis for robots: guarantees and feedback for complex behaviors (video)

Getting a robot to perform a complex task, for example completing the DARPA Robotics Challenge, typically requires a team of engineers who program the robot in a time-consuming and error-prone process and who validate the resulting robot behavior through testing in different environments. The vision of synthesis for robotics is to bypass the manual programming and testing cycle by enabling users to provide specifications – what the robot should do – and automatically generating, from the specification, robot control that provides guarantees for the robot’s behavior.

In this talk I will describe the work done in my group towards realizing the synthesis vision. I will discuss what it means to provide guarantees for physical robots, types of feedback we can generate, specification formalisms that we use and our approach to synthesis for different robotic systems such as modular robots and multi robot systems.
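
As a concrete, purely illustrative example (not taken from the talk) of the kind of specification such synthesis consumes, a patrol-and-react task could be written in linear temporal logic as

$$ \square\lozenge\,\mathrm{room}_1 \;\wedge\; \square\lozenge\,\mathrm{room}_2 \;\wedge\; \square\big(\mathrm{person\_detected} \rightarrow \lozenge\,\mathrm{alert}\big), $$

i.e., "always eventually visit room 1 and room 2, and whenever a person is detected, eventually raise an alert"; synthesis then produces a controller guaranteed to satisfy the formula, or reports that no such controller exists.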

Bio: Hadas Kress-Gazit is an Associate Professor at the Sibley School of Mechanical and Aerospace Engineering at Cornell University. She received her Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania in 2008 and has been at Cornell since 2009. Her research focuses on formal methods for robotics and automation and more specifically on synthesis for robotics - automatically creating verifiable robot controllers for complex high-level tasks. Her group explores different types of robotic systems including modular robots, soft robots and swarms and synthesizes (pun intended) ideas from different communities such as robotics, formal methods, control, hybrid systems and computational linguistics. She received an NSF CAREER award in 2010, a DARPA Young Faculty Award in 2012 and the Fiona Ip Li '78 and Donald Li '75 Excellence in teaching award in 2013. She lives in Ithaca with her partner and two kids.

 

October 3, Spring Berman, Arizona State University A Control and Estimation Framework for Robotic Swarms in Uncertain Environments (video - no audio)

Robotic “swarms” comprising tens to thousands of robots have the potential to greatly reduce human workload and risk to human life. In many scenarios, the robots will lack global localization, prior data about the environment, and reliable communication, and they will be restricted to local sensing and signaling.  We are developing a rigorous control and estimation framework for swarms that are subject to these constraints. This framework will enable swarms to operate largely autonomously, with user input consisting only of high-level directives. In this talk, I describe our work on various aspects of the framework, including scalable strategies for coverage, mapping, scalar field estimation, and cooperative manipulation. We use stochastic and deterministic models from chemical reaction network theory and fluid dynamics to describe the robots’ roles, state transitions, and motion at both the microscopic (individual) and macroscopic (population) levels. We also employ techniques from algebraic topology, nonlinear control theory, and optimization, and we model analogous behaviors in ant colonies to identify robot controllers that yield similarly robust performance. We are validating our framework on small mobile robots, called “Pheeno,” that we have designed to be low-cost, customizable platforms for multi-robot research and education.
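
As one hedged illustration of the macroscopic population models mentioned above (a textbook-style example, not the specific models from the talk), a swarm whose robots switch between a searching state S and a gripping state G at rates k_1 and k_2 can be described by the mean-field rate equations

$$ \frac{dx_S}{dt} = -k_1 x_S + k_2 x_G, \qquad \frac{dx_G}{dt} = k_1 x_S - k_2 x_G, $$

where x_S and x_G are the population fractions in each state; the microscopic counterpart is a Markov chain in which each individual robot switches states with the corresponding probabilities per unit time.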

Bio: Spring Berman is an assistant professor of Mechanical and Aerospace Engineering at Arizona State University (ASU), where she directs the Autonomous Collective Systems (ACS) Laboratory. She received the B.S.E. degree in Mechanical and Aerospace Engineering from Princeton University in 2005 and the Ph.D. degree in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania (GRASP Laboratory) in 2010. From 2010 to 2012, she was a postdoctoral researcher in Computer Science at Harvard University. Her research focuses on controlling swarms of resource-limited robots with stochastic behaviors to reliably perform collective tasks in realistic environments. She was a recipient of the 2014 DARPA Young Faculty Award and the 2016 ONR Young Investigator Award. She currently serves as the associate director of the newly established ASU Center for Human, Artificial Intelligence, and Robotic Teaming.

 

September 19, Marc Toussaint, University of Stuttgart Sufficient Symbols to Make Optimization-Based Manipulation Planning Tractable (video)

Not only is combined Task and Motion Planning a hard problem, it also relies on an appropriate symbolic representation to describe the task level, and finding that representation is perhaps an even more fundamental problem. I will first briefly report on our work that considered symbol learning and, relatedly, manipulation skill learning on its own. However, I now believe that the appropriate abstractions depend on their use in higher-level planning. I will introduce Logic-Geometric Programming as a framework in which the role of the symbolic level is to make optimization over complex manipulation paths tractable, much as enumerating categorical aspects of an otherwise infeasible problem helps elsewhere, for example enumerating homotopy classes in path planning or local optima in general optimization. I will then report on recent results obtained with this framework for combined task and motion planning and human-robot cooperative manipulation planning.

 

Bio: Marc Toussaint is currently a visiting scholar at CSAIL, until summer 2018. He has been a full professor for Machine Learning and Robotics at the University of Stuttgart since 2012. Before that, he was an assistant professor leading an Emmy Noether research group at FU & TU Berlin. His research focuses on the combination of decision theory and machine learning, motivated by fundamental research questions in robotics. Specific interests include combining geometry, logic, and probabilities in learning and reasoning, and appropriate representations and priors for real-world manipulation learning.

 

Spring 2017 Campus-wide Robotics Seminar (sponsored by Aurora Flight Sciences, The MathWorks, and the Russell Sage Foundation) (11am-noon in 32-G449)

May 16, Aaron Courville, Université de Montréal Adversarial learning for generative models and inference (video)

Generative Adversarial Networks (GANs) pose the learning of a generative model as an adversarial game between a discriminator, trained to distinguish true and generated samples, and a generator, trained to try to fool the discriminator. Since their introduction in 2014, GANs have been the subject of a surge of research activity, due to their ability to produce realistic samples of highly structured data such as natural images.

In this talk I will present a brief introduction to Generative Adversarial Networks (GANs), and discuss some of our recent work in improving the stability of training of GAN models. I will also describe our recent work on adversarially learned inference (ALI), which jointly learns a generation network and an inference network using a GAN-like adversarial process. In ALI, the generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network is trained to distinguish between joint latent/data-space samples from the generative network and joint samples from the inference network.
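
The following is a minimal, illustrative sketch of the ALI training signal on toy vector data; all architectures, dimensions, and hyperparameters are assumptions rather than the authors' implementation.

```python
# Minimal ALI-style sketch: a discriminator over joint (x, z) pairs.
import torch
import torch.nn as nn

x_dim, z_dim = 16, 4
G = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))       # generator: z -> x
E = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, z_dim))       # inference net: x -> z
D = nn.Sequential(nn.Linear(x_dim + z_dim, 64), nn.ReLU(), nn.Linear(64, 1))   # joint discriminator

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
opt_ge = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=1e-4)

x_real = torch.randn(128, x_dim)     # stand-in for a minibatch of data
z_prior = torch.randn(128, z_dim)    # samples from the latent prior

# Discriminator step: tell (x_real, E(x_real)) apart from (G(z), z).
d_enc = D(torch.cat([x_real, E(x_real)], dim=1))
d_gen = D(torch.cat([G(z_prior), z_prior], dim=1))
loss_d = bce(d_enc, torch.ones_like(d_enc)) + bce(d_gen, torch.zeros_like(d_gen))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator/encoder step: make the two joint distributions indistinguishable.
d_enc = D(torch.cat([x_real, E(x_real)], dim=1))
d_gen = D(torch.cat([G(z_prior), z_prior], dim=1))
loss_ge = bce(d_enc, torch.zeros_like(d_enc)) + bce(d_gen, torch.ones_like(d_gen))
opt_ge.zero_grad(); loss_ge.backward(); opt_ge.step()
```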

 

Bio: Aaron Courville is an Assistant Professor in the Department of Computer Science and Operations Research (DIRO) at the University of Montreal, and a member of MILA, the Montreal Institute for Learning Algorithms. His research interests focus on deep learning, generative models, and applications such as multimodal data, vision, and dialogue. He also recently co-wrote a textbook on "Deep Learning". His work has been recognized with a Focused Research Award from Google and by other research sponsors including Microsoft, Intel, Samsung, and IBM.

 

May 9, Michael Posa, MIT Thesis Defense: Optimization for Control and Planning of Multi-contact Dynamic Motion (video)

Whether a robot is assisting a person to move about the home, or packing containers in a warehouse, the fundamental promise of robotics centers on the ability to productively interact with a complex and changing environment in a safe and controlled fashion. However, current robots are largely limited to basic tasks in structured environments--operating slowly and cautiously, afraid of any incidental contact with the outside world. Dynamic interaction, encompassing both legged locomotion and manipulation, poses significant challenges to traditional control and planning techniques. Discontinuities from impact events and dry friction make standard tools poorly suited in scenarios with complex or uncertain contacts between robot and environment. I will present approaches that leverage the interplay between numerical optimization and the mathematical structure of contact dynamics to avoid the combinatorial complexity of mode enumeration. This will include a tractable algorithm for trajectory optimization, without an a priori encoding of the contact sequence, and an approach utilizing sums-of-squares programming to design and provably verify controllers that stabilize systems making and breaking contact.
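
In rough outline (a sketch of the general idea, not the thesis formulation), contact-implicit trajectory optimization avoids enumerating contact modes by treating contact forces as decision variables subject to complementarity constraints at every knot point k,

$$ \phi(q_k) \ge 0, \qquad \lambda_k \ge 0, \qquad \phi(q_k)\,\lambda_k = 0, $$

so the optimizer may apply a contact force λ_k only when the signed distance φ(q_k) to the contact surface is zero, and the contact sequence emerges from the optimization rather than being specified in advance.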

 

May 2, Robert Platt, Northeastern University Robotic Manipulation Without Geometric Models (video)

Most approaches to planning for robotic manipulation take a geometric description of the world and the objects in it as input. Unfortunately, despite successes in SLAM, estimating the geometry of the world from sensor data can be challenging. This is particularly true in open world scenarios where we have little prior information about the geometry or appearance of the objects to be handled. This is a problem because even small modelling errors can cause a grasp or manipulation operation to fail. In this talk, I will describe some recent work on approaches to robotic manipulation that eschew geometric models. Our recent results show that these methods excel on manipulation tasks involving novel objects presented in dense clutter.

 

Bio: Dr. Robert Platt is an Assistant Professor of Computer Science at Northeastern University. Prior to coming to Northeastern, he was a Research Scientist at MIT and a technical lead at NASA Johnson Space Center, where he helped develop the control and autonomy subsystems for Robonaut 2, the first humanoid robot in space.

 

Apr 25, Jonathan Clark, Florida State University Dynamics, Design, and Control of Legged Robots that Rapidly Run and Climb (video, CSAIL only)

Finely tuned robotic limb systems that explicitly exploit their body’s natural dynamics have begun to rival the most accomplished biological systems on specific performance criteria, such as speed over smooth terrain. The earliest successful robot implementations, however, used only very specialized designs with a very limited number of active degrees of freedom. While more flexible, higher degree-of-freedom designs have been around for some time, they have usually been restricted to comparatively slow speeds or the manipulation of lightweight objects. The design of fast, dynamic multi-purpose robots has been stymied by the limitations of available mechanical actuators and the complexity of designing and controlling these systems. This talk will describe recent efforts to understand how to effectively design robotic limbs to enable dynamic motions in multiple modalities, specifically high-speed running on horizontal and vertical surfaces.

 

Bio: Jonathan Clark received his BS in Mechanical Engineering from Brigham Young University and his MS and PhD from Stanford University. Dr. Clark worked as an IC Postdoctoral Fellow at the GRASP lab at the University of Pennsylvania, and is currently an associate professor at the FAMU/FSU College of Engineering in the Department of Mechanical Engineering. During Dr. Clark’s career he has worked on a wide range of dynamic legged robotic systems including the Sprawl and RHex families of running robots, as well as the world’s first dynamic and fastest legged climbing robot, Dynoclimber. In 2014, he received an NSF CAREER award for work on rotational dynamics for improved legged locomotion. His recent work has involved the development of multi-modal robots that can operate in varied terrain by running, climbing and flying. He currently serves as the associate director of the Center of Intelligent Systems, Control, and Robotics (CISCOR) and the director of the STRIDe lab.

 

Apr 18, Julie Shah, MIT Enhancing Human Capability with Intelligent Machine Teammates

Every team has top performers -- people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they cannot often tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can’t explain how. In this talk I share recent work investigating effective ways to blend the unique decision-making strengths of humans and machines. I discuss the development of computational models that enable machines to efficiently infer the mental state of human teammates and thereby collaborate with people in richer, more flexible ways. Our studies demonstrate statistically significant improvements in people’s performance on military, healthcare and manufacturing tasks, when aided by intelligent machine teammates.

 

Bio: Julie Shah is an Associate Professor of Aeronautics and Astronautics at MIT and director of the Interactive Robotics Group, which aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. As a current fellow of Harvard University's Radcliffe Institute for Advanced Study, she is expanding the use of human cognitive models for artificial intelligence. She has translated her work to manufacturing assembly lines, healthcare applications, transportation and defense. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. Prof. Shah has been recognized by the National Science Foundation with a Faculty Early Career Development (CAREER) award and by MIT Technology Review on its 35 Innovators Under 35 list. Her work on industrial human-robot collaboration was also in Technology Review’s 2013 list of 10 Breakthrough Technologies. She has received international recognition in the form of best paper awards and nominations from the ACM/IEEE International Conference on Human-Robot Interaction, the American Institute of Aeronautics and Astronautics, the Human Factors and Ergonomics Society, the International Conference on Automated Planning and Scheduling, and the International Symposium on Robotics. She earned degrees in aeronautics and astronautics and in autonomous systems from MIT.

 

Apr 11, Sonia Chernova, Georgia Tech Toward Resilient Robot Autonomy through Learning, Interaction and Semantic Reasoning (video)

Robotics is undergoing an exciting transition from factory automation to the deployment of autonomous systems in less structured environments, such as warehouses, hospitals and homes. One of the critical barriers to the wider adoption of autonomous robotic systems in the wild is the challenge of achieving reliable autonomy in complex and changing human environments.  In this talk, I will discuss ways in which innovations in learning from demonstration and remote access technologies can be used to develop and deploy autonomous robotic systems alongside and in collaboration with human partners.  I will present applications of this research paradigm to robot learning, object manipulation and semantic reasoning, as well as explore some exciting avenues for future research in this area.

 

Bio: Sonia Chernova is the Catherine M. and James E. Allchin Early-Career Assistant Professor in the School of Interactive Computing at Georgia Tech, where she directs the Robot Autonomy and Interactive Learning research lab.  She received B.S. and Ph.D. degrees in Computer Science from Carnegie Mellon University, and held positions as a Postdoctoral Associate at the MIT Media Lab and as Assistant Professor at Worcester Polytechnic Institute prior to joining Georgia Tech in August 2015.  Prof. Chernova’s research focuses on developing robots that are able to effectively operate in human environments. Her work spans robotics and artificial intelligence, including semantic reasoning, adjustable autonomy, human computation and cloud robotics. She is the recipient of the NSF CAREER, ONR Young Investigator and NASA Early Career Faculty awards.

 

April 4, Jon Kelly, University of Toronto Getting More from What You've Already Got: Improving Stereo Visual Odometry Using Deep Visual Illumination Estimation (video, CSAIL only)

Visual navigation is essential for many successful robotics applications. In particular, visual odometry (VO), an incremental dead-reckoning technique, has been widely employed on many platforms, including the Mars Exploration Rovers and the Mars Science Laboratory. However, a drawback of this visual motion estimation approach is that it exhibits superlinear growth in positioning error with time, due in large part to orientation drift.

In this talk, I will describe recent work in our group on a method to incorporate global orientation information from the sun into a visual odometry (VO) pipeline, using data from the existing image stream only. This is challenging in part because the sun is typically not visible in the input images. Our work leverages recent advances in Bayesian Convolutional Neural Networks (BCNNs) to train and implement a sun detection model (dubbed Sun-BCNN) that infers a three-dimensional sun direction vector from a single RGB image. Crucially, the technique also computes a principled uncertainty associated with each prediction, using a Monte Carlo dropout scheme. We incorporate this uncertainty into a sliding window stereo VO pipeline where accurate uncertainty estimates are critical for optimal data fusion.
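
The sketch below is an illustrative stand-in (not Sun-BCNN) showing the Monte Carlo dropout recipe described above: keep dropout active at test time, sample several stochastic forward passes, and summarize them as a mean direction plus a covariance that downstream fusion can use.

```python
# Illustrative Monte Carlo dropout sketch for sun-direction regression.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SunDirectionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.drop = nn.Dropout(p=0.5)
        self.fc = nn.Linear(32, 3)

    def forward(self, img):
        f = self.conv(img).flatten(1)
        return F.normalize(self.fc(self.drop(f)), dim=1)   # unit 3D sun direction

net = SunDirectionNet()
net.train()                                   # keep dropout stochastic at test time
img = torch.randn(1, 3, 224, 224)             # stand-in for a single RGB image
with torch.no_grad():
    samples = torch.stack([net(img)[0] for _ in range(50)])
mean_dir = F.normalize(samples.mean(dim=0), dim=0)   # fused direction estimate
cov = torch.cov(samples.T)                           # 3x3 uncertainty for the VO filter
```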

I will present the results of our evaluation on the KITTI odometry benchmark, where significant improvements are obtained over ‘vanilla’ VO. I will also describe additional experimental evaluation on 10 km of navigation data from Devon Island in the Canadian High Arctic, at a Mars analogue site. Finally, I will give an overview of our analysis of the sensitivity of the model to cloud cover, and discuss the possibility of model transfer between urban and planetary analogue environments.

 

Bio: Dr. Kelly is an Assistant Professor at the University of Toronto Institute for Aerospace Studies, where he directs the Space & Terrestrial Autonomous Robotic Systems (STARS) Laboratory. Prior to joining U of T, he was a postdoctoral researcher in the Robust Robotics Group at MIT. Dr. Kelly received his PhD degree in 2011 from the University of Southern California, under the supervision of Prof. Gaurav Sukhatme. He was supported at USC in part by an Annenberg Fellowship. Prior to graduate school, he was a software engineer at the Canadian Space Agency in Montreal, Canada. His research interests lie primarily in the areas of sensor fusion, estimation, and machine learning for navigation and mapping, applied to both robots and human-centred assistive technologies.

 

March 28, Grace Gao, UIUC Robust Navigation: From UAVs to Robot Swarms

Robust navigation is critical and challenging for the ever-growing applications of robotics. Take Unmanned Aerial Vehicles (UAVs) as an example: the boom in applications of low-cost multi-copters requires UAVs to navigate in urban environments at low altitude. Traditionally, a UAV is equipped with a GPS receiver for outdoor flight. It may suffer from GPS signal blockage and multipath issues, making GPS-based positioning erroneous or unavailable. Moreover, GPS signals are vulnerable against attacks, such as jamming or spoofing. These attacks either disable GPS positioning, or more deliberately mislead the UAV with wrong positioning.

In this talk, we present our recent work on robust UAV navigation. We deeply fuse GPS information with Lidar, camera vision and inertial measurements on the raw signal level. In addition, we turn the unwanted multipath signals into an additional useful signal source. Instead of one GPS receiver, we use multiple receivers either on the same UAV platform or across a wide area to further improve navigation accuracy, reliability and resilience to attacks.

The second part of the talk will address our work on navigating a swarm of 100 robots, designed and built in our lab. We call them “Shinerbots,” because they are inspired by the schooling behaviors of Golden Shiner Fish. We will demonstrate the successful navigation and environment exploration of our Shinerbot swarm.

 

Bio: Grace Xingxin Gao is an assistant professor in the Aerospace Engineering Department at the University of Illinois at Urbana-Champaign. She obtained her Ph.D. degree in Electrical Engineering from the GPS Laboratory at Stanford University. Prof. Gao has won a number of awards, including the RTCA William E. Jackson Award and the Institute of Navigation Early Achievement Award. She was named one of 50 GNSS Leaders to Watch by GPS World Magazine. She has won Best Paper/Presentation of the Session Awards 11 times at ION GNSS+ conferences. She received the Dean's Award for Excellence in Research from the College of Engineering at the University of Illinois at Urbana-Champaign. For her teaching, Prof. Gao has been included on the List of Teachers Ranked as Excellent by Their Students at the University of Illinois multiple times. She won the College of Engineering Everitt Award for Teaching Excellence at the University of Illinois at Urbana-Champaign in 2015, and was chosen as the American Institute of Aeronautics and Astronautics (AIAA) Illinois Chapter’s Teacher of the Year in 2016.

 

March 14, Julie Shah, MIT Enhancing Human Capability with Intelligent Machine Teammates (** POSTPONED DUE TO SNOW **)

Every team has top performers -- people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they cannot often tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can’t explain how. In this talk I share recent work investigating effective ways to blend the unique decision-making strengths of humans and machines. I discuss the development of computational models that enable machines to efficiently infer the mental state of human teammates and thereby collaborate with people in richer, more flexible ways. Our studies demonstrate statistically significant improvements in people’s performance on military, healthcare and manufacturing tasks, when aided by intelligent machine teammates.
 

Bio: Julie Shah is an Associate Professor of Aeronautics and Astronautics at MIT and director of the Interactive Robotics Group, which aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. As a current fellow of Harvard University's Radcliffe Institute for Advanced Study, she is expanding the use of human cognitive models for artificial intelligence. She has translated her work to manufacturing assembly lines, healthcare applications, transportation and defense. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. Prof. Shah has been recognized by the National Science Foundation with a Faculty Early Career Development (CAREER) award and by MIT Technology Review on its 35 Innovators Under 35 list. Her work on industrial human-robot collaboration was also in Technology Review’s 2013 list of 10 Breakthrough Technologies. She has received international recognition in the form of best paper awards and nominations from the ACM/IEEE International Conference on Human-Robot Interaction, the American Institute of Aeronautics and Astronautics, the Human Factors and Ergonomics Society, the International Conference on Automated Planning and Scheduling, and the International Symposium on Robotics. She earned degrees in aeronautics and astronautics and in autonomous systems from MIT.

 

March 7, Lex Fridman, MIT Human-in-the-Loop: Deep Learning for Shared Autonomy in Naturalistic Driving (video)

Localization, mapping, perception, control, and trajectory planning are components of autonomous vehicle design that each have seen considerable progress in the previous three decades and especially since the first DARPA Robotics Challenge. These are areas of robotics research focused on perceiving and interacting with the external world through outward facing sensors and actuators. However, semi-autonomous driving is in many ways a human-centric activity where the at-times distracted, irrational, drowsy human may need to be included in-the-loop of safe and intelligent autonomous vehicle operation through driver state sensing, communication, and shared control. In this talk, I will present deep neural network approaches for various subtasks of supervised vehicle autonomy with a special focus on driver state sensing and how those approaches helped us in (1) the collection, analysis, and understanding of human behavior over 100,000 miles and 1 billion video frames of on-road semi-autonomous driving in Tesla vehicles and (2) the design of real-time driver assistance systems that bring the human back into the loop of safe shared autonomy.
 

Bio: Lex Fridman is a postdoc at MIT, working on computer vision and deep learning approaches in the context of self-driving cars with a human-in-the-loop. His work focuses on large-scale, real-world data, with the goal of building intelligent systems that have real world impact. Lex received his BS, MS, and PhD from Drexel University where he worked on applications of machine learning, computer vision, and decision fusion techniques in a number of fields including robotics, active authentication, activity recognition, and optimal resource allocation on multi-commodity networks. Before joining MIT, Lex was at Google working on deep learning and decision fusion methods for large-scale behavior-based authentication. Lex is a recipient of a CHI-17 best paper award.

 

Feb 28, Wolfram Burgard, University of Freiburg Deep Learning for Robot Navigation and Perception (video)

Autonomous robots are faced with a series of learning problems to optimize their behavior. In this presentation I will describe recent approaches developed in my group, based on deep learning architectures, for different perception problems including object recognition and segmentation using RGB(-D) images. In addition, I will present terrain classification approaches that utilize sound and vision. For all approaches I will describe extensive experiments quantifying the ways in which they extend the state of the art.
 

Bio: Wolfram Burgard is a professor of computer science at the University of Freiburg and head of the research lab for Autonomous Intelligent Systems. His areas of interest lie in artificial intelligence and mobile robots. His research mainly focuses on the development of robust and adaptive techniques for state estimation and control. Over the past years, Wolfram Burgard and his group have developed a series of innovative probabilistic techniques for robot navigation and control, covering different aspects such as localization, map-building, SLAM, path-planning, and exploration. Wolfram Burgard has coauthored two books and more than 300 scientific papers. In 2009, he received the Gottfried Wilhelm Leibniz Prize, the most prestigious German research award. In 2010, he received an Advanced Grant from the European Research Council. Since 2012, he has been the coordinator of the Cluster of Excellence BrainLinks-BrainTools funded by the German Research Foundation. Wolfram Burgard is a Fellow of ECCAI, AAAI, and the IEEE.

 

Feb 14, Sumeet Singh, Stanford University, Robust and Risk-Sensitive Planning via Contraction Theory and Convex Optimization

A key prerequisite for autonomous robots working alongside humans is the ability to cope with uncertainty at two levels: (1) low-level modeling errors or external disturbances, and (2) high-level uncertainty about the humans’ goals and actions. For the first part of this talk, I will present our framework for the online generation of robust motion plans for constrained nonlinear robotic systems such as UAVs subject to bounded disturbances while operating in cluttered environments. Specifically, by leveraging tools from contraction theory and convex optimization, we are able to provide a guaranteed margin of safety (i.e., a precise buffer zone) for any desired trajectory, thereby guaranteeing the safe, collision-free execution of the resulting motion plan. Having addressed this robust low-level control strategy, in the second part of the talk I will discuss our recent work on risk- and ambiguity- sensitive Inverse Reinforcement Learning for better capturing human decision making. In particular, by departing from the ubiquitous expected utility framework and proposing a flexible model using coherent risk metrics, we are able to capture an entire spectrum of risk preferences from risk-neutral to worst-case. This allows us to better predict the human decision making process, both qualitatively and quantitatively. We envision that leveraging such a methodology is an important step toward more reliable high- and low- level control processes for safety-critical robotics systems operating in shared environments.
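
One widely used coherent risk metric of the kind referenced above (stated here as an illustration; the talk's exact choice of metric is not specified in the abstract) is the conditional value-at-risk,

$$ \mathrm{CVaR}_{\alpha}(Z) = \inf_{\nu \in \mathbb{R}} \Big\{ \nu + \tfrac{1}{\alpha}\, \mathbb{E}\big[(Z - \nu)_{+}\big] \Big\}, $$

which interpolates between the expected cost (α = 1) and the worst-case cost (α → 0), giving the claimed spectrum from risk-neutral to worst-case preferences.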
 

Bio: Sumeet Singh is a Ph.D. candidate in Aeronautics and Astronautics at Stanford University. He received a B.Eng. in Mechanical Engineering and a Diploma of Music (Performance) from University of Melbourne in 2012, and a M.Sc. in Aeronautics and Astronautics from Stanford University in 2015. Prior to joining Stanford, Sumeet worked in the Berkeley Micromechanical Analysis and Design lab at University of California Berkeley in 2011 and the Aeromechanics Branch at NASA Ames in 2013. Sumeet’s current research interests are twofold: 1) Robust motion planning for constrained nonlinear systems, and 2) Risk-sensitive Model Predictive Control (MPC). Within the first topic, Sumeet is investigating the design of nonlinear control algorithms for online generation of robust motion plans with guaranteed margins of safety for constrained robotic systems in cluttered environments. The second topic focuses on the development and analysis of stochastic MPC algorithms for robust and risk-sensitive decision making problems.

 

Fall 2016 Robotics Seminars

Dec 13, Amanda Prorok, University of Pennsylvania, Diversity, Privacy, and Resilience in Robot Networks (video, CSAIL only)

We are witnessing a profusion of networked robotic platforms with distinct features and unique capabilities. To realize the full potential of such networked robotic systems, we need to leverage heterogeneity and complementarity through collaborative mechanisms. However, as connections are established, information is shared, and dependencies are created, these systems give rise to new vulnerabilities and threats.

 

To motivate the central questions of diversity, privacy, and resilience, I begin by presenting my experimental work on collaborative positioning with networked teams of robots. As the need for system-wide protection mechanisms becomes evident, I introduce a privacy model that quantifies how much is revealed to external observers about critical robotic entities and their specific interactions. My focus then shifts to the question of how to provide resilience through precautionary collaboration mechanisms, allowing robot teams to function in the presence of defective and/or malicious robots. Finally, I address the question of how to formalize diversity in the context of heterogeneous robot teams, with insights that pertain to performance.

 

Bio: Amanda Prorok is currently a Postdoctoral Researcher in the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory at the University of Pennsylvania, where she works with Prof. Vijay Kumar on heterogeneous networked robotic systems. She completed her PhD at EPFL, Switzerland, where she addressed the topic of localization with ultra-wideband sensing for robotic networks. Her dissertation was awarded the Asea Brown Boveri (ABB) award for the best thesis at EPFL in the fields of Computer Sciences, Automatics and Telecommunications. She was selected as an MIT Rising Star in 2015, and won a Best Paper Award at BICT 2015.

Dec 6, Erion Plaku, Catholic University of America, From High-Level Tasks to Robot Motions: Combined Task and Motion Planning as Hybrid Search over Discrete and Continuous Spaces (video)

As robots are deployed into less and less structured environments, it becomes increasingly important to enhance their ability to complete high-level tasks with little or no human intervention. Whether the task is to search, inspect, or navigate to target destinations, completing it generally involves decomposing it into discrete, logical actions, where each discrete action often requires complex collision-free and dynamically-feasible motions in order to be implemented. This talk will discuss our research efforts on a computationally-efficient framework and a formal treatment of the combined task- and motion-planning problem as search over a hybrid space consisting of discrete and continuous components. The framework makes it possible to specify high-level tasks via Finite State Machines, Linear Temporal Logic, and Planning-Domain Definition Languages and automatically computes collision-free and dynamically-feasible motions that enable the robot to accomplish the assigned task. Applications in autonomous underwater vehicles will be highlighted.

 

Bio: Erion Plaku is an Associate Professor in the Department of Electrical Engineering and Computer Science at Catholic University of America. He received his Ph.D. degree in Computer Science from Rice University in 2008. He was a Postdoctoral Fellow in the Laboratory for Computational Sensing and Robotics at Johns Hopkins University during 2008—2010. Plaku's research is in Robotics and Artificial Intelligence, focusing on enhancing automation in human-machine cooperative tasks in complex domains, such as mobile robotics, autonomous underwater vehicles, and hybrid systems. His research is supported by NSF Intelligent Information Systems, NSF Software Infrastructure, and the U.S. Naval Research Laboratory. More information, including publications, research projects, open-source software he has developed for robot motion planning, and educational materials can be found at http://www.robotmotionplanning.org

Nov 29 - Alex Herzog, Max-Planck Institute for Intelligent Systems, Momentum-centric whole-body control and kino-dynamic motion generation for floating-base robots (video)

Humanoid robots with torque control capabilities are becoming increasingly available in our research community. These robots allow for explicit control of contact interactions, which has the potential to allow robots to locomote through difficult terrains. In order to accomplish tasks under balance and contact constraints, whole-body planning and control strategies are required to generate motion and force commands for all limbs efficiently.

 

Model-based control in combination with numerical optimization is becoming a reliable tool for efficient control of complex tasks on floating-base robots. In the first part of my talk I will discuss hierarchical inverse dynamics, a control framework that allows for the composition of complex behaviors from a hierarchy of simpler tasks and constraints. We use cascades of quadratic programs to resolve task hierarchies into joint torques in a 1 kHz feedback loop on our torque-controlled humanoid. In our experiments we control the momentum of the robot within a hierarchy of tasks and constraints, leading to robust push recovery on our robot.
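A minimal sketch of strict task prioritization, assuming made-up task matrices and using unconstrained least squares with null-space projection as a simplified stand-in for the inequality-constrained QP cascade described above:

```python
# Minimal sketch of strict task prioritization (a stand-in for a cascaded-QP
# hierarchy): solve the top-priority task in a least-squares sense, then resolve
# the lower-priority task only inside the top task's null space.
import numpy as np

def prioritized_solve(A1, b1, A2, b2):
    # Level 1: any minimizer of ||A1 x - b1||.
    x1 = np.linalg.lstsq(A1, b1, rcond=None)[0]
    # Null space of A1: directions that leave the level-1 task value unchanged.
    _, s, Vt = np.linalg.svd(A1)
    rank = int(np.sum(s > 1e-10))
    N1 = Vt[rank:].T                      # columns span null(A1)
    if N1.shape[1] == 0:
        return x1                         # no freedom left for level 2
    # Level 2: choose z so that x = x1 + N1 z best achieves the second task.
    z = np.linalg.lstsq(A2 @ N1, b2 - A2 @ x1, rcond=None)[0]
    return x1 + N1 @ z

# Made-up two-task example in a 4-dimensional decision space (e.g. joint torques).
rng = np.random.default_rng(0)
A1, b1 = rng.standard_normal((2, 4)), rng.standard_normal(2)   # e.g. a momentum task
A2, b2 = rng.standard_normal((3, 4)), rng.standard_normal(3)   # e.g. a posture task
x = prioritized_solve(A1, b1, A2, b2)
print("level-1 residual:", np.linalg.norm(A1 @ x - b1))        # ~0: top priority satisfied
```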

 

In the second part of my talk I will discuss our kino-dynamic motion generation approach for the full body. We solve an optimization program to obtain whole-body joint and contact force trajectories over a horizon that consider the full robot dynamics and contact constraints. We decompose this non-convex optimization problem into two better-structured mathematical programs that are solved iteratively with better-informed solvers. Our analysis reveals structure in the centroidal momentum dynamics of floating-base robots that leads to new efficient solvers on the full humanoid model. We can improve the speed of naive off-the-shelf solvers by an order of magnitude and phrase direct shooting methods on centroidal dynamics with linear time complexity.

Nov 22 - Marc Toussaint, Uni Stuttgart, Challenges in Robotic AI (video)

There is, once again, substantial optimism about AI. While I welcome and share the general enthusiasm, I believe that the great advances in machine learning and data-driven methods alone cannot solve fundamental problems in real-world robotic AI. A core challenge remains to capture and formalize essential structure in real-world decision making and manipulation problems, and thereby provide the foundation for sample-efficient learning. In this talk I will discuss three concrete pieces of work in this context: autonomously exploring the environment to learn what is manipulable and how; learning manipulation skills from few demonstrations; and learning sequential manipulation and cooperative assembly from demonstration. All three applications raise fundamental challenges, especially with respect to the problem formulation, and thereby guide us toward what we think are interesting research questions for progressing the field towards robotic AI.

 

Bio: Marc Toussaint has been full professor for Machine Learning and Robotics at the University of Stuttgart since 2012. Before that he was assistant professor at the Free University Berlin, led an Emmy Noether research group at TU Berlin, and spent two years as a post-doc at the University of Edinburgh. His research focuses on the combination of decision theory and machine learning, motivated by research questions in robotics. Recurring themes in his research are appropriate representations (symbols, temporal abstractions, relational representations) to enable efficient learning and manipulation in real-world environments, and how to achieve jointly geometric, logic, and probabilistic learning and reasoning. He is currently coordinator of the German research priority programme on Autonomous Learning, a member of the editorial board of the Journal of AI Research (JAIR), a reviewer for the German Research Foundation, and a programme committee member of several top conferences in the field (UAI, R:SS, ICRA, IROS, AIStats, ICML). His work was awarded best paper at R:SS'12 and ICMLA'07 and was runner-up at UAI'08.

Nov 15 - Inna Sharf, McGill University, Towards Greater Autonomy and Safety of UAVs: Recovering from Collisions

Making small unmanned aerial vehicles more autonomous is a continuing endeavour in the UAV research community; it is also the focus of Sharf’s research. In this context, her group has been working on problems of state estimation, localization and mapping, system integration and controller design for multicopters and indoor blimps. Following a brief overview of past research projects, this presentation will focus on current work on the development of collision recovery controllers for quadcopters. The collision dynamics model and post-collision response characterization of the quadrotor are presented, followed by their experimental validation. A collision recovery pipeline is proposed to allow propeller-protected quadrotors to recover from a collision. This pipeline includes collision detection, impact characterization and aggressive attitude control. The strategy is validated via a comprehensive Monte Carlo simulation of collisions against a wall, showing the feasibility of recovery from challenging collision scenarios. The pipeline is implemented on a custom quadrotor platform, demonstrating the feasibility of real-time performance and successful recovery from a range of pre-collision conditions. The ultimate goal is to implement a general collision recovery solution to further advance the autonomy and safety of quadrotor vehicles.
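A schematic sketch of such a detection, characterization, and recovery pipeline (not Sharf's implementation; the thresholds and signal names below are invented):

```python
# Schematic collision-recovery pipeline: detect an impact from an acceleration
# spike, then run an aggressive attitude controller until the vehicle is upright.
# Thresholds and signal names are invented for illustration.
import math

ACCEL_COLLISION_THRESHOLD = 3.0 * 9.81   # assumed spike level, in m/s^2
UPRIGHT_TOLERANCE = math.radians(10.0)   # assumed "recovered" attitude band

class CollisionRecovery:
    def __init__(self):
        self.state = "NOMINAL"

    def step(self, accel_xy, roll, pitch):
        """One control tick. accel_xy: lateral acceleration magnitude (m/s^2);
        roll/pitch: current attitude (rad). Returns the active controller name."""
        if self.state == "NOMINAL":
            if accel_xy > ACCEL_COLLISION_THRESHOLD:
                # A fuller pipeline would characterize the impact direction here.
                self.state = "RECOVERY"
        elif self.state == "RECOVERY":
            # Aggressive attitude control runs until the vehicle is upright again.
            if abs(roll) < UPRIGHT_TOLERANCE and abs(pitch) < UPRIGHT_TOLERANCE:
                self.state = "NOMINAL"
        return "attitude_recovery" if self.state == "RECOVERY" else "position_hold"

recovery = CollisionRecovery()
print(recovery.step(accel_xy=45.0, roll=0.6, pitch=0.1))   # collision -> 'attitude_recovery'
print(recovery.step(accel_xy=1.0, roll=0.05, pitch=0.02))  # upright again -> 'position_hold'
```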

 

Bio: Dr. Inna Sharf is a professor in the Department of Mechanical Engineering at McGill University, Montreal, Canada. She received her B.A.Sc. in Engineering Science from the University of Toronto and her Ph.D. from the Institute for Aerospace Studies, University of Toronto (1991). Prior to relocating to McGill in 2001, she was on faculty with the Department of Mechanical Engineering at the University of Victoria. Sharf’s research activities are in the areas of dynamics and control with applications to space robotic systems, unmanned aerial vehicles and legged robots. Sharf has published over 150 conference and journal papers on her academic research. She is an associate fellow of AIAA and a member of IEEE.

Nov 8 - Sertac Karaman, MIT, Control of High-performance Autonomous Vehicles and Their Systems (video)

In this talk, we present control and planning algorithms for autonomous vehicles that deliver high performance. In the first part of the talk, we focus on control problems for vehicle-level autonomy, i.e., for controlling a single vehicle with complex dynamics to execute complex tasks. Specifically, we introduce novel algorithms that construct arbitrarily good solutions for stochastic optimal control problems. We show that their running time scales linearly with dimension and polynomially with the rank of the optimal cost-to-go function, breaking the curse of dimensionality for low-rank problems. Our results are enabled by a novel continuous analogue of the well-known tensor-train decomposition. We demonstrate the new algorithms on a simulated perching problem, where the computational savings reach ten orders of magnitude when compared to naive approaches, such as value iteration on a grid. In the second part of the talk, we focus on system-level autonomy, i.e., problems that concern systems that consist of several autonomous vehicles. Specifically, we present results on optimal coordination of vehicles passing through an intersection. We reduce the problem to a polling system, under mild technical conditions. We show that the resulting system provides orders of magnitude improvement in delay, when compared to conventional traffic light systems.
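As a toy illustration of the low-rank structure such methods exploit (not the tensor-train algorithm from the talk), the cost-to-go of a simple quadratic problem sampled on a grid has tiny numerical rank:

```python
# Toy illustration: a quadratic cost-to-go V(x) = x'Px sampled on a 2-D grid has
# very low numerical rank -- the kind of structure that low-rank / tensor-train
# solvers can exploit to beat grid-based value iteration. P is assumed here.
import numpy as np

n = 200
x1 = np.linspace(-1.0, 1.0, n)
x2 = np.linspace(-1.0, 1.0, n)
X1, X2 = np.meshgrid(x1, x2, indexing="ij")

P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
V = P[0, 0] * X1**2 + 2 * P[0, 1] * X1 * X2 + P[1, 1] * X2**2

singular_values = np.linalg.svd(V, compute_uv=False)
numerical_rank = int(np.sum(singular_values > 1e-9 * singular_values[0]))
print("grid size:", V.shape, "numerical rank:", numerical_rank)   # rank 3, not 200
```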

 

Bio: Sertac Karaman is an Associate Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology. He obtained B.S. degrees in mechanical engineering and in computer engineering from the Istanbul Technical University, Turkey, in 2007; an S.M. degree in mechanical engineering from MIT in 2009; and a Ph.D. degree in electrical engineering and computer science, also from MIT, in 2012. His research interests lie in the broad areas of robotics and control theory. In particular, he studies the applications of probability theory, stochastic processes, stochastic geometry, formal methods, and optimization for the design and analysis of high-performance cyber-physical systems. The application areas of his research include driverless cars, unmanned aerial vehicles, distributed aerial surveillance systems, air traffic control, certification and verification of control systems software, and many others. He is the recipient of an Army Research Office Young Investigator Award in 2015, a National Science Foundation Faculty Career Development (CAREER) Award in 2014, an AIAA Wright Brothers Graduate Award in 2012, and an NVIDIA Fellowship in 2011.

Nov 1 - George Konidaris, Brown University, Robot Motion Planning on a Chip (video)

Despite decades of research, real-time, general-purpose robot motion planning remains beyond our reach. A solution to this problem is critical to dealing with natural environments that are not carefully controlled or designed, and our inability to solve it is a major obstacle preventing the widespread deployment of robots in the workplace and the home. I will describe recent research that aims to solve the real-time motion planning problem through the use of specialized hardware: a custom processor designed solely and specifically to perform motion planning, capable of finding plans for interesting robots in less than one millisecond, while consuming less than 15 watts. I will describe the design of this processor, the research questions and trade-offs that this design induces, and the new potential capabilities created by the ability to find thousands of plans per second. (Collaborative research with Sean Murray, Will Floyd-Jones, Ying Qi, and Dan Sorin, all of Duke University.)
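A software analogue of the precomputation idea (a sketch only, not the actual processor design): bake the workspace voxels swept by each roadmap edge offline, then prune edges that intersect the currently occupied voxels at query time; the chip performs these per-edge checks in parallel dedicated circuits.

```python
# Sketch of precomputed swept-voxel collision checking for a fixed roadmap.
# The "roadmap" here is a made-up 2-D point robot, purely for illustration.
VOXEL = 0.5  # assumed voxel edge length

def swept_voxels(p, q, steps=100):
    """Voxels touched by a straight-line motion from p to q (coarse sampling)."""
    cells = set()
    for i in range(steps + 1):
        t = i / steps
        x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
        cells.add((int(x // VOXEL), int(y // VOXEL)))
    return frozenset(cells)

# Offline: bake the edge -> voxel-set table for the fixed roadmap.
roadmap_edges = {"a-b": ((0.0, 0.0), (4.0, 0.0)),
                 "b-c": ((4.0, 0.0), (4.0, 4.0)),
                 "a-c": ((0.0, 0.0), (4.0, 4.0))}
edge_voxels = {e: swept_voxels(p, q) for e, (p, q) in roadmap_edges.items()}

# Online: the sensor reports occupied voxels; keep only edges that avoid them.
occupied = {(8, 4)}                  # an obstacle near workspace point (4, 2)
free_edges = [e for e, cells in edge_voxels.items() if not cells & occupied]
print(free_edges)                    # 'b-c' is pruned, 'a-b' and 'a-c' survive
```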

 

Bio: George Konidaris is an Assistant Professor of Computer Science at Brown. Before joining Brown, he was on the faculty at Duke, and a postdoctoral researcher at MIT. George holds a PhD in Computer Science from the University of Massachusetts Amherst, an MSc in Artificial Intelligence from the University of Edinburgh, and a BScHons in Computer Science from the University of the Witwatersrand. He is the recent recipient of Young Faculty Awards from DARPA and the AFOSR. 

Oct 25 - Alberto Rodriguez, MIT, Feedback Control of the Pusher-Slider: A Story of Hybrid and Underactuated Contact Dynamics

In this talk I'll discuss ideas and ongoing work on real-time control strategies for dynamical systems that involve frictional contact interactions. Hybridness and underactuation are key characteristics of these systems that complicate the design of feedback controllers. I'll discuss these challenges and possible control solutions in the context of the pusher-slider system, where the purpose is to control the motion of an object sliding on a flat surface using a point pusher. The pusher-slider is a classical simple dynamical system with many of the challenges present in robotic manipulation tasks: noisy planar sliding friction, instability, hybridness, underactuation, and more. I like to call it the simplest but still interesting problem in manipulation, a sort of "inverted pendulum" for robotic manipulation. I'll start the talk by briefly describing my group's recent participation in the Amazon Picking Challenge, and motivate the need for closed-loop control in grasping and manipulation.

 

Bio: Alberto Rodriguez is the Walter Henry Gale (1929) Career Development Professor in the Mechanical Engineering Department at MIT. Alberto graduated in Mathematics ('05) and Telecommunication Engineering ('06) from the Universitat Politecnica de Catalunya (UPC) in Barcelona, and earned his PhD in Robotics (’13) from the Robotics Institute at Carnegie Mellon University. After spending a year in the Locomotion group at MIT, he joined the faculty at MIT in 2014, where he started the Manipulation and Mechanisms Lab (MCube). Alberto is the recipient of Best Student Paper Awards at RSS 2011 and ICRA 2013 and was a Best Paper finalist at IROS 2016. His main research interests are in robotic manipulation, mechanical design, and automation. His long-term research goal is to provide robots with enough sensing, reasoning and acting capabilities to reliably manipulate their environment.

 

Oct 18 - Tom Howard, University of Rochester, Learning Models for Robot Decision Making

The efficiency and optimality of robot decision making is often dictated by the fidelity and complexity of models for how a robot can interact with its environment.  It is common for researchers to engineer these models a priori to achieve particular levels of performance for specific tasks in a restricted set of environments and initial conditions.  As we progress towards more intelligent systems that perform a wider range of objectives in a greater variety of domains, the models for how robots make decisions must adapt to achieve, if not exceed,  engineered levels of performance.  In this talk I will discuss progress towards model adaptation for robot intelligence, including recent efforts in natural language understanding for human-robot interaction.

 

Bio: Thomas Howard is an assistant professor in the Department of Electrical and Computer Engineering and the Department of Computer Science at the University of Rochester. He is also a member of the Goergen Institute for Data Science and holds a secondary appointment in the Department of Biomedical Engineering. Previously he held appointments as a research scientist and a postdoctoral associate at MIT's Computer Science and Artificial Intelligence Laboratory in the Robust Robotics Group, a research technologist at the Jet Propulsion Laboratory in the Robotic Software Systems Group, and a lecturer in mechanical engineering at Caltech.

 

Howard earned a PhD in robotics from the Robotics Institute at Carnegie Mellon University in 2009 in addition to BS degrees in electrical and computer engineering and mechanical engineering from the University of Rochester in 2004. His research interests span artificial intelligence, robotics, and human-robot interaction with particular research focus on improving the optimality, efficiency, and fidelity of models for decision making in complex and unstructured environments with applications to robot motion planning and natural language understanding.  He has applied his research on numerous robots including planetary rovers, autonomous automobiles, mobile manipulators, robotic torsos, and unmanned aerial vehicles.  Howard was a member of the flight software team for the Mars Science Laboratory, the motion planning lead for the JPL/Caltech DARPA Autonomous Robotic Manipulation team, and a member of Tartan Racing, winner of the DARPA Urban Challenge.

 

Oct 4 - Brenna Argall, Northwestern University, Human Autonomy through Robotics Autonomy

It is a paradox that often the more severe a person's motor impairment, the more assistance they require and yet the less able they are to operate the very assistive machines created to provide this assistance. A primary aim of my lab is to address this confound by incorporating robotics autonomy and intelligence into assistive machines---offloading some of the control burden from the user. Robots already synthetically sense, act in and reason about the world, and these technologies can be leveraged to help bridge the gap left by sensory, motor or cognitive impairments in the users of assistive machines. However, here the human-robot team is a very particular one: the robot is physically supporting or attached to the human, replacing or enhancing lost or diminished function. In this case, getting the allocation of control between the human and robot right is absolutely essential, and will be critical for the adoption of physically assistive robots within larger society. This talk will overview some of the ongoing projects and studies in my lab, whose research lies at the intersection of artificial intelligence, rehabilitation robotics and machine learning. We are working with a range of hardware platforms, including smart wheelchairs and assistive robotic arms. A distinguishing theme present within many of our projects is that the machine automation is customizable---to a user's unique and changing physical abilities, personal preferences or even financial means.

 

Bio: Brenna Argall is the June and Donald Brewer Junior Professor of Electrical Engineering & Computer Science at Northwestern University, and also an assistant professor in the Department of Mechanical Engineering and the Department of Physical Medicine & Rehabilitation. Her research lies at the intersection of robotics, machine learning and human rehabilitation. She is director of the assistive & rehabilitation robotics laboratory (argallab) at the Rehabilitation Institute of Chicago (RIC), the premier rehabilitation hospital in the United States, and her lab's mission is to advance human ability through robotics autonomy. Argall is a 2016 recipient of the NSF CAREER award. She received her Ph.D. in Robotics (2009) from the Robotics Institute at Carnegie Mellon University, as well as her M.S. in Robotics (2006) and B.S. in Mathematics (2002). Prior to joining Northwestern, she was a postdoctoral fellow (2009-2011) at the École Polytechnique Fédérale de Lausanne (EPFL), and prior to graduate school she held a Computational Biology position at the National Institutes of Health (NIH).

 

Sep 20 - Dorsa Sadigh, UC Berkeley, Towards a Theory of Human-Cyber-Physical Systems

The goal of my research is to enable safe and reliable integration of Human-Cyber-Physical Systems (h-CPS) in our society by providing a unified framework for modeling and design of these systems. Today’s society is rapidly advancing towards CPS that interact and collaborate with humans, e.g., semiautonomous vehicles interacting with drivers and pedestrians, medical robots used in collaboration with doctors, or service robots interacting with their users in smart homes. The safety-critical nature of these systems requires us to provide provably correct guarantees about their performance. I aim to develop a formalism for design of algorithms and mathematical models that enable correct-by-construction control and verification of h-CPS. 

 

In this talk, I will focus on two natural instances of this agenda. I will first talk about interaction-aware control, where we use algorithmic HRI to be mindful of the effects of autonomous systems on humans, and further leverage these effects for better safety and efficiency. I will then talk about providing correctness guarantees while taking into account the uncertainty arising from the environment. Through this effort, I will introduce Probabilistic Signal Temporal Logic (PrSTL), an expressive specification language that allows representing Bayesian graphical models as part of its predicates. Then, I will provide a solution for synthesizing controllers that satisfy PrSTL specifications, and further discuss a diagnosis and repair algorithm for systematic transfer of control to the human in unrealizable settings. While the algorithms and techniques introduced can be applied to many h-CPS applications, in this talk, I will focus on the implications of my work for semiautonomous driving.
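In the spirit of PrSTL's chance-constrained predicates, here is a minimal sketch of evaluating one atomic predicate under a Gaussian belief (the numbers are made up, and this is not the synthesis machinery from the talk):

```python
# Minimal sketch: under a Gaussian belief about a signal, check whether
# "signal < threshold" holds with probability at least 1 - epsilon, which is the
# flavor of probabilistic predicate that PrSTL builds formulas from.
from scipy.stats import norm

def prob_predicate_holds(mean, std, threshold):
    """P(signal < threshold) for signal ~ N(mean, std^2)."""
    return norm.cdf((threshold - mean) / std)

def satisfies(mean, std, threshold, epsilon):
    return prob_predicate_holds(mean, std, threshold) >= 1.0 - epsilon

# e.g. belief about a tracking error that must stay below 0.5 m with prob >= 0.95
print(satisfies(mean=0.2, std=0.1, threshold=0.5, epsilon=0.05))  # True
```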

 

Bio: Dorsa Sadigh is a Ph.D. candidate in the Electrical Engineering and Computer Sciences department at UC Berkeley. Her research interests lie at the intersection of control theory, formal methods and human-robot interaction. Specifically, she works on developing a unified framework for safe and reliable design of human-cyber-physical systems. Dorsa received her B.S. from Berkeley EECS in 2012. She was awarded the NDSEG and NSF graduate research fellowships in 2013. She was the recipient of the 2016 Leon O. Chua department award and the 2011 Arthur M. Hopkin department award for achievement in the field of nonlinear science, and she received the Google Anita Borg Scholarship in 2016.

 

Spring 2016 Campus-wide Robotics Seminar (sponsored by Aurora Flight Sciences, The MathWorks, and the Russell Sage Foundation) (11am-noon in 32-G449)

 

May 10 - Ed Olson, Michigan, Reliable robots: failing without failing (video)

Everything about a robot is unreliable: sensors lie, state estimators compute poor means and variances, and actuators slip and slide. It is too much to ask these systems to be 100% reliable, but then how do we build incredibly reliable systems that can operate for 100 million miles between serious mishap, or those that can inhabit a house alongside people without occasionally running over the cats?

 

In this talk, I describe two different approaches that allow robots to tolerate failures, moving us away from the need for 100% reliability. The first is a probabilistic inference system (Max-Mixtures) that allows us to model non-Gaussian sensor failures. Max-Mixtures can be used to unify outlier rejection and state estimation or to do inference when sensor data is multi-modal, yet they are nearly as fast as ordinary least-squares methods. The second approach is a planning approach (Multi-Policy Decision Making, MPDM) that allows a robot to introspectively choose between multiple ways of performing a task, selecting the more reliable approach. For example, a robot might choose to visually servo towards a target instead of trajectory planning through a 3D model acquired from LIDAR. In short, the robot does the easy, dumb thing when it can, and resorts to the complex thing when it must.
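The core max-mixture evaluation is cheap because the negative log-likelihood of a max of weighted Gaussians reduces to a minimum over per-component costs. A small sketch with made-up inlier/outlier components:

```python
# Sketch of the core max-mixture evaluation: the likelihood is the max over
# weighted Gaussian components, so the negative log-likelihood is simply the
# minimum over per-component costs -- as cheap as picking the best component.
import math

def gaussian_nll(residual, sigma):
    return 0.5 * (residual / sigma) ** 2 + 0.5 * math.log(2 * math.pi * sigma ** 2)

def max_mixture_nll(residual, components):
    """components: list of (weight, sigma). NLL of max_i w_i * N(residual; 0, sigma_i^2)."""
    return min(gaussian_nll(residual, sigma) - math.log(w) for w, sigma in components)

components = [(0.9, 0.1),    # inlier model: tight noise (made-up numbers)
              (0.1, 10.0)]   # outlier model: very broad
print(max_mixture_nll(0.05, components))  # small residual -> inlier component wins
print(max_mixture_nll(5.00, components))  # gross outlier -> broad component caps the cost
```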

 

Apr 26 - Seth Hutchinson, UIUC, Robust Distributed Control Policies for Multi-Robot Systems (video)

In this talk, I will describe our recent progress in developing fault-tolerant distributed control policies for multi-robot systems. We consider two problems: rendezvous and coverage. For the former, the goal is to bring all robots to a common location, while for the latter the goal is to deploy robots to achieve optimal coverage of an environment. We consider the case in which each robot is an autonomous decision maker that is anonymous, memoryless, and dimensionless, i.e., robots are indistinguishable from one another, make decisions based only upon current information, and do not consider collisions. Each robot has a limited sensing range, and is able to directly estimate the state of only those robots within that sensing range, which induces a network topology for the multi-robot system. We assume that it is not possible for the fault-free robots to identify the faulty robots (e.g., due to the anonymity of the robots). For each problem, we provide an efficient computational framework and analysis of algorithms, all of which converge in the face of faulty robots under a few assumptions on the network topology and sensing abilities.
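As a loose illustration of fault-tolerant aggregation (not necessarily the algorithms from the talk), a trimmed-mean rendezvous update lets each robot ignore a bounded number of extreme, possibly faulty, neighbor positions:

```python
# Illustrative resilient rendezvous update (not necessarily the talk's algorithm):
# each robot discards the F most extreme neighbour positions in each coordinate
# before averaging, so up to F faulty neighbours cannot drag it arbitrarily far.
import numpy as np

def trimmed_mean_step(own, neighbours, F, gain=0.5):
    """Move part of the way toward a per-coordinate trimmed mean of neighbours."""
    pts = np.asarray(neighbours, dtype=float)
    target = np.empty(pts.shape[1])
    for d in range(pts.shape[1]):
        vals = np.sort(pts[:, d])
        vals = vals[F:len(vals) - F] if len(vals) > 2 * F else vals
        target[d] = vals.mean()
    return own + gain * (target - own)

own = np.array([0.0, 0.0])
neighbours = [[1.0, 1.0], [1.2, 0.8], [0.9, 1.1], [100.0, -100.0]]  # last one is faulty
print(trimmed_mean_step(own, neighbours, F=1))  # stays near the honest cluster
```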

 

Bio: Seth Hutchinson received his Ph.D. from Purdue University in 1988. In 1990 he joined the faculty at the University of Illinois in Urbana-Champaign, where he is currently a Professor in the Department of Electrical and Computer Engineering, the Coordinated Science Laboratory, and the Beckman Institute for Advanced Science and Technology. He served as Associate Department Head of ECE from 2001 to 2007. He currently serves on the editorial boards of the International Journal of Robotics Research and the Journal of Intelligent Service Robotics, and chairs the steering committee of the IEEE Robotics and Automation Letters. He was Founding Editor-in-Chief of the IEEE Robotics and Automation Society's Conference Editorial Board (2006-2008), and Editor-in-Chief of the IEEE Transactions on Robotics (2008-2013). He has published more than 200 papers on the topics of robotics and computer vision, and is coauthor of the books "Principles of Robot Motion: Theory, Algorithms, and Implementations," published by MIT Press, and "Robot Modeling and Control," published by Wiley. Hutchinson is a Fellow of the IEEE.

 

April 12 - Sidd Srinivasa, CMU, Physics-based Manipulation (video)

Humans effortlessly push, pull, and slide objects, fearlessly reconfiguring clutter, and using physics and the world as a helping hand. But most robots treat the world like a game of pick-up-sticks: avoiding clutter and attempting to rigidly grasp anything they want to move. I'll talk about some of our ongoing efforts at harnessing physics for nonprehensile manipulation, and the challenges of deploying our algorithms on real physical systems. I'll specifically focus on whole-arm manipulation, state estimation for contact manipulation, and on closing the feedback loop on nonprehensile manipulation.

 

April 7 - Jianxiong Xiao, Princeton, Three Design Principles for Robust Robot Perception (video)

Recent years have witnessed tremendous progress in the development of autonomous machines. Autonomous cars have driven over millions of miles, and robots now regularly perform tasks too dangerous or monotonous for human beings. Yet despite these advancements, robots continue to remain highly dependent on human operators and carefully designed environments. In one prominent example, the DARPA Robotics Challenge asked dozens of participating robots to complete tasks in a mock disaster response scenario. But all teams, lacking confidence in their robot's ability to reliably perceive its surroundings, opted to outsource most perception to humans. Team KAIST, the eventual winner, "found that the most (actually all) famous algorithms are not very effective in real situations."
 
In this talk, I will address the endeavor of bridging the gap between computer vision and robot perception, summarizing my experiences in three design principles. First, I will argue that it is crucial for the algorithms to fully operate end-to-end in three-dimensions, establishing the grounds for the area of "3D Deep Learning". I will demonstrate this idea on object detection, view planning, and mapping in a personal robotics scenario. Second, I will highlight the importance of direct perception in estimating affordances for a robot's actions, demonstrating the idea in an autonomous driving application. Third, I will propose the design of robot systems with failure modes of perception in mind, allowing for pitfall avoidance and an extremely high level of robustness. Finally, going beyond perception, I will briefly mention some ongoing works in Big Data Robotics, Robot Learning, and Human Robot Collaboration.

 

Bio: Jianxiong Xiao is an Assistant Professor in the Department of Computer Science at Princeton University. He received his Ph.D. from the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT) in 2013. Before that, he received a BEng. and MPhil. in Computer Science from the Hong Kong University of Science and Technology in 2009. His research focuses on bridging the gap between computer vision and robotics by building extremely robust and dependable computer vision systems for robot perception. In particular, he is interested in 3D Deep Learning, RGB-D Recognition and Mapping, Deep Learning for Robotics, Autonomous Driving, Big Data Robotics, and Robot Learning. His work has received the Best Student Paper Award at the European Conference on Computer Vision (ECCV) in 2012 and the Google Research Best Papers Award for 2012, and has appeared in the popular press. Jianxiong was awarded the Google U.S./Canada Fellowship in Computer Vision in 2012, the MIT CSW Best Research Award in 2011, and two Google Faculty Awards, in 2014 and 2015. More information can be found at http://vision.princeton.edu.

 

April 5 - Kostas Bekris, Rutgers, Algorithmic Tradeoffs in Robot Motion Planning (video)

Roboticists have addressed increasingly complicated motion planning challenges over the last decades. A popular paradigm behind this progress is sampling-based and graph-based planning, for which the conditions to achieve asymptotic optimality have recently been identified. In this domain, we have contributed a study on the practical properties of these planners after finite computation time, showing how sparse representations can guarantee the efficient return of near-optimal solutions. We have also proposed the first method that achieves asymptotic optimality for kinodynamic planning without access to a steering function, which can impact high-dimensional belief-space planning. After reviewing these contributions, this talk will discuss recent work on manipulation task planning challenges. In particular, we will present a methodology for efficiently rearranging multiple similar objects using a robotic arm. The talk will conclude with how such algorithmic progress, together with technological developments, brings the hope of reliably deploying robots in important applications, ranging from space exploration to warehouse automation and logistics.

 

Bio: Kostas Bekris is an Assistant Professor of Computer Science at Rutgers University. He completed his PhD degree in Computer Science at Rice University, Houston, TX, under the supervision of Prof. Lydia Kavraki.  He was Assistant Professor at the University of Nevada, Reno between 2008 and 2012. He works in robotics and his interests include motion planning, especially for systems with dynamics, manipulation, online replanning, motion coordination, and applications in cyber-physical systems and simulations. His research group has been supported by NSF, NASA (Early CAREER Faculty award), DHS, DoD and the NY/NJ Port Authority.

 

March 29 - Emma Brunskill, CMU, Learning to Make Good Decisions in Noisy, Stochastic, Costly Domains (video)

A critical aspect of human intelligence is the ability to learn to make good decisions. Achieving similar behavior in artificial agents is a key focus in AI, and could have enormous benefits, particularly in applications like education and healthcare where autonomous agents could help people expand their capacity and reach their potential. But tackling such domains requires approaches that can handle the noisy, stochastic, costly decisions that characterize interacting with people. In this talk I will describe some of our recent work in pursuing this agenda. One key focus has been on offline policy evaluation (how to use old data to estimate the performance of different strategies), and I will discuss a new estimator that can yield orders-of-magnitude smaller mean squared error. I will also describe how problems like transfer learning and partially observable reinforcement learning can be framed as instances of latent variable modeling for control, enabling new sample complexity results for these settings. Our advances in these topics have enabled us to obtain more engaging educational games and better news recommendations.
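For background on the offline policy evaluation setting, here is the textbook per-decision importance-sampling estimator (not the new estimator from the talk), with made-up logged data:

```python
# Background sketch for offline policy evaluation: per-decision importance
# sampling re-weights logged rewards by the ratio of evaluation-policy to
# behaviour-policy action probabilities.
def pdis_estimate(trajectories, pi_eval, pi_behaviour, gamma=1.0):
    """trajectories: list of [(state, action, reward), ...] collected under the
    behaviour policy. pi_eval/pi_behaviour: functions (state, action) -> prob."""
    total = 0.0
    for traj in trajectories:
        weight, value = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= pi_eval(s, a) / pi_behaviour(s, a)
            value += (gamma ** t) * weight * r
        total += value
    return total / len(trajectories)

# Toy example: one state, two actions, made-up logged data.
logged = [[("s", 0, 1.0), ("s", 1, 0.0)], [("s", 1, 0.0), ("s", 0, 1.0)]]
behaviour = lambda s, a: 0.5                      # logged policy: uniform over actions
evaluation = lambda s, a: 0.9 if a == 0 else 0.1  # policy we want to evaluate
print(pdis_estimate(logged, evaluation, behaviour))
```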

Bio: Emma Brunskill is an assistant professor of computer science and an affiliate professor of machine learning at Carnegie Mellon University. She is a Rhodes Scholar, a Microsoft Faculty Fellow, an NSF CAREER awardee and an ONR Young Investigator Program recipient. Her work has been recognized with best paper nominations at the Educational Data Mining conference (2012, 2013) and the Computer Human Interaction conference (2014), and a best paper award at the Reinforcement Learning and Decision Making Conference (2015).

 

March 8 - Cynthia Sung, MIT, Computational Tools for Robot Design: A Composition Approach (video)

As robots become more prevalent in society, they must develop an ability to deal with more diverse situations. This ability entails customizability not only of software intelligence, but also of hardware. However, designing a functional robot remains challenging and often involves many iterations of design and testing, even for skilled designers. My goal is to create computational tools for making functional machines, allowing future designers to quickly improvise new hardware.

 

In this talk, I will discuss one possible approach to automated design using composition. I will describe our origami-inspired print-and-fold process that allows entire robots to be fabricated within a few hours, and I will demonstrate how foldable modules can be composed together to create foldable mechanisms and robots. The modules are represented parametrically, enabling a small set of modules to describe a wide range of geometries and also allowing geometries to be optimized in a straightforward manner. I will also introduce a tool that we have developed that combines this composition approach with simulations to help human designers of all skill levels to design and fabricate custom functional robots.

 

Bio: Cynthia Sung is a Ph.D. candidate working with Prof. Daniela Rus in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT). She received a B.S. in Mechanical Engineering from Rice University in 2011 and an M.S. in Electrical Engineering and Computer Science from MIT in 2013. Cynthia is a recipient of the NDSEG and NSF graduate fellowships. Her research interests include computational design, folding theory, and rapid fabrication, and her current work focuses on algorithms for synthesis and analysis of engineering designs.

 

March 1 - Kris Hauser, Duke, Motion Planning for Real World Robots (video)

Motion planning – the problem of computing physical actions to complete a specified task – is a fundamental problem in robotics, and has inspired some of the most rigorous and beautiful theoretical results in robotics research. But as robots proliferate in real-world applications like household service, driverless cars, warehouse automation, minimally-invasive surgery, search-and-rescue, and unmanned aerial vehicles, we are beginning to see the classical theory falter in light of the new reality of modern robotics practice. Today’s robots must handle large amounts of noisy sensor data, uncertainty, underspecified models, nonlinear and hysteretic dynamic effects, exotic objective functions and constraints, and real-time demands. This talk will present recent efforts to bring motion planners to bear on real robots, along four general directions: 1) improving planning algorithm performance, 2) broadening the scope of problems that can be addressed by planners, 3) incorporating richer, higher-fidelity models into planning, and 4) improving workflows for integrating planners into robot systems. This research is applied to a variety of systems, including ladder climbing in the DARPA Robotics Challenge, the Duke rock-climbing robot project, semiautonomous mobile manipulators, and object manipulation in the Amazon Picking Challenge.

 

Bio: Kris Hauser is an Associate Professor at the Pratt School of Engineering at Duke University with a joint appointment in the Electrical and Computer Engineering Department and the Mechanical Engineering and Materials Science Department. He received his PhD in Computer Science from Stanford University in 2008, bachelor's degrees in Computer Science and Mathematics from UC Berkeley in 2003, and worked as a postdoctoral fellow at UC Berkeley. He then joined the faculty at Indiana University from 2009-2014, where he started the Intelligent Motion Lab, and began his current position at Duke in 2014. He is a recipient of a Stanford Graduate Fellowship, Siebel Scholar Fellowship, Best Paper Award at IEEE Humanoids 2015, and an NSF CAREER award. 

Research interests include robot motion planning and control, semiautonomous robots, and integrating perception and planning, as well as applications to intelligent vehicles, robotic manipulation, robot-assisted medicine, and legged locomotion. 

 

February 23 - Patrick Wensing, MIT MechE, Control Design for Legged Robots: Physical Principles Enabling Dynamic Mobility (video)

Abstract: Recent technological advances have given rise to a new generation of versatile legged robots. These machines are envisioned to replace first responders in disaster scenarios and enable unmanned exploration of distant planets. To achieve these aims, however, our robots must be able to manage physical interaction through contact to move through unstructured terrain. This talk reports on the development of control systems for legged robots to achieve unprecedented levels of dynamic mobility by addressing many critical problems for contact interaction with the environment. Drawing on key insights from biomechanics, the talk will open with a description of optimization-based balance control algorithms for high-speed locomotion in humanoid robots. It will then present design features of the MIT Cheetah 2 quadruped robot that enable dynamic locomotion in experimental hardware. A model predictive control framework for this robot will be described which enables the Cheetah to autonomously jump over obstacles with a maximum height of 40 cm (80% of leg length) while running at 2.5 m/s. Across these results, dynamic physical interaction with the environment is exploited, rather than avoided, to achieve new levels of performance.

 

February 16 - Adam Bry, Skydio, Algorithms and challenges in scaling up autonomous flight (video)

Drones hold enormous potential for consumer video, inspection, mapping, monitoring, and perhaps even delivery. They’re also natural candidates for autonomy and likely to be among the first widely-deployed systems that incorporate meaningful intelligence based on computer vision and robotics research. In this talk I’ll discuss the trajectory of hobbies, research, and work that led me to start Skydio. I’ll cover some of the algorithms developed during my research at MIT which culminated in a fixed-wing vehicle that could navigate obstacles at high speeds. I’ll also present some of the work that we’ve done at Skydio in motion planning and perception, along with the challenges involved in building a robust robotics software system that needs to work at scale.

 

Bio: Adam Bry is co-founder and CEO of Skydio, a venture-backed drone startup based in the Bay Area. Prior to Skydio he helped start Project Wing at Google[x], where he worked on the flight algorithms and software. He holds an SM in Aero/Astro from MIT and a BS in Mechanical Engineering from Olin College. Adam grew up flying radio-controlled airplanes and is a former national champion in precision aerobatics.

http://www.skydio.com/

 

February 9 - Rob Wood, Harvard, Manufacturing, actuation, sensing, and control for robotic insects

As the characteristic size of a flying robot decreases, the challenges for successful flight revert to basic questions of fabrication, actuation, fluid mechanics, stabilization, and power -- whereas such questions have in general been answered for larger aircraft. When developing a robot on the scale of a housefly, all hardware must be developed from scratch as there is nothing "off-the-shelf" which can be used for mechanisms, sensors, or computation that would satisfy the extreme mass and power limitations. With these challenges in mind, this talk will present progress in the essential technologies for insect-scale robots and the latest flight experiments with robotic insects.

http://micro.seas.harvard.edu

 

December 15 - Metin Sitti, Max Planck Institute, Mobile Microrobotics (no video)

Untethered mobile microrobots have the unique capability of directly accessing small spaces and scales. Due to their small size and micron-scale physics and dynamics, they could be agile and portable, and could be inexpensive and deployed in large numbers if mass-produced. Mobile microrobots would have high-impact applications in health care, bioengineering, mobile sensor networks, desktop micromanufacturing, and inspection. In this presentation, mobile microrobots from a few micrometers up to hundreds of micrometers in overall size, with various locomotion capabilities, are presented. Going down to the micron scale, one of the grand challenges for mobile microrobots is the miniaturization limit on on-board actuation, powering, sensing, processing, and communication components. Two alternative approaches are explored in this talk to solve the actuation and powering challenges. First, biological cells, e.g. bacteria, attached to the surface of a synthetic microrobot are used as on-board microactuators and microsensors, exploiting the chemical energy inside or outside the cell in physiological fluids. Bacteria-propelled random microswimmers are steered using chemical and pH gradients in the environment and remote magnetic fields towards future targeted drug delivery and environmental remediation applications. As the second approach, external actuation of untethered magnetic microrobots using remote magnetic fields in enclosed spaces is demonstrated. New magnetic microrobot locomotion principles based on rotational stick-slip and rolling dynamics are proposed. Novel magnetic composite materials are used to address and control teams of microrobots and to create novel soft actuators and programmable soft matter. Untethered microrobot teams are demonstrated to manipulate live cells and microgels with embedded cells for bioengineering applications, and to self-assemble into different patterns with remote magnetic control.

 

Bio: Metin Sitti received the BSc and MSc degrees in electrical and electronics engineering from Bogazici University, Istanbul, Turkey, in 1992 and 1994, respectively, and the PhD degree in electrical engineering from the University of Tokyo, Tokyo, Japan, in 1999. He was a research scientist at UC Berkeley during 1999-2002. He is currently a director at the Max Planck Institute for Intelligent Systems and a professor in the Department of Mechanical Engineering and the Robotics Institute at Carnegie Mellon University. His research interests include small-scale physical intelligence, mobile microrobots, bio-inspired millirobots, smart and soft micro/nanomaterials, and programmable self-assembly. He is an IEEE Fellow. He received the SPIE Nanoengineering Pioneer Award in 2011 and the NSF CAREER Award in 2005. He received the IEEE/ASME Best Mechatronics Paper Award in 2014, the Best Poster Award at the Adhesion Conference in 2014, the Best Paper Award at the IEEE/RSJ International Conference on Intelligent Robots and Systems in 2009 and 1998, first prize in the World RoboCup Micro-Robotics Competition in 2012 and 2013, the Best Biomimetics Paper Award at the IEEE Robotics and Biomimetics Conference in 2004, and the Best Video Award at the IEEE Robotics and Automation Conference in 2002. He is the editor-in-chief of the Journal of Micro-Bio Robotics.

 

December 8 - Art Kuo, University of Michigan, Robot vs. Human: The Next Round of Legged Locomotion Battles (video, CSAIL only)

An enduring myth in the world of legged locomotion is that a robot should model itself upon the human. The human presents a standard for performance, a recipe for control strategy, and a blueprint for design. Not only is that myth false, but it has also (fortunately) been ignored. To date, robot locomotion has benefitted from humans and animals, and our understanding of them, only in deciding how many legs to have. The reason is that hardware technology is presently far from making truly human-like locomotion possible, or even a good idea. This raises the question of what the next generation of legged robots should try to be. The correct answer is anything but humans, but even achieving that means there is reason to understand humans. I will demonstrate a few unique ways that humans walk dynamically, and how they are optimal for humans and therefore suboptimal for robots. From a biomechanical perspective, I will muse on some interesting challenges for future robots that will act more dynamically, and one day perhaps even approach the standard set by humans.

 

Bio: Art Kuo is Professor of Mechanical Engineering and Biomedical Engineering at the University of Michigan. He directs the Human Biomechanics and Control Laboratory, which studies the basic principles of locomotion and other movements, and applies those principles to the development of robotic, assistive, and therapeutic devices to aid humans. Current interests include walking and running on uneven terrain, development of wearable sensors to track foot motion in the wild, and devices to improve the economy of locomotion in the impaired.

 

December 1 - Louis Whitcomb, Johns Hopkins, Nereid Under-Ice: A Remotely Operated Underwater Robotic Vehicle for Oceanographic Access Under Ice

This talk reports recent advances in underwater robotic vehicle research to enable novel oceanographic operations in extreme ocean environments, with a focus on two recent novel vehicles developed by a team comprising the speaker and his collaborators at the Woods Hole Oceanographic Institution. First, the development and operation of the Nereus underwater robotic vehicle will be briefly described, including successful scientific observation and sampling dive operations at hadal depths of 10,903 m on an NSF-sponsored expedition to the Challenger Deep of the Mariana Trench – the deepest place on Earth. Second, the development and first sea trials of the new Nereid Under-Ice (NUI) underwater vehicle will be described. NUI is a novel remotely-controlled underwater robotic vehicle capable of being teleoperated under ice under remote real-time human supervision. We report the results of NUI’s first under-ice deployments during a July 2014 expedition aboard R/V Polarstern at 83° N, 6° W in the Arctic Ocean – approximately 200 km NE of Greenland.

 

Bio: Louis L. Whitcomb is Professor and Chairman of the Department of Mechanical Engineering, with a secondary appointment in Computer Science, at the Johns Hopkins University’s Whiting School of Engineering. He completed a B.S. in Mechanical Engineering in 1984 and a Ph.D. in Electrical Engineering in 1992 at Yale University. From 1984 to 1986 he was a research and development engineer with the GMFanuc Robotics Corporation in Detroit, Michigan. He joined the Department of Mechanical Engineering at the Johns Hopkins University in 1995, after postdoctoral fellowships at the University of Tokyo and the Woods Hole Oceanographic Institution. His research focuses on the navigation, dynamics, and control of robot systems – including industrial, medical, and underwater robots. Whitcomb is a principal investigator of the Nereus and Nereid Under-Ice projects. He is the former (founding) Director of the JHU Laboratory for Computational Sensing and Robotics. He received teaching awards at Johns Hopkins in 2001, 2002, 2004, and 2011, was awarded a National Science Foundation CAREER Award and an Office of Naval Research Young Investigator Award, and is a Fellow of the IEEE. He is also an Adjunct Scientist in the Department of Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution.

 

November 24 - Liam Paull, MIT CSAIL, A Cooperative Area Coverage Framework that Accounts for Uncertainty and its Application to Autonomous Seabed Surveying (video)

In this talk, we investigate the area coverage problem with mobile robots whose localization uncertainty is time-varying and significant. The vast majority of the literature on robotic area coverage assumes that the robot's location estimate error is either zero or at least bounded. We remove this assumption and develop a probabilistic representation of coverage. Once we have formally connected robot sensor uncertainty with area coverage, we motivate an adaptive sliding-window filter pose estimator that is able to provide an arbitrarily close approximation to the full maximum a posteriori estimate with a computation cost that does not grow with time. An adaptive planning strategy is also presented that is able to automatically exploit conditions of low vehicle uncertainty to more aggressively cover area in real time. This results in faster progress towards the coverage goal than overly conservative planners that assume worst-case error at all times.

We further extend this to the multi-robot case, where robots are able to communicate through a (possibly faulty) channel and make relative measurements of one another. In this case, area coverage can be achieved more quickly since the uncertainty of the robot trajectories will be reduced. We apply the framework to the scenario of mapping an area of seabed with autonomous marine vehicles for mine-hunting purposes. The results show that the vehicles are able to achieve complete coverage with high confidence notwithstanding poor navigational sensors, and the resulting path lengths are shorter than those of worst-case planners.
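A small Monte-Carlo sketch of the probabilistic-coverage idea, with assumed numbers: the probability that a seabed point fell inside the sensor swath, given a Gaussian estimate of the vehicle pose.

```python
# Sketch of "probabilistic coverage": given a Gaussian estimate of the vehicle
# pose, estimate by sampling the probability that a seabed point was actually
# inside the sensor swath. All numbers are assumed for illustration.
import numpy as np

def coverage_probability(point, pose_mean, pose_cov, swath_radius, samples=10000, seed=0):
    rng = np.random.default_rng(seed)
    poses = rng.multivariate_normal(pose_mean, pose_cov, size=samples)
    distances = np.linalg.norm(poses - np.asarray(point), axis=1)
    return float(np.mean(distances <= swath_radius))

pose_mean = np.array([0.0, 0.0])            # estimated vehicle position (m)
pose_cov = np.diag([4.0, 4.0])              # large navigation uncertainty (m^2)
print(coverage_probability(point=[1.0, 0.0], pose_mean=pose_mean,
                           pose_cov=pose_cov, swath_radius=3.0))
```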

 

November 17 - Sami Haddadin, Hannover, Robots For Humans (video)

Enabling robots for direct physical interaction and cooperation with humans and potentially unknown environments has been one of robotics research's primary goals for decades. I will outline how our work on human-centered robot design, control, and planning may let robots for humans become a commodity in our near-future society. For this, we developed new generations of impedance-controlled ultra-lightweight robots, possibly equipped with Variable Impedance Actuation, previously at DLR and now in my new lab, which are sought to safely act as human assistants and collaborators at high performance over a variety of application domains. These may involve, for example, industrial assembly and manufacturing, medical assistance, or healthcare helpers in everyone's home, but also neurally controlled assistive devices. A recent generation of lightweight robots was commercialized as the KUKA LBR iiwa, which is considered to be the first commercial representative of this new class of robots. Based on a smart mechatronics design, a robot (be it a manipulator, humanoid, or flying system) has to be equipped with, and also learn, the skills that enable it to perceive and manipulate its surroundings. Furthermore, it shall deduce appropriate actions for successfully carrying out its given task, possibly in close collaboration with humans. At the same time, the primary objective of a robot's actions around humans is to ensure that even in case of malfunction or user errors no human shall be harmed, nor its surroundings be damaged. For this, instantaneous, truly human-safe, and intelligent context-based force-sensitive controls and reactions to unforeseen events, partly inspired by the human motor control system, become crucial.

 

Bio: Sami Haddadin is Full Professor and Director of the Institute of Automatic Control (IRT) at Leibniz University Hanover (LUH), Germany. Until 2014 he was Scientific Coordinator "Terrestrial Assistance Systems" and "Human-Centered Robotics" at the DLR Robotics and Mechatronics Center. He was a visiting scholar at Stanford University in 2011 and a consulting scientist of Willow Garage, Inc., Palo Alto until 2013. He received degrees in Electrical Engineering (2006), Computer Science (2009), and Technology Management (2008) from TUM and LMU, respectively. He obtained his PhD with summa cum laude from RWTH Aachen in 2011. His research topics include physical Human-Robot Interaction, nonlinear robot control, real-time motion planning, real-time task and reflex planning, robot learning, optimal control, human motor control, variable impedance actuation, neuro-prosthetics, and safety in robotics. He was in the program/organization committee of several international robotics conferences and a guest editor of IJRR. He is an associate editor of the IEEE Transactions on Robotics. He published more than 100 scientific articles in international journals, conferences, and books. He received five best paper and video awards at ICRA/IROS, the 2008 Literati Best Paper Award, the euRobotics Technology Transfer Award 2011, and the 2012 George Giralt Award. He won the IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award in 2011 and 2013. He is a recipient of the 2015 IEEE/RAS Early Career Award, the 2015 RSS Early Career Spotlight, the 2015 Alfried Krupp Award for Young Professors and was selected as 2015 Capital Young Elite Leader under 40 in Germany for the domain "Politics, State & Society".

 

November 10 - Dmitry Berenson, WPI, Toward General-Purpose Manipulation of Deformable Objects (video)

Imagine a robot that could perceive and manipulate rigid objects as skillfully as a human adult. Would a robot that had such amazing capabilities be able to perform the range of practical manipulation tasks we expect in settings such as the home? Consider that this robot would still be unable to prepare a meal, do laundry, or make a bed because these tasks involve deformable object manipulation. Unlike in rigid-body manipulation, where methods exist for general-purpose pick-and-place tasks regardless of the size and shape of the object, no such methods exist for a similarly broad and practical class of deformable object manipulation tasks. The problem is indeed challenging, as these objects are not straightforward to model and have infinite-dimensional configuration spaces, making it difficult to apply established motion planning approaches. Our approach seeks to bypass these difficulties by representing deformable objects using simplified geometric models at both the global and local planning levels. Though we cannot predict the state of the object precisely, we can nevertheless perform tasks such as cable-routing, cloth folding, and surgical probe insertion in geometrically-complex environments. Building on this work, our new projects in this area aim to blend exploration of the model space with goal-directed manipulation of deformable objects and to generalize the methods we have developed to motion planning for soft robot arms, where we can exploit contact to mitigate the actuation uncertainty inherent in these systems.

 

Bio: Dmitry Berenson received a BS in Electrical Engineering from Cornell University in 2005 and received his Ph.D. degree from the Robotics Institute at Carnegie Mellon University in 2011, where he was supported by an Intel PhD Fellowship. He completed a post-doc at UC Berkeley in 2011 and started as an Assistant Professor in Robotics Engineering and Computer Science at WPI in 2012. He founded and directs the Autonomous Robotic Collaboration (ARC) Lab at WPI, which focuses on motion planning, manipulation, and human-robot collaboration.

 

November 3 - Andrea Censi, MIT, Everything is the Same: Monotone Co-Design Problems (video)

I will present some recent work towards developing a "theory of co-design" that is rich enough to represent the trade-offs in the design of complex robotic systems, including the recursive constraints that involve energetics, propulsion, communication, computation, sensing, control, perception, and planning. I am developing a formalism in which "design problems" are the primitive objects, and multiple design problems can be composed to obtain "co-design" problems through operations analogous to series, parallel, and feedback composition. Certain monotonicity properties are preserved by these operations, from which it is possible to conclude existence and uniqueness of minimal feasible design trade-offs, as well as obtaining a systematic solution procedure. The mathematical tools used are the *really elementary* parts of the theory of fixed points on partially ordered sets (Kleene, Tarski, etc), of which no previous knowledge is assumed.  We will conclude that: choosing the smallest battery for a drone, optimizing your controller to work over a network of limited bandwidth, and defining the semantics of programming languages, are one and the same problem.
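The drone-battery example from the abstract can be made concrete with a toy least-fixed-point computation (all constants below are invented): the battery must power flight for the total mass, yet its own mass adds to that total, and Kleene iteration from the bottom element yields the minimal feasible battery.

```python
# Toy least-fixed-point computation behind the "smallest battery for a drone"
# example: iterate a monotone map from the bottom element (zero battery mass)
# until it converges to the minimal feasible battery. Constants are made up.
PAYLOAD_KG = 1.5
ENDURANCE_H = 0.5
POWER_PER_KG_W = 200.0          # assumed hover power per kg of total mass
BATTERY_WH_PER_KG = 150.0       # assumed specific energy

def battery_needed(battery_kg):
    """Monotone map: battery mass required to fly the resulting total mass."""
    total_kg = PAYLOAD_KG + battery_kg
    energy_wh = POWER_PER_KG_W * total_kg * ENDURANCE_H
    return energy_wh / BATTERY_WH_PER_KG

b = 0.0                          # bottom element of the chain
for _ in range(100):             # Kleene iteration: b <= f(b) <= f(f(b)) <= ...
    nxt = battery_needed(b)
    if abs(nxt - b) < 1e-9:
        break
    b = nxt
print(f"minimal feasible battery mass: {b:.3f} kg")
```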

 

October 27 - Aaron Steinfeld, CMU, Understanding and Creating Appropriate Robot Behavior (video)

End users expect appropriate robot actions, interventions, and requests for human assistance. As with most technologies, robots that behave in unexpected and inappropriate ways face misuse, abandonment, and sabotage. Complicating this challenge are human misperceptions of robot capability, intelligence, and performance. This talk will summarize work from several projects focused on this human-robot interaction challenge. Findings and examples will be shown from work on human trust in robots, deceptive robot behavior, robot motion, robot characteristics, and interaction with humans who are blind. I will also describe some lessons learned from related work in crowdsourcing (e.g., Tiramisu Transit) to help inform methods for enabling and supporting contributions by end users and local experts.

Bio: Aaron Steinfeld is an Associate Research Professor in the Robotics Institute (RI) at Carnegie Mellon University. He received his BSE, MSE, and Ph.D. degrees in Industrial and Operations Engineering from the University of Michigan and completed a Post Doc at U.C. Berkeley. He is the Co-Director of the Rehabilitation Engineering Research Center on Accessible Public Transportation (RERC-APT), Director of the DRRP on Inclusive Cloud and Web Computing, and the area lead for transportation related projects in the Quality of Life Technology Center (QoLT). His research focuses on operator assistance under constraints, i.e., how to enable timely and appropriate interaction when technology use is restricted through design, tasks, the environment, time pressures, and/or user abilities. His work includes intelligent transportation systems, crowdsourcing, human-robot interaction, rehabilitation, and universal design.

 

October 22 - David Held, Stanford University Using Motion to Understand Objects in the Real World (no video)

Many robots today are confined to operate in relatively simple, controlled environments. One reason for this is that current methods for processing visual data tend to break down when faced with occlusions, viewpoint changes, poor lighting, and other challenging but common situations that occur when robots are placed in the real world. I will show that we can train robots to handle these variations by inferring the causes behind visual appearance changes. If we model how the world changes over time, we can be robust to the types of changes that objects often undergo. I demonstrate this idea in the context of autonomous driving, showing how it improves performance on three different tasks: velocity estimation, segmentation, and tracking with neural networks. By inferring the causes of appearance changes over time, we can make our methods more robust to a variety of challenging situations that commonly occur in the real world, thus enabling robots to come out of the factory and into our lives.

 

Bio: David Held is a Computer Science Ph.D. student at Stanford working with Sebastian Thrun and Silvio Savarese. His research interests include robotics, vision, and machine learning, with applications to tracking and object detection for autonomous driving. David has previously been a researcher at the Weizmann Institute and has worked in industry as a software developer. David has a Master's Degree in Computer Science from Stanford and B.S. and M.S. degrees in Mechanical Engineering from MIT.
 

October 20 - Robotics Student/Faculty Mixer

 

October 7 - Matt Klingensmith, CMU  Articulated SLAM (no video)

Uncertainty is a central problem in robotics. In order to understand and interact with the world, robots need to interpret signals from noisy sensors to reconstruct clear models not only of the world around them, but also their own internal state. For example, a mobile robot navigating an unknown space must simultaneously reconstruct a model of the world around it, and localize itself against that model using noisy sensor data from wheel odometry, lasers, cameras, or other sensors. This problem (called the SLAM problem) is very well-studied in the domain of mobile robots. Less well-studied is the equivalent problem for robot manipulators. That is, given a multi-jointed robot arm with a noisy hand-mounted sensor, how can the robot simultaneously estimate its state and generate a coherent 3D model of the world? We call this the articulated SLAM problem.

Given actuator uncertainty and sensor uncertainty, what is the correct way of simultaneously constructing a model of the world and estimating the robot's state? In this work, we show that certain contemporary visual SLAM techniques can be mapped to the articulated SLAM problem by using the robot's joint configuration space as the state space for localization, rather than the typical SE(3). We map one kind of visual SLAM technique, Kinect Fusion, to the robot's configuration space, and show how the robot's joint encoders can be used appropriately to inform the pose of the camera. Critical to our analysis is the idea that the robot's configuration is not merely a sensor that informs the pose of the camera, but rather the underlying latent state of the system. Tracking the configuration of the robot directly allows us to build algorithms on top of the SLAM system which depend on knowledge of the joint angles (such as motion planners and control systems).
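To make the configuration-space-as-state idea concrete, here is a minimal illustrative sketch (ours, not the authors' system) that fuses joint-encoder readings with a camera-derived pose measurement while keeping the joint configuration as the latent state; the forward-kinematics map, its Jacobian, and all numbers are hypothetical placeholders.

```python
# Minimal sketch (ours, not the authors' code): treat the joint configuration q,
# not the camera pose, as the latent state, and fuse noisy encoder readings with
# a camera-pose estimate (e.g., from dense tracking) via one EKF-style correction.
import numpy as np

def forward_kinematics(q):
    """Hypothetical stand-in for the camera forward kinematics.
    A real version would return a 6-DoF pose; here a toy 2-D 'pose'."""
    J = np.array([[1.0, 0.5, 0.0],
                  [0.0, 1.0, 0.3]])   # toy Jacobian of the hand-mounted camera
    return J @ q

def fuse(q_pred, P_pred, z_cam, R_cam, fk_jacobian):
    """q_pred, P_pred : predicted joint state (from encoders) and its covariance
       z_cam, R_cam   : camera-pose measurement and its covariance
       fk_jacobian    : Jacobian of forward kinematics at q_pred"""
    H = fk_jacobian
    S = H @ P_pred @ H.T + R_cam                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    q_new = q_pred + K @ (z_cam - forward_kinematics(q_pred))
    P_new = (np.eye(len(q_pred)) - K @ H) @ P_pred
    return q_new, P_new

# Toy usage: encoders give a prior on q; the camera measurement corrects it.
q_enc = np.array([0.10, -0.20, 0.05])
P_enc = 0.01 * np.eye(3)
z_cam = forward_kinematics(np.array([0.12, -0.18, 0.04])) + 0.001
R_cam = 0.005 * np.eye(2)
J = np.array([[1.0, 0.5, 0.0], [0.0, 1.0, 0.3]])
q_est, P_est = fuse(q_enc, P_enc, z_cam, R_cam, J)
```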

 

 

Spring 2015 Campus-wide Robotics Seminar (sponsored by Aurora Flight Sciences and The MathWorks)  (11am-noon in 1-190)

Seminar series youtube channel

May 12 - Dieter Fox, UW  RGB-D Perception in Robotics

RGB-D cameras provide per pixel color and depth information at high frame rate and resolution. Gaming and entertainment applications such as the Microsoft Kinect system resulted in the mass production of RGB-D cameras at extremely low cost, also making them available for a wide range of robotics applications. In this talk, I will provide an overview of depth camera research done in the Robotics and State Estimation Lab over the last six years. This work includes 3D mapping of static and dynamic scenes, autonomous object modeling and recognition, and articulated object tracking.

 

Bio: Dieter Fox is a Professor in the Department of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. From 2009 to 2011, he was also Director of the Intel Research Labs Seattle. He currently serves as the academic PI of the Intel Science and Technology Center for Pervasive Computing hosted at UW. Dieter obtained his Ph.D. from the University of Bonn, Germany. Before going to UW, he spent two years as a postdoctoral researcher at the CMU Robot Learning Lab. Fox's research is in artificial intelligence, with a focus on state estimation applied to robotics and activity recognition. He has published over 150 technical papers and is co-author of the textbook "Probabilistic Robotics". He is an IEEE and AAAI Fellow, and has received several best paper awards at major robotics and AI conferences. He is an editor of the IEEE Transactions on Robotics, was program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and served as the program chair of the 2013 Robotics: Science and Systems conference.

May 5 - Russ Tedrake, MIT - CSAIL  MIT's Entry in the DARPA Robotics Challenge: Real-world, Interactive-rate Optimization for Humanoid Robots

On June 5-6 of this year, 25 of the most advanced robots in the world will descend on Pomona, California to compete in the final DARPA Robotics Challenge competition (http://theroboticschallenge.org). Each of these robots will be sent into a disaster response situation to perform complex locomotion and manipulation tasks with limited power and comms. Team MIT is one of only two academic teams that have survived all of the qualifying rounds, and we are working incredibly hard to showcase the power of our relatively formal approaches to perception, estimation, planning, and control.

 

In this talk, I’ll dig into a number of technical research nuggets that have come to fruition during this effort, including an optimization-based planning and control method for robust and agile online gait and manipulation planning, efficient mixed-integer optimization for negotiating rough terrain, convex relaxations for grasp optimization, powerful real-time perception systems, and essentially drift-free state estimation. I’ll discuss the formal and practical challenges of fielding these on a very complex (36+ degree of freedom) humanoid robot that absolutely has to work on game day.

Relevant URLs: http://drc.mit.edu, http://youtube.com/mitdrc

Apr 28 - Ioannis Poulakakis, University of Delaware  Legged Robots Across Scales: Integrating Motion Planning and Control through Canonical Locomotion Models

Abstract: On a macroscopic level, legged locomotion can be understood through reductive canonical models -- often termed templates -- the purpose of which is to capture the dominant features of an observed locomotion behavior without delving into the fine details of a robot’s (or animal’s) structure and morphology. Such models offer unifying, platform-independent, descriptions of task-level behaviors, and inform control design for legged robots. This talk will discuss reductive locomotion models for diverse legged robots, ranging from slow-moving, palm-size, eight-legged crawlers to larger bipeds and quadrupeds, and will focus on the role of such models in integrating locomotion control and motion planning within a unifying framework that translates task-level specifications to suitable low-level control actions that harness the locomotion capabilities of the robot platforms.

 

Bio: Prof. Poulakakis earned his Ph.D. in Electrical Engineering from the University of Michigan in 2008, served as a postdoctoral research associate at Princeton University for two years, and then joined the Department of Mechanical Engineering at the University of Delaware in 2010 as an Assistant Professor. His research interests are in the area of dynamics and control with application to bio-inspired robotic systems, specifically legged robots. In 2014 he received a Faculty Early Career Development Award from the National Science Foundation to investigate task planning and motion control for legged robots at different scales.

Apr 21 - No seminar - MIT MONDAY SCHEDULE (due to Patriots Day)

Apr 14 - Ted Adelson, MIT  GelSight sensors for high resolution touch sensing in robotics, and many other things

GelSight is a technology for high resolution touch sensing, which has a wide range of applications, some unexpected. A sensor consists of a slab of clear elastomer covered with a reflective membrane, along with an embedded camera and light system. The goal was to build a robot fingertip that could match the softness and sensitivity of human skin. Using machine vision (mainly photometric stereo) one can touch a surface and quickly derive high resolution 3D geometry, allowing estimates of shape, texture, and force. By adding internal markers one can estimate tangential interactions (friction, shear and slip). With collaborators we are learning how to use this information in robotic manipulation and surface sensing. GelSight's extraordinarily high resolution has also led to a spin-off company, GelSight Inc., which makes instruments that measure micron-scale 3D geometry. Variants are being used commercially to support 3D printing, to enable forensics on bullet casings, to study human skin, and (in a large version) to measure feet for custom insoles.
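As a concrete illustration of the photometric-stereo step mentioned above (a generic textbook version, not GelSight's actual pipeline), the sketch below recovers per-pixel surface normals from images taken under known light directions, assuming a Lambertian surface; the array shapes and toy data are hypothetical.

```python
# Sketch of classic photometric stereo (ours, for illustration): with a Lambertian
# model and >= 3 known light directions, per-pixel intensities give the surface
# normal by least squares.
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """intensities : (k, H, W) images under k lights
       light_dirs  : (k, 3) unit light directions
       returns     : (H, W, 3) unit surface normals (albedo normalized out)."""
    k, H, W = intensities.shape
    I = intensities.reshape(k, -1)                       # (k, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # solves L @ G ~= I
    G = G.T.reshape(H, W, 3)                             # albedo-scaled normals
    norm = np.linalg.norm(G, axis=2, keepdims=True)
    return G / np.maximum(norm, 1e-9)

# Toy usage: a synthetic flat patch lit from three directions.
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
true_n = np.array([0.0, 0.0, 1.0])
imgs = np.tile((L @ true_n)[:, None, None], (1, 4, 4))   # Lambertian shading
normals = photometric_stereo(imgs, L)                    # ~= [0, 0, 1] everywhere
```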

Apr 7 - Allison Okamura, Stanford University Department of Mechanical Engineering  Modeling, Planning, and Control for Robot-Assisted Medical Interventions

Abstract: Many medical interventions today are qualitatively and quantitatively limited by human physical and cognitive capabilities. This talk will discuss several robot-assisted intervention techniques that will extend humans' ability to carry out interventions more accurately and less invasively. First, I will describe the development of minimally invasive systems that deliver therapy by steering needles through deformable tissue and around internal obstacles to reach specified targets. Second, I will review recent results in haptic (touch) feedback for robot-assisted teleoperated surgery, in particular the display of tissue mechanical properties. Finally, I will demonstrate the use of dynamic models of the body to drive novel rehabilitation strategies. All of these systems incorporate one or more key elements of robotic interventions: (1) quantitative descriptions of patient state, (2) the use of models to plan interventions, (3) the design of devices and control systems that connect information to physical action, and (4) the inclusion of human input in a natural way.

 

Biosketch: Allison M. Okamura received the BS degree from the University of California at Berkeley in 1994, and the MS and PhD degrees from Stanford University in 1996 and 2000, respectively, all in mechanical engineering. She is currently an Associate Professor in the mechanical engineering department at Stanford University, with a courtesy appointment in Computer Science. She is Editor-in-Chief of the IEEE International Conference on Robotics and Automation and an IEEE Fellow. Her academic interests include haptics, teleoperation, virtual and augmented reality, medical robotics, neuromechanics and rehabilitation, prosthetics, and engineering education. Outside academia, she enjoys spending time with her husband and two children, running, and playing ice hockey. For more information about her research, please see the Collaborative Haptics and Robotics in Medicine (CHARM) Laboratory website: http://charm.stanford.edu.

Mar 31 - student/faculty mixer

Mar 24 - MIT SPRING VACATION Special : Frank Dellaert, Georgia Tech  Factor Graphs for Flexible Inference in Robotics and Vision

Abstract: Simultaneous Localization and Mapping (SLAM) and Structure from Motion (SFM) are important and closely related problems in robotics and vision. I will show how both SLAM and SFM instances can be posed in terms of a graphical model, a factor graph, and that inference in these graphs can be understood as variable elimination. The overarching theme of the talk will be to emphasize the advantages and intuition that come with seeing these problems in terms of graphical models. For example, while the graphical model perspective is completely general, linearizing the non-linear factors and assuming Gaussian noise yields the familiar direct linear solvers such as Cholesky and QR factorization. Based on these insights, we have developed both batch and incremental algorithms defined on graphs in the SLAM/SFM domain. I will also discuss my recent work on using polynomial bases for trajectory optimization, inspired by pseudospectral optimal control, which is made easy by the new Expressions language in GTSAM 4, currently under development.
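To illustrate the point about Gaussian noise and direct solvers (our toy example, not GTSAM code), the sketch below assembles the information matrix of a three-variable, one-dimensional pose chain and recovers the MAP estimate with a Cholesky factorization; the factors and measurements are invented for the example.

```python
# Toy sketch (ours): with Gaussian noise, the factor graph's MAP estimate reduces
# to a sparse linear least-squares problem that a direct solver handles.
# Tiny 1-D pose chain: a prior on x0 and two odometry (between) factors.
import numpy as np

A = np.array([[1.0, 0.0, 0.0],     # prior factor on x0
              [-1.0, 1.0, 0.0],    # between factor x0 -> x1
              [0.0, -1.0, 1.0]])   # between factor x1 -> x2
b = np.array([0.0, 1.0, 1.0])      # measurements (unit noise assumed)

Lambda = A.T @ A                   # information matrix (sparse in general)
eta = A.T @ b
Lchol = np.linalg.cholesky(Lambda) # variable elimination == sparse factorization
y = np.linalg.solve(Lchol, eta)    # forward substitution
x = np.linalg.solve(Lchol.T, y)    # back substitution -> MAP estimate [0, 1, 2]
```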

Bio: Frank Dellaert is currently on leave from the Georgia Institute of Technology for a stint as Chief Scientist of Skydio, a startup founded by MIT grads to create intuitive interfaces for micro-aerial vehicles. When not on leave, he is a Professor in the School of Interactive Computing and Director of the Robotics PhD program at Georgia Tech. His research interests lie in the overlap of Robotics and Computer vision, and he is particularly interested in graphical model techniques to solve large-scale problems in mapping and 3D reconstruction. You can find out about his group’s research and publications at https://borg.cc.gatech.edu and http://www.cc.gatech.edu/~dellaert. The GTSAM toolbox which embodies many of the ideas his group has worked on in the past few years is available for download at http://tinyurl.com/gtsam. But really hardcore users can ask to be plugged into our BitBucket motherlode. Just send mail to frank@cc.gatech.edu.

Mar 17 - Leslie Pack Kaelbling, MIT CSAIL  Making Robots Behave

The fields of AI and robotics have made great improvements in many individual subfields, including in motion planning, symbolic planning, probabilistic reasoning, perception, and learning. Our goal is to develop an integrated approach to solving very large problems that are hopelessly intractable to solve optimally. We make a number of approximations during planning, including serializing subtasks, factoring distributions, and determinizing stochastic dynamics, but regain robustness and effectiveness through a continuous state-estimation and replanning process. This approach is demonstrated in three robotic domains, each of which integrates perception, estimation, planning, and manipulation.

Mar 10 - Hanu Singh, Woods Hole Oceanographic Institute  Bipolar Robotics: Exploring the Arctic and the Antarctic with a stop for some Coral Reef Ecology in the Middle

The Arctic and the Antarctic remain among the least explored parts of the world's oceans. This talk looks at efforts over the last decade to explore areas under ice which have traditionally been difficult to access. The focus of the talk will be on the robots, the role of communications over low-bandwidth acoustic links, navigation, and imaging and mapping methodologies. These issues will all be discussed within the context of real data collected on several expeditions to the Arctic and Antarctic.
http://www.whoi.edu/DSL/hanu
http://www.whoi.edu/oceanus/feature/the-jetyak
http://polardiscovery.whoi.edu/expedition2/index.html

Mar 3 - Brandon Basso, UC Berkeley  The 3D Robotics Open UAV Platform

3D Robotics is a venture-backed aerospace startup in Berkeley, California. At the heart of our platform is the Pixhawk autopilot, which runs on more UAVs than any other autopilot and represents the world's largest open-source robotics project, Ardupilot. This talk will explore the technological advancements that have enabled an entirely open and viral UAV platform, from low-level estimation to high-level system architecture. Two recent advancements will be explored in detail: efficient algorithms for state estimation using low-cost IMUs, and a cloud-based architecture for real-time uplink and downlink from any internet-connected vehicle. Some concluding thoughts on future platform evolution and the growing consumer drone space will be presented.
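As one concrete (and deliberately simple) example of low-cost IMU attitude estimation, here is a sketch of a standard complementary filter; this is our illustration of a common technique, not the Pixhawk/Ardupilot implementation, and all constants are made up.

```python
# Illustrative only: a complementary filter blending integrated gyro rate
# (accurate short-term, drifts long-term) with the accelerometer's gravity
# reference (noisy short-term, unbiased long-term).
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """pitch     : previous pitch estimate [rad]
       gyro_rate : pitch rate from the gyro [rad/s]
       accel_x/z : accelerometer readings [m/s^2]
       dt        : timestep [s]
       alpha     : blend factor (trust placed in the gyro)"""
    pitch_gyro = pitch + gyro_rate * dt          # integrate angular rate
    pitch_accel = math.atan2(accel_x, accel_z)   # gravity-based pitch
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel

# Toy usage: with a small constant gyro bias, the estimate settles at a small
# bounded offset instead of drifting without bound.
pitch = 0.0
for _ in range(500):
    pitch = complementary_filter(pitch, gyro_rate=0.01, accel_x=0.0,
                                 accel_z=9.81, dt=0.01)
```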

Feb 24 - Russell H. Taylor, The Johns Hopkins University  Medical Robotics and Computer-Integrated Interventional Medicine

Computer-integrated interventional systems (CIIS) combine innovative algorithms, robotic devices, imaging systems, sensors, and human-machine interfaces to work cooperatively with surgeons in the planning and execution of surgery and other interventional procedures. The impact of CIIS on medicine in the next 20 years will be as great as that of Computer-Integrated Manufacturing on industrial production over the past 20 years. A novel partnership between human surgeons and machines, made possible by advances in computing and engineering technology, will overcome many of the limitations of traditional surgery. By extending human surgeons’ ability to plan and carry out surgical interventions more accurately and less invasively, CIIS systems will address a vital need to greatly reduce costs, improve clinical outcomes, and improve the efficiency of health care delivery.
This talk will describe past and emerging research themes in CIIS and illustrate them with examples drawn from our current research activities within Johns Hopkins University's Engineering Research Center for Computer-Integrated Surgical Systems and Technology.

Biography

Russell H. Taylor received his Ph.D. in Computer Science from Stanford in 1976. He joined IBM Research in 1976, where he developed the AML robot language and managed the Automation Technology Department and (later) the Computer-Assisted Surgery Group before moving in 1995 to Johns Hopkins, where he is the John C. Malone Professor of Computer Science with joint appointments in Mechanical Engineering, Radiology, and Surgery and is also Director of the Engineering Research Center for Computer-Integrated Surgical Systems and Technology (CISST ERC) and of the Laboratory for Computational Sensing and Robotics (LCSR). He is the author of over 375 peer-reviewed publications, a Fellow of the IEEE, of the AIMBE, of the MICCAI Society, and of the Engineering School of the University of Tokyo. He is also a recipient of numerous awards, including the IEEE Robotics Pioneer Award, the MICCAI Society Enduring Impact Award, and the Maurice Müller Award for Excellence in Computer-Assisted Orthopaedic Surgery.


Fall 2014 Campus-wide Robotics Seminar

Dec 9 - Tim Bretl, U Illinois Urbana-Champaign   Mechanics, Manipulation, and Perception of an Elastic Rod (video)

Abstract: This talk is about robotic manipulation of canonical "deformable linear objects" like a Kirchhoff elastic rod (e.g., a flexible wire). I continue to be amazed by how much can be gained by looking carefully at the mechanics of these objects and at the underlying mathematics. For example, did you know that the free configuration space of an elastic rod is path-connected? I'll prove it, and tell you why it matters.

Bio: Timothy Bretl comes from the University of Illinois at Urbana-Champaign, where he is an Associate Professor of Aerospace Engineering and of the Coordinated Science Laboratory.

Website: http://bretl.csl.illinois.edu/

Photo: http://goo.gl/F7BpMz

Dec 2 - Steve LaValle, Professor, University of Illinois & Principal Scientist, Oculus/Facebook   Robotics Meets Virtual Reality (video)

Abstract: Roboticists are well positioned to strongly impact the rising field of virtual reality (VR). Using the latest technology, we can safely take control of your most trusted senses, thereby fooling your brain into believing you are in another world. VR has been around for a long time, but due to the recent convergence of sensing, display, and computation technologies, there is an unprecedented opportunity to explore this form of human augmentation with lightweight, low-cost materials and simple software platforms. Many of the issues are familiar to roboticists, such as position and orientation tracking from sensor data, maintaining features from vision data, and dynamical system modeling. In addition, there is an intense form of human-computer interaction (HCI) that requires re-examining core engineering principles with a direct infusion of perceptual psychology research. With the rapid rise in consumer VR, fundamental research questions are popping up everywhere, slicing across numerous disciplines from engineering to sociology to film to medicine. This talk will provide some perspective on where we have been and how roboticists can help participate in this exciting future!

Bio: Steve LaValle started working with Oculus VR in September 2012, a few days after their successful Kickstarter campaign, and was the head scientist up until the Facebook acquisition in March 2014. He developed perceptually tuned head tracking methods based on IMUs and computer vision. He also led a team of perceptual psychologists to provide principled approaches to virtual reality system calibration and the design of comfortable user experiences. In addition to his continuing work at Oculus, he is also Professor of Computer Science at the University of Illinois, which he joined in 2001. He has worked in robotics for over 20 years and is known for his introduction of the Rapidly-exploring Random Tree (RRT) algorithm for motion planning and his 2006 book, Planning Algorithms.

Website: http://msl.cs.uiuc.edu/~lavalle/

Nov 25 - Richard Newcombe, University of Washington, and Andrea Censi, MIT LIDS

Robotics video session: screening/voting session for the ICRA 2015 trailer

Nov 18 - Sachin Patil, UC Berkeley   Coping with Uncertainty in Robotic Navigation, Exploration, and Grasping

A key challenge in robotics is to robustly complete navigation, exploration, and manipulation tasks when the state of the world is uncertain. This is a fundamental problem in several application areas such as logistics, personal robotics, and healthcare where robots with imprecise actuation and sensing are being deployed in unstructured environments. In such a setting, it is necessary to reason about the acquisition of perceptual knowledge and to perform information gathering actions as necessary. In this talk, I will present an approach to motion planning under motion and sensing uncertainty called "belief space" planning where the objective is to trade off exploration (gathering information) and exploitation (performing actions) in the context of performing a task. In particular, I will present how we can use trajectory optimization to compute locally-optimal solutions to a determinized version of this problem in Gaussian belief spaces. I will show that it is possible to obtain significant computational speedups without explicitly optimizing over the covariances by considering a partial collocation approach. I will also address the problem of computing such trajectories, given that measurements may not be obtained during execution due to factors such as limited field of view of sensors and occlusions. I will demonstrate this approach in the context of robotic grasping in unknown environments where the robot has to simultaneously explore the environment and grasp occluded objects whose geometry and positions are initially unknown.
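To make the "Gaussian belief space" idea concrete, here is a minimal sketch (ours, not the speaker's formulation) of propagating a Gaussian belief along a candidate trajectory with EKF-style updates under the maximum-likelihood-observation assumption; the dynamics, observation model, and cost are toy placeholders.

```python
# Hedged sketch (ours): the basic ingredient of a determinized belief-space
# problem is propagating a Gaussian belief (mean, covariance) along a candidate
# trajectory and scoring it on both control effort and remaining uncertainty.
import numpy as np

def propagate_belief(mu, Sigma, u, A, B, C, Q, R):
    """One step of belief dynamics: the mean follows the nominal dynamics and the
    covariance follows the EKF prediction/correction recursion."""
    mu_pred = A @ mu + B @ u
    Sigma_pred = A @ Sigma @ A.T + Q                    # process noise grows uncertainty
    S = C @ Sigma_pred @ C.T + R                        # innovation covariance
    K = Sigma_pred @ C.T @ np.linalg.inv(S)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_pred  # measurements shrink it
    return mu_pred, Sigma_new

# Toy 2-D example: trajectory cost = control effort + final uncertainty, i.e. the
# exploration/exploitation trade-off described in the abstract.
A, B, C = np.eye(2), np.eye(2), np.array([[1.0, 0.0]])  # only the first state is observed
Q, R = 0.01 * np.eye(2), np.array([[0.05]])
mu, Sigma = np.zeros(2), np.eye(2)
controls = [np.array([0.1, 0.0])] * 10
for u in controls:
    mu, Sigma = propagate_belief(mu, Sigma, u, A, B, C, Q, R)
cost = sum(np.dot(u, u) for u in controls) + np.trace(Sigma)
```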

Nov 4 - Mark Cutkosky, Stanford   Bio-Inspired Dynamic Surface Grasping (video)

The adhesive system of the gecko has several remarkable properties that make it ideal for agility on vertical and overhanging surfaces. It requires very little preload for sticking, and (unlike sticky tape) very little effort to detach. It resists fouling when the gecko travels over dusty surfaces, and it is controllable: the amount of adhesion in the normal direction depends on the applied tangential force. Moreover, it is fast, allowing the gecko to climb at speeds of a meter per second. The desirable properties of the gecko's adhesive apparatus are a result of its unique, hierarchical structure, with feature sizes ranging from hundreds of nanometers to millimeters. Over the last several years, analogous features have been incorporated into various synthetic gecko-inspired adhesives, with gradually improving performance from the standpoints of adhesion, ease and speed of attachment and detachment, etc. In this talk we will explore recent developments to scale gecko-inspired directional adhesives beyond small wall-climbing robots to new applications including perching quadrotors and grappling space debris in orbit. These applications require scaling the adhesives to areas of 10x10cm or larger on flat or curved surfaces without loss in performance, and attachment in milliseconds to prevent bouncing. The solutions draw some inspiration from the arrangement of tendons and other compliant structures in the gecko's toe.

Oct 28 - Robotics student/faculty mixer

Oct 21 - Anca Dragan, Carnegie Mellon  Interaction as Manipulation (video)

The goal of my research is to enable robots to autonomously produce behavior that reasons about function _and_ interaction with and around people. I aim to develop a formal understanding of interaction that leads to algorithms which are informed by mathematical models of how people interact with robots, enabling generalization across robot morphologies and interaction modalities.

In this talk, I will focus on one specific instance of this agenda: autonomously generating motion for coordination during human-robot collaborative manipulation. Most motion in robotics is purely functional: industrial robots move to package parts, vacuuming robots move to suck dust, and personal robots move to clean up a dirty table. This type of motion is ideal when the robot is performing a task in isolation. Collaboration, however, does not happen in isolation, and demands that we move beyond purely functional motion. In collaboration, the robot's motion has an observer, watching and interpreting the motion – inferring the robot's intent from the motion, and anticipating the robot's motion based on its intent. My work develops a mathematical model of these inferences, and integrates this model into motion planning, so that the robot can generate motion that matches people's expectations and clearly conveys its intent. In doing so, I draw on action interpretation theory, Bayesian inference, constrained trajectory optimization, and interactive learning. The resulting motion not only leads to more efficient collaboration, but also increases the fluency of the interaction as defined through both objective and subjective measures. The underlying formalism has been applied across robot morphologies, from manipulator arms to mobile robots, and across interaction modalities, such as motion, gestures, and shared autonomy with assistive arms.
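As a rough illustration of the observer model described above (our simplified sketch, not the speaker's formulation), the snippet below infers which goal a partial motion is heading toward by comparing how consistent the motion so far is with the optimal path to each goal; the straight-line cost model and all numbers are toy assumptions.

```python
# Hedged sketch (ours): Bayesian inference of intent from a partial trajectory.
# Goals whose optimal path best explains the motion observed so far receive
# higher posterior probability. Costs are straight-line distances (toy model).
import numpy as np

def goal_posterior(start, current, goals, beta=5.0):
    """P(goal | partial motion) proportional to
       exp(-beta * (cost so far + cost to go - optimal cost from start))."""
    scores = []
    for g in goals:
        so_far = np.linalg.norm(current - start)
        to_go = np.linalg.norm(g - current)
        optimal = np.linalg.norm(g - start)
        scores.append(np.exp(-beta * (so_far + to_go - optimal)))
    scores = np.array(scores)
    return scores / scores.sum()

# Toy usage: a motion veering toward the lower-right goal makes it more probable.
start = np.array([0.0, 0.0])
goals = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
print(goal_posterior(start, current=np.array([0.5, -0.4]), goals=goals))
```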

Oct 14 - Sangbae Kim, MIT The actuation and the control of the MIT Cheetah (video)

Biological machines created by millions of years of evolution suggest a paradigm shift in robotic design. Realizing animals' magnificent locomotive capabilities is the next big challenge in mobile robotic applications. The main theme of the MIT Biomimetic Robotics Laboratory is innovation through ‘principle extraction’ from biology. The embodiment of such innovations includes Stickybot, which employs the world’s first synthetic directional dry adhesive inspired by geckos, and the MIT Cheetah, designed after the fastest land animal. The design principles in structures, actuation, and control algorithms applied in the MIT Cheetah will be presented during the talk. Kim’s creations are opening new frontiers in robotics and leading to advanced mobile robots that can save lives in dangerous situations, and new all-around robotic transportation systems for the mobility-impaired.

Oct 7 - Nick Roy, MIT  Project Wing: Self-flying vehicles for Package Delivery

Autonomous UAVs, or "self-flying vehicles", hold the promise of transforming a number of industries, and changing how we move things around the world. Building from the foundation of decades of research in autonomy and UAVs, Google launched Project Wing in 2012 and recently announced trials of a delivery service using a small fleet of autonomous UAVs in Australia. In this talk, I will provide an introduction to the work Google has been doing in developing this service, describe the capabilities (and limitations) of the vehicles, and talk briefly about the promise of UAVs in general.