Robotics Seminar

Spring 2016 Campus-wide Robotics Seminar (sponsored by Aurora Flight Sciences, The MathWorks, and the Russell Sage Foundation) (11am-noon in 32-G449)

 

May 10 - Ed Olson, Michigan Reliable robots: failing without failing

Everything about a robot is unreliable: sensors lie, state estimators compute poor means and variances, and actuators slip and slide. It is too much to ask these systems to be 100% reliable, but then how do we build incredibly reliable systems that can operate for 100 million miles between serious mishaps, or that can inhabit a house alongside people without occasionally running over the cats?

 

In this talk, I describe two different approaches that allow robots to tolerate failures, moving us away from the need for 100% reliability. The first is a probabilistic inference system (Max-Mixtures) that allows us to model non-Gaussian sensor failures. Max-Mixtures can be used to unify outlier rejection and state estimation or to do inference when sensor data is multi-modal, yet they are nearly as fast as ordinary least-squares methods. The second approach is a planning approach (Multi-Policy Decision Making, MPDM) that allows a robot to introspectively choose between multiple ways of performing a task, selecting the more reliable approach. For example, a robot might choose to visually servo towards a target instead of trajectory planning through a 3D model acquired from LIDAR. In short, the robot does the easy dumb thing when it can, and resorts to the complex thing when it must.
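
As a rough illustration only, the sketch below shows the generic max-mixture trick of scoring each Gaussian component and keeping the most likely one, so that an ordinary least-squares back end can absorb outliers. The component weights, variances, and scalar measurement are invented placeholders, not Olson's implementation.

import numpy as np

def max_mixture_residual(residual, components):
    """Evaluate a max-mixture factor: pick the mixture component with the
    highest likelihood and return the residual whitened by that component's sigma.

    residual   : raw measurement error (e.g., z - h(x))
    components : list of (weight, sigma) pairs, one per Gaussian component
    """
    best = None
    for w, sigma in components:
        # negative log-likelihood of this component (up to a constant)
        nll = 0.5 * (residual / sigma) ** 2 - np.log(w / sigma)
        if best is None or nll < best[0]:
            best = (nll, sigma)
    # the selected component behaves like an ordinary Gaussian factor,
    # so standard least-squares machinery can be reused unchanged
    return residual / best[1]

# hypothetical scalar example: an inlier mode (sigma=0.1) plus a broad
# outlier mode (sigma=10.0) that absorbs gross sensor failures
components = [(0.9, 0.1), (0.1, 10.0)]
print(max_mixture_residual(0.05, components))   # treated as an inlier
print(max_mixture_residual(5.00, components))   # explained by the outlier mode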

 

Apr 26 - Seth Hutchinson, UIUC Robust Distributed Control Policies for Multi-Robot Systems (video)

In this talk, I will describe our recent progress in developing fault-tolerant distributed control policies for multi-robot systems. We consider two problems: rendezvous and coverage. For the former, the goal is to bring all robots to a common location, while for the latter the goal is to deploy robots to achieve optimal coverage of an environment. We consider the case in which each robot is an autonomous decision maker that is anonymous, memoryless, and dimensionless, i.e., robots are indistinguishable from one another, make decisions based only upon current information, and do not consider collisions. Each robot has a limited sensing range, and is able to directly estimate the state of only those robots within that sensing range, which induces a network topology for the multi-robot system. We assume that it is not possible for the fault-free robots to identify the faulty robots (e.g., due to the anonymity of the robots). For each problem, we provide an efficient computational framework and analysis of algorithms, all of which converge in the face of faulty robots under a few assumptions on the network topology and sensing abilities.
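
For intuition, here is a minimal sketch of a local rendezvous rule of the kind the abstract describes: anonymous, memoryless robots each move toward the centroid of the neighbors inside their sensing range. The sensing radius, step size, and synchronous update are assumptions for illustration, not Hutchinson's algorithm, and no fault handling is shown.

import numpy as np

def rendezvous_step(positions, sensing_range=1.0, step=0.5):
    """One synchronous rendezvous update for anonymous, memoryless robots.
    Each robot moves part-way toward the centroid of the neighbors it senses
    (including itself); a robot with no other neighbors in range stays put."""
    positions = np.asarray(positions, dtype=float)
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        dists = np.linalg.norm(positions - p, axis=1)
        neighbors = positions[dists <= sensing_range]   # induced network topology
        centroid = neighbors.mean(axis=0)
        new_positions[i] = p + step * (centroid - p)
    return new_positions

# toy run: three robots on a line, limited sensing range
pts = [[0.0, 0.0], [0.8, 0.0], [1.5, 0.0]]
for _ in range(20):
    pts = rendezvous_step(pts)
print(pts)  # positions contract toward a common point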

 

Bio: Seth Hutchinson received his Ph.D. from Purdue University in 1988. In 1990 he joined the faculty at the University of Illinois in Urbana-Champaign, where he is currently a Professor in the Department of Electrical and Computer Engineering, the Coordinated Science Laboratory, and the Beckman Institute for Advanced Science and Technology. He served as Associate Department Head of ECE from 2001 to 2007. He currently serves on the editorial boards of the International Journal of Robotics Research and the Journal of Intelligent Service Robotics, and chairs the steering committee of the IEEE Robotics and Automation Letters. He was Founding Editor-in-Chief of the IEEE Robotics and Automation Society's Conference Editorial Board (2006-2008), and Editor-in-Chief of the IEEE Transactions on Robotics (2008-2013). He has published more than 200 papers on the topics of robotics and computer vision, and is coauthor of the books "Principles of Robot Motion: Theory, Algorithms, and Implementations," published by MIT Press, and "Robot Modeling and Control," published by Wiley. Hutchinson is a Fellow of the IEEE.

 

April 12 - Sidd Srinivasa, CMU Physics-based Manipulation (video)

Humans effortlessly push, pull, and slide objects, fearlessly reconfigure clutter, and use physics and the world as a helping hand. But most robots treat the world like a game of pick-up-sticks: avoiding clutter and attempting to rigidly grasp anything they want to move. I'll talk about some of our ongoing efforts at harnessing physics for nonprehensile manipulation, and the challenges of deploying our algorithms on real physical systems. I'll specifically focus on whole-arm manipulation, state estimation for contact manipulation, and on closing the feedback loop on nonprehensile manipulation.

 

April 7 - Jianxiong Xiao, Princeton Three Design Principles for Robust Robot Perception  (video)

Recent years have witnessed tremendous progress in the development of autonomous machines. Autonomous cars have driven over millions of miles, and robots now regularly perform tasks too dangerous or monotonous for human beings. Yet despite these advancements, robots continue to remain highly dependent on human operators and carefully designed environments. In one prominent example, the DARPA Robotics Challenge asked dozens of participating robots to complete tasks in a mock disaster response scenario. But all teams, lacking confidence in their robot's ability to reliably perceive its surroundings, opted to outsource most perception to humans. Team KAIST, the eventual winner, "found that the most (actually all) famous algorithms are not very effective in real situations."
 
In this talk, I will address the endeavor of bridging the gap between computer vision and robot perception, summarizing my experiences in three design principles. First, I will argue that it is crucial for the algorithms to operate fully end-to-end in three dimensions, establishing the grounds for the area of "3D Deep Learning". I will demonstrate this idea on object detection, view planning, and mapping in a personal robotics scenario. Second, I will highlight the importance of direct perception in estimating affordances for a robot's actions, demonstrating the idea in an autonomous driving application. Third, I will propose the design of robot systems with failure modes of perception in mind, allowing for pitfall avoidance and an extremely high level of robustness. Finally, going beyond perception, I will briefly mention some ongoing work in Big Data Robotics, Robot Learning, and Human-Robot Collaboration.

 

Bio: Jianxiong Xiao is an Assistant Professor in the Department of Computer Science at Princeton University. He received his Ph.D. from the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT) in 2013. Before that, he received a BEng. and MPhil. in Computer Science from the Hong Kong University of Science and Technology in 2009. His research focuses on bridging the gap between computer vision and robotics by building extremely robust and dependable computer vision systems for robot perception. In particular, he is interested in 3D Deep Learning, RGB-D Recognition and Mapping, Deep Learning for Robotics, Autonomous Driving, Big Data Robotics, and Robot Learning. His work has received the Best Student Paper Award at the European Conference on Computer Vision (ECCV) in 2012 and the Google Research Best Papers Award for 2012, and has appeared in the popular press. Jianxiong was awarded the Google U.S./Canada Fellowship in Computer Vision in 2012, the MIT CSW Best Research Award in 2011, and two Google Faculty Awards, in 2014 and 2015. More information can be found at http://vision.princeton.edu.

 

April 5 - Kostas Bekris, Rutgers Algorithmic Tradeoffs in Robot Motion Planning (video)

Roboticists have addressed increasingly complicated motion planning challenges over the last decades. A popular paradigm behind this progress is sampling-based and graph-based planning, for which the conditions to achieve asymptotic optimality have been recently identified. In this domain, we have contributed a study of the practical properties of these planners after finite computation time and shown how sparse representations can efficiently return near-optimal solutions with guarantees. We have also proposed the first method that achieves asymptotic optimality for kinodynamic planning without access to a steering function, which can impact high-dimensional belief space planning. After reviewing these contributions, this talk will discuss recent work on manipulation task planning challenges. In particular, we will present a methodology for efficiently rearranging multiple similar objects using a robotic arm. The talk will conclude on how such algorithmic progress, together with technological developments, brings the hope of reliably deploying robots in important applications, ranging from space exploration to warehouse automation and logistics.
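
For readers unfamiliar with the sampling-based paradigm the abstract builds on, here is a minimal textbook RRT sketch in a Euclidean space. It is the baseline planner, not the sparse or kinodynamic asymptotically optimal methods discussed in the talk, and the step size, bounds, and collision check are placeholders.

import numpy as np

def rrt(start, goal, is_free, bounds, step=0.2, goal_tol=0.2, max_iters=5000):
    """Textbook RRT: grow a tree from `start` by steering the nearest node
    toward random samples until the goal region is reached.
    `is_free(p)` is a user-supplied collision check."""
    rng = np.random.default_rng(1)
    nodes = [np.asarray(start, dtype=float)]
    parents = {0: None}
    for _ in range(max_iters):
        sample = rng.uniform(bounds[0], bounds[1])
        i = int(np.argmin([np.linalg.norm(n - sample) for n in nodes]))
        direction = sample - nodes[i]
        new = nodes[i] + step * direction / (np.linalg.norm(direction) + 1e-9)
        if not is_free(new):
            continue
        nodes.append(new)
        parents[len(nodes) - 1] = i
        if np.linalg.norm(new - goal) < goal_tol:
            # walk back up the tree to recover the path
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parents[j]
            return path[::-1]
    return None

# toy 2-D world with a circular obstacle of radius 0.5 at the origin
free = lambda p: np.linalg.norm(p) > 0.5
path = rrt([-1.0, -1.0], np.array([1.0, 1.0]), free, ([-2, -2], [2, 2]))
print(len(path) if path else "no path found")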

 

Bio: Kostas Bekris is an Assistant Professor of Computer Science at Rutgers University. He completed his PhD degree in Computer Science at Rice University, Houston, TX, under the supervision of Prof. Lydia Kavraki.  He was Assistant Professor at the University of Nevada, Reno between 2008 and 2012. He works in robotics and his interests include motion planning, especially for systems with dynamics, manipulation, online replanning, motion coordination, and applications in cyber-physical systems and simulations. His research group has been supported by NSF, NASA (Early CAREER Faculty award), DHS, DoD and the NY/NJ Port Authority.

 

March 29 - Emma Brunskill, CMU Learning to Make Good Decisions in Noisy, Stochastic, Costly Domains (video)

A critical aspect of human intelligence is the ability to learn to make good decisions. Achieving similar behavior in artificial agents is a key focus in AI, and could have enormous benefits, particularly in applications like education and healthcare where autonomous agents could help people expand their capacity and reach their potential. But tackling such domains requires approaches that can handle the noisy, stochastic, costly decisions that characterize interacting with people. In this talk I will describe some of our recent work in pursuing this agenda. One key focus has been on offline policy evaluation, how to use old data to estimate the performance of different strategies, and I will discuss a new estimator that can yield orders of magnitude smaller mean squared error. I will also describe how problems like transfer learning and partially observable reinforcement learning can be framed as instances of latent variable modeling for control, and enable new sample complexity results for these settings. Our advances in these topics have enabled us to obtain more engaging educational games and better news recommendations.
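
For context, the classical per-trajectory importance-sampling estimator for offline policy evaluation is sketched below. It is the standard high-variance baseline, not the improved estimator the talk describes, and the policy interfaces are assumed for illustration.

import numpy as np

def importance_sampling_value(trajectories, pi_e, pi_b, gamma=1.0):
    """Ordinary per-trajectory importance sampling for offline policy evaluation.
    trajectories : list of [(state, action, reward), ...] collected under pi_b
    pi_e, pi_b   : functions (state, action) -> action probability under the
                   evaluation / behavior policy
    Returns an unbiased (but often high-variance) estimate of pi_e's value."""
    estimates = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            weight *= pi_e(s, a) / pi_b(s, a)   # likelihood ratio of the trajectory
            ret += (gamma ** t) * r
        estimates.append(weight * ret)
    return float(np.mean(estimates))

Reducing the variance, and hence the mean squared error, of this baseline is exactly what motivates more sophisticated off-policy estimators.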

Bio: Emma Brunskill is an assistant professor of computer science and an affiliate professor of machine learning at Carnegie Mellon University. She is a Rhodes Scholar, a Microsoft Faculty Fellow, an NSF CAREER awardee, and an ONR Young Investigator Program recipient. Her work has been recognized with best paper nominations at the Educational Data Mining conference (2012, 2013) and the Computer Human Interaction conference (2014), and a best paper award at the Reinforcement Learning and Decision Making conference (2015).

 

March 8 - Cynthia Sung, MIT Computational Tools for Robot Design: A Composition Approach (video)

As robots become more prevalent in society, they must develop an ability to deal with more diverse situations. This ability entails customizability not only of software intelligence, but also of hardware. However, designing a functional robot remains challenging and often involves many iterations of design and testing, even for skilled designers. My goal is to create computational tools for making functional machines, allowing future designers to quickly improvise new hardware.

 

In this talk, I will discuss one possible approach to automated design using composition. I will describe our origami-inspired print-and-fold process that allows entire robots to be fabricated within a few hours, and I will demonstrate how foldable modules can be composed together to create foldable mechanisms and robots. The modules are represented parametrically, enabling a small set of modules to describe a wide range of geometries and also allowing geometries to be optimized in a straightforward manner. I will also introduce a tool that we have developed that combines this composition approach with simulations to help human designers of all skill levels to design and fabricate custom functional robots.

 

Bio: Cynthia Sung is a Ph.D. candidate working with Prof. Daniela Rus in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT). She received a B.S. in Mechanical Engineering from Rice University in 2011 and an M.S. in Electrical Engineering and Computer Science from MIT in 2013. Cynthia is a recipient of the NDSEG and NSF graduate fellowships. Her research interests include computational design, folding theory, and rapid fabrication, and her current work focuses on algorithms for synthesis and analysis of engineering designs.

 

March 1 - Kris Hauser, Duke Motion Planning for Real World Robots (video)

Motion planning – the problem of computing physical actions to complete a specified task – is a fundamental problem in robotics, and has inspired some of the most rigorous and beautiful theoretical results in robotics research. But as robots proliferate in real-world applications like household service, driverless cars, warehouse automation, minimally-invasive surgery, search-and-rescue, and unmanned aerial vehicles, we are beginning to see the classical theory falter in light of the new reality of modern robotics practice. Today's robots must handle large amounts of noisy sensor data, uncertainty, underspecified models, nonlinear and hysteretic dynamic effects, exotic objective functions and constraints, and real-time demands. This talk will present recent efforts to bring motion planners to bear on real robots, along four general directions: 1) improving planning algorithm performance, 2) broadening the scope of problems that can be addressed by planners, 3) incorporating richer, higher-fidelity models into planning, and 4) improving workflows for integrating planners into robot systems. This research is applied to a variety of systems, including ladder climbing in the DARPA Robotics Challenge, the Duke rock-climbing robot project, semiautonomous mobile manipulators, and object manipulation in the Amazon Picking Challenge.

 

Bio: Kris Hauser is an Associate Professor at the Pratt School of Engineering at Duke University with a joint appointment in the Electrical and Computer Engineering Department and the Mechanical Engineering and Materials Science Department. He received his PhD in Computer Science from Stanford University in 2008, bachelor's degrees in Computer Science and Mathematics from UC Berkeley in 2003, and worked as a postdoctoral fellow at UC Berkeley. He then joined the faculty at Indiana University from 2009-2014, where he started the Intelligent Motion Lab, and began his current position at Duke in 2014. He is a recipient of a Stanford Graduate Fellowship, Siebel Scholar Fellowship, Best Paper Award at IEEE Humanoids 2015, and an NSF CAREER award. 

His research interests include robot motion planning and control, semiautonomous robots, and integrating perception and planning, as well as applications to intelligent vehicles, robotic manipulation, robot-assisted medicine, and legged locomotion.

 

February 23 - Patrick Wensing, MIT MechE Control Design for Legged Robots: Physical Principles Enabling Dynamic Mobility (video)

Abstract: Recent technological advances have given rise to a new generation of versatile legged robots. These machines are envisioned to replace first responders in disaster scenarios and enable unmanned exploration of distant planets. To achieve these aims, however, our robots must be able to manage physical interaction through contact to move through unstructured terrain. This talk reports on the development of control systems that allow legged robots to achieve unprecedented levels of dynamic mobility by addressing many critical problems of contact interaction with the environment. Drawing on key insights from biomechanics, the talk will open with a description of optimization-based balance control algorithms for high-speed locomotion in humanoid robots. It will then present design features of the MIT Cheetah 2 quadruped robot that enable dynamic locomotion in experimental hardware. A model predictive control framework for this robot will be described which enables the Cheetah to autonomously jump over obstacles with a maximum height of 40 cm (80% of leg length) while running at 2.5 m/s. Across these results, dynamic physical interaction with the environment is exploited, rather than avoided, to achieve new levels of performance.

 

February 16 - Adam Bry, Skydio Algorithms and challenges in scaling up autonomous flight (video)

Drones hold enormous potential for consumer video, inspection, mapping, monitoring, and perhaps even delivery. They’re also natural candidates for autonomy and likely to be among the first widely-deployed systems that incorporate meaningful intelligence based on computer vision and robotics research. In this talk I’ll discuss the trajectory of hobbies, research, and work that led me to start Skydio. I’ll cover some of the algorithms developed during my research at MIT which culminated in a fixed-wing vehicle that could navigate obstacles at high speeds. I’ll also present some of the work that we’ve done at Skydio in motion planning and perception, along with the challenges involved in building a robust robotics software system that needs to work at scale.

 

Bio: Adam Bry is co-founder and CEO of Skydio, a venture-backed drone startup based in the Bay Area. Prior to Skydio he helped start Project Wing at Google[x], where he worked on flight algorithms and software. He holds an SM in Aero/Astro from MIT and a BS in Mechanical Engineering from Olin College. Adam grew up flying radio-controlled airplanes and is a former national champion in precision aerobatics.

http://www.skydio.com/

 

February 9 - Rob Wood, Harvard Manufacturing, actuation, sensing, and control for robotic insects

As the characteristic size of a flying robot decreases, the challenges for successful flight revert to basic questions of fabrication, actuation, fluid mechanics, stabilization, and power -- whereas such questions have in general been answered for larger aircraft. When developing a robot on the scale of a housefly, all hardware must be developed from scratch as there is nothing "off-the-shelf" which can be used for mechanisms, sensors, or computation that would satisfy the extreme mass and power limitations. With these challenges in mind, this talk will present progress in the essential technologies for insect-scale robots and the latest flight experiments with robotic insects.

http://micro.seas.harvard.edu

 

December 15 - Metin Sitti, Max Planck Institute Mobile Microrobotics (no video)

Untethered mobile microrobots have the unique capability of directly accessing small spaces and scales. Due to their small size and micron-scale physics and dynamics, they could be agile and portable, and could be inexpensive and deployed in large numbers if mass-produced. Mobile microrobots would have high-impact applications in health care, bioengineering, mobile sensor networks, desktop micromanufacturing, and inspection. In this presentation, mobile microrobots from a few micrometers up to hundreds of micrometers in overall size and with various locomotion capabilities are presented. Going down to the micron scale, one of the grand challenges for mobile microrobots is the miniaturization limit on on-board actuation, powering, sensing, processing, and communication components. Two alternative approaches are explored in this talk to solve the actuation and powering challenges. First, biological cells, e.g. bacteria, attached to the surface of a synthetic microrobot are used as on-board microactuators and microsensors, using the chemical energy inside or outside the cell in physiological fluids. Bacteria-propelled random microswimmers are steered using chemical and pH gradients in the environment and remote magnetic fields towards future targeted drug delivery and environmental remediation applications. As the second approach, external actuation of untethered magnetic microrobots using remote magnetic fields in enclosed spaces is demonstrated. New magnetic microrobot locomotion principles based on rotational stick-slip and rolling dynamics are proposed. Novel magnetic composite materials are used to address and control teams of microrobots and to create novel soft actuators and programmable soft matter. Untethered microrobot teams are demonstrated to manipulate live cells and microgels with embedded cells for bioengineering applications, and to self-assemble into different patterns with remote magnetic control.

 

Bio: Metin Sitti received the BSc and MSc degrees in electrical and electronics engineering from Bogazici University, Istanbul, Turkey, in 1992 and 1994, respectively, and the PhD degree in electrical engineering from the University of Tokyo, Tokyo, Japan, in 1999. He was a research scientist at UC Berkeley during 1999-2002. He is currently a director at the Max Planck Institute for Intelligent Systems and a professor in the Department of Mechanical Engineering and the Robotics Institute at Carnegie Mellon University. His research interests include small-scale physical intelligence, mobile microrobots, bio-inspired millirobots, smart and soft micro/nanomaterials, and programmable self-assembly. He is an IEEE Fellow. He received the SPIE Nanoengineering Pioneer Award in 2011 and the NSF CAREER Award in 2005. He received the IEEE/ASME Best Mechatronics Paper Award in 2014, the Best Poster Award at the Adhesion Conference in 2014, the Best Paper Award at the IEEE/RSJ International Conference on Intelligent Robots and Systems in 2009 and 1998, the first prize in the World RoboCup Micro-Robotics Competition in 2012 and 2013, the Best Biomimetics Paper Award at the IEEE Robotics and Biomimetics Conference in 2004, and the Best Video Award at the IEEE Robotics and Automation Conference in 2002. He is the editor-in-chief of the Journal of Micro-Bio Robotics.

 

December 8 - Art Kuo, University of Michigan Robot vs. Human: The Next Round of Legged Locomotion Battles (video, CSAIL Only)

An enduring myth in the world of legged locomotion is that a robot should model itself upon the human. The human presents a standard for performance, a recipe for control strategy, and a blueprint for design. Not only is that myth false, but it has also (fortunately) been ignored. To date, robot locomotion has benefited from humans and animals, and the understanding of them, only in how many legs to have. The reason is that hardware technology is presently far from making truly human-like locomotion possible, or even a good idea. This raises the question of what the next generation of legged robots should try to be. The correct answer is anything but humans, but even to achieve that means there is reason to understand humans. I will demonstrate a few unique ways that humans walk dynamically, and how they are optimal for humans and therefore suboptimal for robots. From a biomechanical perspective, I will muse on some interesting challenges for future robots that will act more dynamically, and one day perhaps even approach the standard set by humans.

 

Bio: Art Kuo is Professor of Mechanical Engineering and Biomedical Engineering at the University of Michigan. He directs the Human Biomechanics and Control Laboratory, which studies the basic principles of locomotion and other movements, and applies those principles to the development of robotic, assistive, and therapeutic devices to aid humans. Current interests include walking and running on uneven terrain, development of wearable sensors to track foot motion in the wild, and devices to improve the economy of locomotion in the impaired.

 

December 1 - Louis Whitcomb, Johns Hopkins Nereid Under-Ice: A Remotely Operated Underwater Robotic Vehicle for Oceanographic Access Under Ice

This talk reports recent advances in underwater robotic vehicle research to enable novel oceanographic operations in extreme ocean environments, with a focus on two recent novel vehicles developed by a team comprising the speaker and his collaborators at the Woods Hole Oceanographic Institution. First, the development and operation of the Nereus underwater robotic vehicle will be briefly described, including successful scientific observation and sampling dive operations at hadal depths of 10,903 m on an NSF-sponsored expedition to the Challenger Deep of the Mariana Trench – the deepest place on Earth. Second, the development and first sea trials of the new Nereid Under-Ice (NUI) underwater vehicle will be described. NUI is a novel remotely-controlled underwater robotic vehicle capable of being teleoperated under ice under remote real-time human supervision. We report the results of NUI's first under-ice deployments during a July 2014 expedition aboard R/V Polarstern at 83° N, 6° W in the Arctic Ocean – approximately 200 km NE of Greenland.

 

Bio: Louis L. Whitcomb is Professor and Chairman of the Department of Mechanical Engineering, with a secondary appointment in Computer Science, at the Johns Hopkins University's Whiting School of Engineering. He completed a B.S. in Mechanical Engineering in 1984 and a Ph.D. in Electrical Engineering in 1992 at Yale University. From 1984 to 1986 he was a Research and Development engineer with the GMFanuc Robotics Corporation in Detroit, Michigan. He joined the Department of Mechanical Engineering at the Johns Hopkins University in 1995, after postdoctoral fellowships at the University of Tokyo and the Woods Hole Oceanographic Institution. His research focuses on the navigation, dynamics, and control of robot systems – including industrial, medical, and underwater robots. Whitcomb is a principal investigator of the Nereus and Nereid Under-Ice Projects. He is former (founding) Director of the JHU Laboratory for Computational Sensing and Robotics. He received teaching awards at Johns Hopkins in 2001, 2002, 2004, and 2011, and was awarded a National Science Foundation CAREER Award and an Office of Naval Research Young Investigator Award. He is a Fellow of the IEEE. He is also Adjunct Scientist, Department of Applied Ocean Physics and Engineering, Woods Hole Oceanographic Institution.

 

November 24 - Liam Paull, MIT CSAIL A Cooperative Area Coverage Framework that Accounts for Uncertainty and its Application to Autonomous Seabed Surveying (video)

In this talk, we investigate the area coverage problem with mobile robots whose localization uncertainty is time-varying and significant. The vast majority of the literature on robotic area coverage assumes that the robot's location estimate error is either zero or at least bounded. We remove this assumption and develop a probabilistic representation of coverage. Once we have formally connected robot sensor uncertainty with area coverage, we motivate an adaptive sliding window filter pose estimator that is able to provide an arbitrarily close approximation to the full maximum a posteriori estimate with a computation cost that does not grow with time. An adaptive planning strategy is also presented that is able to automatically exploit conditions of low vehicle uncertainty to more aggressively cover area in real time. This results in faster progress towards the coverage goal than overly conservative planners that assume worst-case error at all times.

We further extend this to the multi-robot case, where robots are able to communicate through a (possibly faulty) channel and make relative measurements of one another. In this case, area coverage can be achieved more quickly since the uncertainty of the robot trajectories will be reduced. We apply the framework to the scenario of mapping an area of seabed with autonomous marine vehicles for minehunting purposes. The results show that the vehicles are able to achieve complete coverage with high confidence notwithstanding poor navigational sensors, and the resulting path lengths are shorter than those produced by worst-case planners.
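
A minimal sketch of the core idea of probabilistic coverage follows, under an assumed Gaussian position uncertainty and a disc-shaped sensor footprint, both of which are placeholders rather than the paper's models: the probability that a point has been covered is obtained by integrating the sensor footprint over the robot's pose distribution instead of assuming a perfect pose.

import numpy as np

def coverage_probability(cell, mean, cov, sensor_radius=1.0, n_samples=2000):
    """P(cell was within the sensor footprint) given Gaussian pose uncertainty.
    Monte Carlo: sample possible robot positions and count how often the cell
    falls inside the (assumed disc-shaped) sensor footprint."""
    rng = np.random.default_rng(0)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    dists = np.linalg.norm(samples - np.asarray(cell), axis=1)
    return float(np.mean(dists <= sensor_radius))

# with a confident pose estimate the cell is almost surely covered...
print(coverage_probability([0.5, 0.0], mean=[0.0, 0.0], cov=0.01 * np.eye(2)))
# ...with a poor estimate the same pass yields only partial confidence
print(coverage_probability([0.5, 0.0], mean=[0.0, 0.0], cov=1.0 * np.eye(2)))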

 

November 17 - Sami Haddadin, Hannover Robots For Humans (video)

Enabling robots for direct physical interaction and cooperation with humans and potentially unknown environments has been one of the primary goals of robotics research for decades. I will outline how our work on human-centered robot design, control, and planning may let robots for humans become a commodity in our near-future society. For this, we developed new generations of impedance-controlled ultra-lightweight robots, possibly equipped with Variable Impedance Actuation, previously at DLR and now in my new lab, which are designed to safely act as human assistants and collaborators at high performance over a variety of application domains. These may involve, e.g., industrial assembly and manufacturing, medical assistance, or healthcare helpers in everyone's home, but also neurally controlled assistive devices. A recent generation of lightweight robots was commercialized as the KUKA LBR iiwa, which is considered to be the first commercial representative of this new class of robots. Based on a smart mechatronics design, a robot (be it a manipulator, humanoid, or flying system) has to be equipped with, and also learn, the skills that enable it to perceive and manipulate its surroundings. Furthermore, it shall derive appropriate actions for successfully carrying out its given task, possibly in close collaboration with humans. At the same time, the primary objective of a robot's actions around humans is to ensure that even in case of malfunction or user errors no human is harmed and the surroundings are not damaged. For this, instantaneous, truly human-safe, and intelligent context-based force-sensitive controls and reactions to unforeseen events, partly inspired by the human motor control system, become crucial.

 

Bio: Sami Haddadin is Full Professor and Director of the Institute of Automatic Control (IRT) at Leibniz University Hanover (LUH), Germany. Until 2014 he was Scientific Coordinator "Terrestrial Assistance Systems" and "Human-Centered Robotics" at the DLR Robotics and Mechatronics Center. He was a visiting scholar at Stanford University in 2011 and a consulting scientist of Willow Garage, Inc., Palo Alto, until 2013. He received degrees in Electrical Engineering (2006), Computer Science (2009), and Technology Management (2008) from TUM and LMU, respectively. He obtained his PhD with summa cum laude from RWTH Aachen in 2011. His research topics include physical Human-Robot Interaction, nonlinear robot control, real-time motion planning, real-time task and reflex planning, robot learning, optimal control, human motor control, variable impedance actuation, neuro-prosthetics, and safety in robotics. He has served on the program/organization committees of several international robotics conferences and as a guest editor of IJRR. He is an associate editor of the IEEE Transactions on Robotics. He has published more than 100 scientific articles in international journals, conferences, and books. He received five best paper and video awards at ICRA/IROS, the 2008 Literati Best Paper Award, the euRobotics Technology Transfer Award 2011, and the 2012 George Giralt Award. He won the IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award in 2011 and 2013. He is a recipient of the 2015 IEEE/RAS Early Career Award, the 2015 RSS Early Career Spotlight, the 2015 Alfried Krupp Award for Young Professors and was selected as 2015 Capital Young Elite Leader under 40 in Germany for the domain "Politics, State & Society".

 

November 10 - Dmitry Berenson, WPI Toward General-Purpose Manipulation of Deformable Objects (video)

Imagine a robot that could perceive and manipulate rigid objects as skillfully as a human adult. Would a robot that had such amazing capabilities be able to perform the range of practical manipulation tasks we expect in settings such as the home? Consider that this robot would still be unable to prepare a meal, do laundry, or make a bed because these tasks involve deformable object manipulation. Unlike in rigid-body manipulation, where methods exist for general-purpose pick-and-place tasks regardless of the size and shape of the object, no such methods exist for a similarly broad and practical class of deformable object manipulation tasks. The problem is indeed challenging, as these objects are not straightforward to model and have infinite-dimensional configuration spaces, making it difficult to apply established motion planning approaches. Our approach seeks to bypass these difficulties by representing deformable objects using simplified geometric models at both the global and local planning levels. Though we cannot predict the state of the object precisely, we can nevertheless perform tasks such as cable-routing, cloth folding, and surgical probe insertion in geometrically-complex environments. Building on this work, our new projects in this area aim to blend exploration of the model space with goal-directed manipulation of deformable objects and to generalize the methods we have developed to motion planning for soft robot arms, where we can exploit contact to mitigate the actuation uncertainty inherent in these systems.

 

Bio: Dmitry Berenson received a BS in Electrical Engineering from Cornell University in 2005 and received his Ph.D. degree from the Robotics Institute at Carnegie Mellon University in 2011, where he was supported by an Intel PhD Fellowship. He completed a post-doc at UC Berkeley in 2011 and started as an Assistant Professor in Robotics Engineering and Computer Science at WPI in 2012. He founded and directs the Autonomous Robotic Collaboration (ARC) Lab at WPI, which focuses on motion planning, manipulation, and human-robot collaboration.

 

November 3 - Andrea Censi, MIT  Everything is the Same: Monotone Co-Design Problems (video)

I will present some recent work towards developing a "theory of co-design" that is rich enough to represent the trade-offs in the design of complex robotic systems, including the recursive constraints that involve energetics, propulsion, communication, computation, sensing, control, perception, and planning. I am developing a formalism in which "design problems" are the primitive objects, and multiple design problems can be composed to obtain "co-design" problems through operations analogous to series, parallel, and feedback composition. Certain monotonicity properties are preserved by these operations, from which it is possible to conclude existence and uniqueness of minimal feasible design trade-offs, as well as obtaining a systematic solution procedure. The mathematical tools used are the *really elementary* parts of the theory of fixed points on partially ordered sets (Kleene, Tarski, etc), of which no previous knowledge is assumed.  We will conclude that: choosing the smallest battery for a drone, optimizing your controller to work over a network of limited bandwidth, and defining the semantics of programming languages, are one and the same problem.
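
A toy illustration of the fixed-point machinery mentioned above: Kleene iteration of a monotone map from the bottom element converges to the least fixed point, which in the co-design reading is the minimal feasible design. The drone battery-sizing numbers below are invented placeholders, not results from the talk.

def least_fixed_point(f, bottom, max_iter=1000):
    """Kleene iteration: for a monotone map f on a partial order, iterating
    from the bottom element converges to the least fixed point (when it
    exists). In the co-design setting, f maps a tentative resource budget to
    the minimal resources actually required, and the least fixed point is the
    minimal feasible design."""
    x = bottom
    for _ in range(max_iter):
        fx = f(x)
        if fx == x:
            return x
        x = fx
    raise RuntimeError("no fixed point found (design may be infeasible)")

# hypothetical drone sizing loop: a bigger battery adds mass, which demands
# more power, which demands more battery -- numbers are invented placeholders
def required_battery(mass_kg):
    payload = 0.3                                 # kg, fixed payload (assumed)
    hover_power = 150.0 * (payload + mass_kg)     # W per kg of lifted mass, toy model
    energy_wh = hover_power * 0.25                # 15-minute endurance target
    return round(energy_wh / 200.0, 4)            # kg of battery at 200 Wh/kg

print(least_fixed_point(required_battery, bottom=0.0))   # smallest feasible battery mass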

 

October 27 - Aaron Steinfeld, CMU  Understanding and Creating Appropriate Robot Behavior (video)

End users expect appropriate robot actions, interventions, and requests for human assistance. As with most technologies, robots that behave in unexpected and inappropriate ways face misuse, abandonment, and sabotage. Complicating this challenge are human misperceptions of robot capability, intelligence, and performance. This talk will summarize work from several projects focused on this human-robot interaction challenge. Findings and examples will be shown from work on human trust in robots, deceptive robot behavior, robot motion, robot characteristics, and interaction with humans who are blind. I will also describe some lessons learned from related work in crowdsourcing (e.g., Tiramisu Transit) to help inform methods for enabling and supporting contributions by end users and local experts.

Bio: Aaron Steinfeld is an Associate Research Professor in the Robotics Institute (RI) at Carnegie Mellon University. He received his BSE, MSE, and Ph.D. degrees in Industrial and Operations Engineering from the University of Michigan and completed a Post Doc at U.C. Berkeley. He is the Co-Director of the Rehabilitation Engineering Research Center on Accessible Public Transportation (RERC-APT), Director of the DRRP on Inclusive Cloud and Web Computing, and the area lead for transportation related projects in the Quality of Life Technology Center (QoLT). His research focuses on operator assistance under constraints, i.e., how to enable timely and appropriate interaction when technology use is restricted through design, tasks, the environment, time pressures, and/or user abilities. His work includes intelligent transportation systems, crowdsourcing, human-robot interaction, rehabilitation, and universal design.

 

October 22 - David Held, Stanford University Using Motion to Understand Objects in the Real World (no video)

Many robots today are confined to operate in relatively simple, controlled environments. One reason for this is that current methods for processing visual data tend to break down when faced with occlusions, viewpoint changes, poor lighting, and other challenging but common situations that occur when robots are placed in the real world. I will show that we can train robots to handle these variations by inferring the causes behind visual appearance changes. If we model how the world changes over time, we can be robust to the types of changes that objects often undergo. I demonstrate this idea in the context of autonomous driving, and I show how we can use this idea to improve performance on three different tasks: velocity estimation, segmentation, and tracking with neural networks. By inferring the causes of appearance changes over time, we can make our methods more robust to a variety of challenging situations that commonly occur in the real world, thus enabling robots to come out of the factory and into our lives.

 

Bio: David Held is a Computer Science Ph.D. student at Stanford working with Sebastian Thrun and Silvio Savarese. His research interests include robotics, vision, and machine learning, with applications to tracking and object detection for autonomous driving. David has previously been a researcher at the Weizmann Institute and has worked in industry as a software developer. David has a Master's degree in Computer Science from Stanford and B.S. and M.S. degrees in Mechanical Engineering from MIT.
 

October 20 - Robotics Student/Faculty Mixer

 

October 7 - Matt Klingensmith, CMU  Articulated SLAM (no video)

Uncertainty is a central problem in robotics. In order to understand and interact with the world, robots need to interpret signals from noisy sensors to reconstruct clear models not only of the world around them, but also their own internal state. For example, a mobile robot navigating an unknown space must simultaneously reconstruct a model of the world around it, and localize itself against that model using noisy sensor data from wheel odometry, lasers, cameras, or other sensors. This problem (called the SLAM problem) is very well-studied in the domain of mobile robots. Less well-studied is the equivalent problem for robot manipulators. That is, given a multi-jointed robot arm with a noisy hand-mounted sensor, how can the robot simultaneously estimate its state and generate a coherent 3D model of the world? We call this the articulated SLAM problem.

Given actuator uncertainty and sensor uncertainty, what is the correct way of simultaneously constructing a model of the world and estimating the robot's state? In this work, we show that certain contemporary visual SLAM techniques can be mapped to the articulated SLAM problem by using the robot's joint configuration space as the state space for localization, rather than the typical SE(3). We map one kind of visual SLAM technique, Kinect Fusion, to the robot's configuration space, and show how the robot's joint encoders can be used appropriately to inform the pose of the camera. Critical to our analysis is the idea that the configuration of the robot is not merely a sensor that informs the pose of the camera, but rather the underlying latent state of the system. Tracking the configuration of the robot directly allows us to build algorithms on top of the SLAM system which depend on knowledge of the joint angles (such as motion planners and control systems).
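
A minimal sketch of the state-space shift the abstract describes, using a hypothetical planar arm: the camera pose is a function of the joint configuration q via forward kinematics, so q, rather than an SE(3) pose, is the latent state the SLAM system estimates. The link lengths and encoder values are placeholders.

import numpy as np

def camera_pose_from_config(q, link_lengths):
    """Forward kinematics of a toy planar arm: map the joint configuration q
    (the latent state being estimated) to the pose of a hand-mounted camera.
    Returns (x, y, heading) of the camera in the world frame."""
    x = y = 0.0
    theta = 0.0
    for angle, length in zip(q, link_lengths):
        theta += angle
        x += length * np.cos(theta)
        y += length * np.sin(theta)
    return np.array([x, y, theta])

# noisy encoders give a prior on q; the SLAM system then refines q itself
# (not the camera pose directly) against the depth data, which is the shift
# in state space the talk describes
q_encoders = np.array([0.4, -0.2, 0.1])          # measured joint angles (rad)
print(camera_pose_from_config(q_encoders, [0.3, 0.3, 0.2]))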

 

 

Spring 2015 Campus-wide Robotics Seminar (sponsored by Aurora Flight Sciences and The MathWorks)  (11am-noon in 1-190)

Seminar series youtube channel

May 12 - Dieter Fox, UW  RGB-D Perception in Robotics

RGB-D cameras provide per pixel color and depth information at high frame rate and resolution. Gaming and entertainment applications such as the Microsoft Kinect system resulted in the mass production of RGB-D cameras at extremely low cost, also making them available for a wide range of robotics applications. In this talk, I will provide an overview of depth camera research done in the Robotics and State Estimation Lab over the last six years. This work includes 3D mapping of static and dynamic scenes, autonomous object modeling and recognition, and articulated object tracking.

 

Bio: Dieter Fox is a Professor in the Department of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. From 2009 to 2011, he was also Director of the Intel Research Labs Seattle. He currently serves as the academic PI of the Intel Science and Technology Center for Pervasive Computing hosted at UW. Dieter obtained his Ph.D. from the University of Bonn, Germany. Before going to UW, he spent two years as a postdoctoral researcher at the CMU Robot Learning Lab. Fox's research is in artificial intelligence, with a focus on state estimation applied to robotics and activity recognition. He has published over 150 technical papers and is co-author of the textbook "Probabilistic Robotics". He is an IEEE and AAAI Fellow and has received several best paper awards at major robotics and AI conferences. He is an editor of the IEEE Transactions on Robotics, was program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and served as the program chair of the 2013 Robotics: Science and Systems conference.

May 5 - Russ Tedrake, MIT - CSAIL  MIT's Entry in the DARPA Robotics Challenge: Real-world, Interactive-rate Optimization for Humanoid Robots

On June 5-6 of this year, 25 of the most advanced robots in the world will descend on Pomona, California to compete in the final DARPA Robotics Challenge competition (http://theroboticschallenge.org). Each of these robots will be sent into a disaster response situation to perform complex locomotion and manipulation tasks with limited power and comms. Team MIT is one of only two academic teams to have survived all of the qualifying rounds, and we are working incredibly hard to showcase the power of our relatively formal approaches to perception, estimation, planning, and control.

 

In this talk, I’ll dig into a number of technical research nuggets that have come to fruition during this effort, including an optimization-based planning and control method for robust and agile online gait and manipulation planning, efficient mixed-integer optimization for negotiating rough terrain, convex relaxations for grasp optimization, powerful real-time perception systems, and essentially drift-free state estimation. I’ll discuss the formal and practical challenges of fielding these on a very complex (36+ degree of freedom) humanoid robot that absolutely has to work on game day.

Relevant URLs: http://drc.mit.edu, http://youtube.com/mitdrc

Apr 28 - Ioannis Poulakakis, University of Delaware  Legged Robots Across Scales: Integrating Motion Planning and Control through Canonical Locomotion Models

Abstract: On a macroscopic level, legged locomotion can be understood through reductive canonical models -- often termed templates -- the purpose of which is to capture the dominant features of an observed locomotion behavior without delving into the fine details of a robot’s (or animal’s) structure and morphology. Such models offer unifying, platform-independent, descriptions of task-level behaviors, and inform control design for legged robots. This talk will discuss reductive locomotion models for diverse legged robots, ranging from slow-moving, palm-size, eight-legged crawlers to larger bipeds and quadrupeds, and will focus on the role of such models in integrating locomotion control and motion planning within a unifying framework that translates task-level specifications to suitable low-level control actions that harness the locomotion capabilities of the robot platforms.

 

Bio: Prof. Poulakakis earned his Ph.D. in Electrical Engineering from the University of Michigan in 2008, served as a postdoctoral research associate at Princeton University for two years, and then joined the Department of Mechanical Engineering at the University of Delaware in 2010 as an Assistant Professor. His research interests are in the area of dynamics and control with application to bio-inspired robotic systems, specifically legged robots. In 2014 he received a Faculty Early Career Development Award from the National Science Foundation to investigate task planning and motion control for legged robots at different scales.

Apr 21 - No seminar - MIT MONDAY SCHEDULE (due to Patriots Day)

Apr 14 - Ted Adelson, MIT  GelSight sensors for high resolution touch sensing in robotics, and many other things

GelSight is a technology for high resolution touch sensing, which has a wide range of applications, some unexpected. A sensor consists of a slab of clear elastomer covered with a reflective membrane, along with an embedded camera and light system. The goal was to build a robot fingertip that could match the softness and sensitivity of human skin. Using machine vision (mainly photometric stereo) one can touch a surface and quickly derive high resolution 3D geometry, allowing estimates of shape, texture, and force. By adding internal markers one can estimate tangential interactions (friction, shear and slip). With collaborators we are learning how to use this information in robotic manipulation and surface sensing. GelSight’s extraordinarily high resolution has also led to a spin-off company, GelSight Inc., which makes instruments that measure micron-scale 3D geometry. Variants are being used commercially to support 3D printing, to enable forensics on bullet casings, to study human skin, and (in a large version) to measure feet for custom insoles.
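
For reference, here is a minimal sketch of classic Lambertian photometric stereo, the machine-vision building block mentioned above: with intensities observed under known illumination directions, the surface normal and albedo at a pixel follow from a small least-squares solve. This is the textbook method, not GelSight's calibrated pipeline, and the lighting directions are synthetic.

import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Classic Lambertian photometric stereo for a single pixel.
    intensities : (k,) observed brightness under k known illumination directions
    light_dirs  : (k, 3) unit lighting directions
    Solves I = L @ (albedo * n) in the least-squares sense and returns the
    unit surface normal and albedo."""
    L = np.asarray(light_dirs, dtype=float)
    I = np.asarray(intensities, dtype=float)
    g, *_ = np.linalg.lstsq(L, I, rcond=None)   # g = albedo * normal
    albedo = np.linalg.norm(g)
    normal = g / albedo
    return normal, albedo

# synthetic check: a tilted surface lit from three directions
true_n = np.array([0.0, 0.6, 0.8])
lights = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]]) / np.sqrt([1, 2, 2])[:, None]
obs = lights @ (0.9 * true_n)
print(photometric_stereo(obs, lights))   # recovers the normal and albedo 0.9

Integrating the per-pixel normals over the image is then what yields the high-resolution 3D geometry.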

Apr 7 - Allison Okamura, Stanford University Department of Mechanical Engineering  Modeling, Planning, and Control for Robot-Assisted Medical Interventions

Abstract: Many medical interventions today are qualitatively and quantitatively limited by human physical and cognitive capabilities. This talk will discuss several robot-assisted intervention techniques that will extend humans' ability to carry out interventions more accurately and less invasively. First, I will describe the development of minimally invasive systems that deliver therapy by steering needles through deformable tissue and around internal obstacles to reach specified targets. Second, I will review recent results in haptic (touch) feedback for robot-assisted teleoperated surgery, in particular the display of tissue mechanical properties. Finally, I will demonstrate the use of dynamic models of the body to drive novel rehabilitation strategies. All of these systems incorporate one or more key elements of robotic interventions: (1) quantitative descriptions of patient state, (2) the use of models to plan interventions, (3) the design of devices and control systems that connect information to physical action, and (4) the inclusion of human input in a natural way.

 

Biosketch: Allison M. Okamura received the BS degree from the University of California at Berkeley in 1994, and the MS and PhD degrees from Stanford University in 1996 and 2000, respectively, all in mechanical engineering. She is currently an Associate Professor in the mechanical engineering department at Stanford University, with a courtesy appointment in Computer Science. She is Editor-in-Chief of the IEEE International Conference on Robotics and Automation and an IEEE Fellow. Her academic interests include haptics, teleoperation, virtual and augmented reality, medical robotics, neuromechanics and rehabilitation, prosthetics, and engineering education. Outside academia, she enjoys spending time with her husband and two children, running, and playing ice hockey. For more information about her research, please see the Collaborative Haptics and Robotics in Medicine (CHARM) Laboratory website: http://charm.stanford.edu.

Mar 31 - student/faculty mixer

Mar 24 - MIT SPRING VACATION Special : Frank Dellaert, Georgia Tech  Factor Graphs for Flexible Inference in Robotics and Vision

Abstract: Simultaneous Localization and Mapping (SLAM) and Structure from Motion (SFM) are important and closely related problems in robotics and vision. I will show how both SLAM and SFM instances can be posed in terms of a graphical model, a factor graph, and that inference in these graphs can be understood as variable elimination. The overarching theme of the talk will be to emphasize the advantages and intuition that come with seeing these problems in terms of graphical models. For example, while the graphical model perspective is completely general, linearizing the non-linear factors and assuming Gaussian noise yields the familiar direct linear solvers such as Cholesky and QR factorization. Based on these insights, we have developed both batch and incremental algorithms defined on graphs in the SLAM/SFM domain. I will also discuss my recent work on using polynomial bases for trajectory optimization, inspired by pseudospectral optimal control, which is made easy by the new Expressions language in GTSAM 4, currently under development.
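
To make the "linearized Gaussian factors reduce to a direct linear solver" point concrete, here is a toy 1-D pose graph solved with a Cholesky factorization. The measurements and noise values are invented placeholders, and this is plain numpy rather than GTSAM.

import numpy as np

# Toy 1-D pose graph: prior on x0, odometry factors between consecutive poses,
# and a "loop closure" between x0 and x2. With Gaussian noise each factor is a
# row of a sparse linear system; eliminating variables is exactly a Cholesky
# (or QR) factorization of that system.
rows, rhs = [], []

def add_factor(coeffs, measurement, sigma):
    rows.append(np.array(coeffs, dtype=float) / sigma)   # whitened Jacobian row
    rhs.append(measurement / sigma)

add_factor([1, 0, 0], 0.0, 0.1)                 # prior: x0 ~ 0
add_factor([-1, 1, 0], 1.0, 0.2)                # odometry: x1 - x0 ~ 1
add_factor([0, -1, 1], 1.0, 0.2)                # odometry: x2 - x1 ~ 1
add_factor([-1, 0, 1], 2.1, 0.5)                # loop closure: x2 - x0 ~ 2.1

A, b = np.vstack(rows), np.array(rhs)
info = A.T @ A                                  # information (Hessian) matrix
L = np.linalg.cholesky(info)                    # variable elimination
x = np.linalg.solve(L.T, np.linalg.solve(L, A.T @ b))
print(x)                                        # MAP estimate of [x0, x1, x2]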

Bio: Frank Dellaert is currently on leave from the Georgia Institute of Technology for a stint as Chief Scientist of Skydio, a startup founded by MIT grads to create intuitive interfaces for micro-aerial vehicles. When not on leave, he is a Professor in the School of Interactive Computing and Director of the Robotics PhD program at Georgia Tech. His research interests lie in the overlap of Robotics and Computer vision, and he is particularly interested in graphical model techniques to solve large-scale problems in mapping and 3D reconstruction. You can find out about his group’s research and publications at https://borg.cc.gatech.edu and http://www.cc.gatech.edu/~dellaert. The GTSAM toolbox which embodies many of the ideas his group has worked on in the past few years is available for download at http://tinyurl.com/gtsam. But really hardcore users can ask to be plugged into our BitBucket motherlode. Just send mail to frank@cc.gatech.edu.

Mar 17 - Leslie Pack Kaelbling, MIT CSAIL  Making Robots Behave

The fields of AI and robotics have made great improvements in many individual subfields, including in motion planning, symbolic planning, probabilistic reasoning, perception, and learning. Our goal is to develop an integrated approach to solving very large problems that are hopelessly intractable to solve optimally. We make a number of approximations during planning, including serializing subtasks, factoring distributions, and determinizing stochastic dynamics, but regain robustness and effectiveness through a continuous state-estimation and replanning process. This approach is demonstrated in three robotic domains, each of which integrates perception, estimation, planning, and manipulation.

Mar 10 - Hanu Singh, Woods Hole Oceanographic Institute  Bipolar Robotics: Exploring the Arctic and the Antarctic with a stop for some Coral Reef Ecology in the Middle

The Arctic and Antarctic remain among the least explored parts of the world's oceans. This talk looks at efforts over the last decade to explore areas under ice which have traditionally been difficult to access. The focus of the talk will be on the robots, the role of communications over low-bandwidth acoustic links, navigation, and imaging and mapping methodologies. These issues will all be discussed within the context of real data collected on several expeditions to the Arctic and Antarctic.
http://www.whoi.edu/DSL/hanu
http://www.whoi.edu/oceanus/feature/the-jetyak
http://polardiscovery.whoi.edu/expedition2/index.html

Mar 3 - Brandon Basso, UC Berkeley  The 3D Robotics Open UAV Platform

3D Robotics is a venture-backed aerospace startup in Berkeley, California. At the heart of our platform is the Pixhawk autopilot, which runs on more UAVs in the world than any other autopilot and represents the world's largest open-source robotics project, Ardupilot. This talk will explore the technological advancements that have enabled an entirely open and viral UAV platform, from low-level estimation to high-level system architecture. Two recent advancements will be explored in detail: efficient algorithms for state estimation using low-cost IMUs, and a cloud-based architecture for real-time uplink and downlink from any internet-connected vehicle. Some concluding thoughts on future platform evolution and the growing consumer drone space will be presented.
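
As a minimal illustration of state estimation with low-cost IMUs, here is a textbook complementary filter for a single tilt angle, blending an integrated gyro rate with an accelerometer-derived angle. It is not the Pixhawk/Ardupilot estimator, and the gain, sample rate, and data are invented placeholders.

import numpy as np

def complementary_filter(gyro_rates, accels, dt=0.01, alpha=0.98):
    """Textbook complementary filter for a single tilt (pitch) angle.
    gyro_rates : angular rate about one axis (rad/s), fast but drifting
    accels     : (ax, az) accelerometer samples, noisy but drift-free
    The gyro integral is trusted at high frequency, the accelerometer-derived
    angle at low frequency."""
    angle = 0.0
    history = []
    for omega, (ax, az) in zip(gyro_rates, accels):
        accel_angle = np.arctan2(ax, az)                     # gravity direction
        angle = alpha * (angle + omega * dt) + (1 - alpha) * accel_angle
        history.append(angle)
    return history

# hypothetical data: a constant 0.1 rad tilt seen by a slightly biased gyro
gyro = [0.002] * 500                                  # bias-only rotation rate
acc = [(np.sin(0.1), np.cos(0.1))] * 500              # consistent gravity reading
print(complementary_filter(gyro, acc)[-1])            # settles near 0.1 rad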

Feb 24 - Russell H. Taylor, The Johns Hopkins University  Medical Robotics and Computer-Integrated Interventional Medicine

Computer-integrated interventional systems (CIIS) combine innovative algorithms, robotic devices, imaging systems, sensors, and human-machine interfaces to work cooperatively with surgeons in the planning and execution of surgery and other interventional procedures. The impact of CIIS on medicine in the next 20 years will be as great as that of Computer-Integrated Manufacturing on industrial production over the past 20 years. A novel partnership between human surgeons and machines, made possible by advances in computing and engineering technology, will overcome many of the limitations of traditional surgery. By extending human surgeons’ ability to plan and carry out surgical interventions more accurately and less invasively, CIIS systems will address a vital need to greatly reduce costs, improve clinical outcomes, and improve the efficiency of health care delivery.
This talk will describe past and emerging research themes in CIIS systems and illustrate them with examples drawn from our current research activities within Johns Hopkins University’s Engineering Research Center for Computer-Integrated Surgical Systems and Technology.

Bio: Russell H. Taylor received his Ph.D. in Computer Science from Stanford in 1976. He joined IBM Research in 1976, where he developed the AML robot language and managed the Automation Technology Department and (later) the Computer-Assisted Surgery Group before moving in 1995 to Johns Hopkins, where he is the John C. Malone Professor of Computer Science with joint appointments in Mechanical Engineering, Radiology, and Surgery and is also Director of the Engineering Research Center for Computer-Integrated Surgical Systems and Technology (CISST ERC) and of the Laboratory for Computational Sensing and Robotics (LCSR). He is the author of over 375 peer-reviewed publications, a Fellow of the IEEE, of the AIMBE, of the MICCAI Society, and of the Engineering School of the University of Tokyo. He is also a recipient of numerous awards, including the IEEE Robotics Pioneer Award, the MICCAI Society Enduring Impact Award, and the Maurice Müller Award for Excellence in Computer-Assisted Orthopaedic Surgery.


Fall 2014 Campus-wide Robotics Seminar

Dec 9 - Tim Bretl, U Illinois Urbana-Champaign   Mechanics, Manipulation, and Perception of an Elastic Rod (video)

Abstract: This talk is about robotic manipulation of canonical "deformable linear objects" like a Kirchhoff elastic rod (e.g., a flexible wire). I continue to be amazed by how much can be gained by looking carefully at the mechanics of these objects and at the underlying mathematics. For example, did you know that the free configuration space of an elastic rod is path-connected? I'll prove it, and tell you why it matters.

Bio: Timothy Bretl comes from the University of Illinois at Urbana-Champaign, where he is an Associate Professor of Aerospace Engineering and of the Coordinated Science Laboratory.

Website: http://bretl.csl.illinois.edu/

Photo: http://goo.gl/F7BpMz

Dec 2 - Steve LaValle, Professor, University of Illinois & Principal Scientist, Oculus/Facebook   Robotics Meets Virtual Reality (video)

Abstract: Roboticists are well positioned to strongly impact the rising field of virtual reality (VR). Using the latest technology, we can safely take control of your most trusted senses, thereby fooling your brain into believing you are in another world. VR has been around for a long time, but due to the recent convergence of sensing, display, and computation technologies, there is an unprecedented opportunity to explore this form of human augmentation with lightweight, low-cost materials and simple software platforms. Many of the issues are familiar to roboticists, such as position and orientation tracking from sensor data, maintaining features from vision data, and dynamical system modeling. In addition, there is an intense form of human-computer interaction (HCI) that requires re-examining core engineering principles with a direct infusion of perceptual psychology research. With the rapid rise in consumer VR, fundamental research questions are popping up everywhere, slicing across numerous disciplines from engineering to sociology to film to medicine. This talk will provide some perspective on where we have been and how roboticists can help participate in this exciting future!

Bio: Steve LaValle started working with Oculus VR in September 2012, a few days after their successful Kickstarter campaign, and was the head scientist up until the Facebook acquisition in March 2014. He developed perceptually tuned head tracking methods based on IMUs and computer vision. He also led a team of perceptual psychologists to provide principled approaches to virtual reality system calibration and the design of comfortable user experiences. In addition to his continuing work at Oculus, he is also Professor of Computer Science at the University of Illinois, which he joined in 2001. He has worked in robotics for over 20 years and is known for introducing the Rapidly exploring Random Tree (RRT) algorithm for motion planning and for his 2006 book, Planning Algorithms.
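To give a sense of the orientation-tracking problem mentioned in the abstract, the sketch below propagates a head-orientation quaternion by one gyroscope sample. It illustrates gyro integration only; a practical head tracker also corrects drift using accelerometer, magnetometer, or vision data. The function name and quaternion convention here are assumptions, not the Oculus implementation.

```python
import numpy as np

def integrate_gyro(q, omega, dt):
    """Propagate an orientation quaternion q = [w, x, y, z] by one gyro sample.

    omega : angular velocity in the body frame (rad/s), shape (3,)
    dt    : sample period (s)
    """
    rate = np.linalg.norm(omega)
    theta = rate * dt
    if theta < 1e-12:
        return q
    axis = omega / rate
    # Incremental rotation over this sample, as a unit quaternion.
    dq = np.concatenate(([np.cos(theta / 2.0)], np.sin(theta / 2.0) * axis))
    # Hamilton product q * dq applies the body-frame rotation increment.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = dq
    out = np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])
    return out / np.linalg.norm(out)
```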

Website: http://msl.cs.uiuc.edu/~lavalle/

Nov 25 - Richard Newcombe, University of Washington & Andrea Censi, MIT LIDS

Robotics video session: screening/voting session for the ICRA 2015 trailer

Nov 18 - Sachin Patil, UC Berkeley   Coping with Uncertainty in Robotic Navigation, Exploration, and Grasping

A key challenge in robotics is to robustly complete navigation, exploration, and manipulation tasks when the state of the world is uncertain. This is a fundamental problem in several application areas such as logistics, personal robotics, and healthcare where robots with imprecise actuation and sensing are being deployed in unstructured environments. In such a setting, it is necessary to reason about the acquisition of perceptual knowledge and to perform information gathering actions as necessary. In this talk, I will present an approach to motion planning under motion and sensing uncertainty called "belief space" planning where the objective is to trade off exploration (gathering information) and exploitation (performing actions) in the context of performing a task. In particular, I will present how we can use trajectory optimization to compute locally-optimal solutions to a determinized version of this problem in Gaussian belief spaces. I will show that it is possible to obtain significant computational speedups without explicitly optimizing over the covariances by considering a partial collocation approach. I will also address the problem of computing such trajectories, given that measurements may not be obtained during execution due to factors such as limited field of view of sensors and occlusions. I will demonstrate this approach in the context of robotic grasping in unknown environments where the robot has to simultaneously explore the environment and grasp occluded objects whose geometry and positions are initially unknown.
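For readers unfamiliar with Gaussian belief-space planning, the sketch below shows the kind of belief propagation such planners rely on: an EKF-style update under the common maximum-likelihood-observation assumption, in which the mean follows the dynamics and the covariance contracts according to the expected measurement. It is a generic sketch under assumed linearized models, not the speaker's implementation.

```python
import numpy as np

def propagate_belief(mu, Sigma, u, A, B, C, Q, R):
    """One EKF-style belief update used to determinize Gaussian belief-space planning.

    mu, Sigma : current belief mean and covariance
    u         : control input
    A, B      : (linearized) dynamics, x' = A x + B u + process noise
    C         : (linearized) observation model, z = C x + measurement noise
    Q, R      : process and measurement noise covariances
    """
    # Predict: push the belief through the dynamics.
    mu_pred = A @ mu + B @ u
    Sigma_pred = A @ Sigma @ A.T + Q
    # Update: assume the maximum-likelihood measurement is received, so the
    # mean is unchanged and only the covariance shrinks via the Kalman gain.
    S = C @ Sigma_pred @ C.T + R
    K = Sigma_pred @ C.T @ np.linalg.inv(S)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_pred
    return mu_pred, Sigma_new
```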

Nov 4 - Mark Cutkosky, Stanford   Bio-Inspired Dynamic Surface Grasping (video)

The adhesive system of the gecko has several remarkable properties that make it ideal for agility on vertical and overhanging surfaces. It requires very little preload for sticking, and (unlike sticky tape) very little effort to detach. It resists fouling when the gecko travels over dusty surfaces, and it is controllable: the amount of adhesion in the normal direction depends on the applied tangential force. Moreover, it is fast, allowing the gecko to climb at speeds of a meter per second. The desirable properties of the gecko's adhesive apparatus are a result of its unique, hierarchical structure, with feature sizes ranging from hundreds of nanometers to millimeters. Over the last several years, analogous features have been incorporated into various synthetic gecko-inspired adhesives, with gradually improving performance from the standpoints of adhesion, ease and speed of attachment and detachment, etc. In this talk we will explore recent developments to scale gecko-inspired directional adhesives beyond small wall-climbing robots to new applications including perching quadrotors and grappling space debris in orbit. These applications require scaling the adhesives to areas of 10x10cm or larger on flat or curved surfaces without loss in performance, and attachment in milliseconds to prevent bouncing. The solutions draw some inspiration from the arrangement of tendons and other compliant structures in the gecko's toe.
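A simple way to see the "controllable adhesion" property described above is the frictional-adhesion limit model, in which the normal pull-off force a directional adhesive can sustain grows roughly in proportion to the applied tangential load, up to a saturation point. The sketch below encodes that relationship with placeholder parameters; it is illustrative only and not a measured model of any adhesive in the talk.

```python
import math

def max_normal_adhesion(f_tangential, alpha_star_deg=12.0, f_tangential_max=10.0):
    """Illustrative frictional-adhesion limit (placeholder parameters).

    f_tangential     : applied tangential (shear) load (N)
    alpha_star_deg   : limit angle of the adhesive's force envelope (deg)
    f_tangential_max : shear load beyond which adhesion saturates (N)
    """
    # Adhesion available in the normal direction scales with shear, then saturates.
    f_t = min(f_tangential, f_tangential_max)
    return f_t * math.tan(math.radians(alpha_star_deg))
```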

Oct 28 - Robotics student/faculty mixer

Oct 21 - Anca Dragan, Carnegie Mellon  Interaction as Manipulation (video)

The goal of my research is to enable robots to autonomously produce behavior that reasons about function _and_ interaction with and around people. I aim to develop a formal understanding of interaction that leads to algorithms which are informed by mathematical models of how people interact with robots, enabling generalization across robot morphologies and interaction modalities.

In this talk, I will focus on one specific instance of this agenda: autonomously generating motion for coordination during human-robot collaborative manipulation. Most motion in robotics is purely functional: industrial robots move to package parts, vacuuming robots move to suck dust, and personal robots move to clean up a dirty table. This type of motion is ideal when the robot is performing a task in isolation. Collaboration, however, does not happen in isolation, and demands that we move beyond purely functional motion. In collaboration, the robot's motion has an observer, watching and interpreting the motion – inferring the robot's intent from the motion, and anticipating the robot's motion based on its intent. My work develops a mathematical model of these inferences, and integrates this model into motion planning, so that the robot can generate motion that matches people's expectations and clearly conveys its intent. In doing so, I draw on action interpretation theory, Bayesian inference, constrained trajectory optimization, and interactive learning. The resulting motion not only leads to more efficient collaboration, but also increases the fluency of the interaction as defined through both objective and subjective measures. The underlying formalism has been applied across robot morphologies, from manipulator arms to mobile robots, and across interaction modalities, such as motion, gestures, and shared autonomy with assistive arms.
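One common way to formalize the intent inference described above is a Bayesian model in which an observer scores each candidate goal by how efficiently the motion seen so far would reach it, using an exponential (noisily optimal) trajectory model. The sketch below is a generic version of that idea with assumed cost inputs, not the speaker's exact formulation.

```python
import numpy as np

def goal_posterior(cost_so_far, costs_to_go, optimal_costs, priors, beta=1.0):
    """Infer which goal an observed partial motion is heading toward.

    cost_so_far   : cost of the observed partial trajectory from start S to the
                    current point Q (a scalar, shared by all goals)
    costs_to_go   : optimal cost from Q to each candidate goal
    optimal_costs : optimal cost from S directly to each candidate goal
    priors        : prior probability of each goal
    beta          : rationality parameter (higher = closer to optimal motion)
    """
    costs_to_go = np.asarray(costs_to_go, dtype=float)
    optimal_costs = np.asarray(optimal_costs, dtype=float)
    # P(trajectory | goal) ~ exp(-beta*(C(S->Q) + C*(Q->G))) / exp(-beta*C*(S->G))
    loglik = -beta * (cost_so_far + costs_to_go - optimal_costs)
    unnorm = np.asarray(priors, dtype=float) * np.exp(loglik)
    return unnorm / unnorm.sum()
```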

Oct 14 - Sangbae Kim, MIT The actuation and the control of the MIT Cheetah (video)

Biological machines created by millions of years of evolution suggest a paradigm shift in robotic design. Realizing animals’ magnificent locomotive capabilities is the next big challenge in mobile robotic applications. The main theme of the MIT Biomimetic Robotics Laboratory is innovation through ‘principle extraction’ from biology. The embodiment of such innovations includes Stickybot, which employs the world’s first synthetic directional dry adhesive inspired by geckos, and the MIT Cheetah, designed after the fastest land animal. The design principles in structures, actuation, and control algorithms applied in the MIT Cheetah will be presented during the talk. Kim’s creations are opening new frontiers in robotics, leading to advanced mobile robots that can save lives in dangerous situations and new all-around robotic transportation systems for the mobility-impaired.

Oct 7 - Nick Roy, MIT  Project Wing: Self-flying vehicles for Package Delivery

Autonomous UAVs, or "self-flying vehicles", hold the promise of transforming a number of industries, and changing how we move things around the world. Building from the foundation of decades of research in autonomy and UAVs, Google launched Project Wing in 2012 and recently announced trials of a delivery service using a small fleet of autonomous UAVs in Australia. In this talk, I will provide an introduction to the work Google has been doing in developing this service, describe the capabilities (and limitations) of the vehicles, and talk briefly about the promise of UAVs in general.