IROS 2018 Workshop

Machine Learning in Robot Motion Planning

October 5, 2018 | Madrid, Spain




Important Dates

(deadlines are "anywhere on earth")

Aug 31 (extended from Aug 15): Extended abstract submission deadline
Sep 14: Notification of acceptance
Sep 28: Camera-ready deadline for full paper
Oct 5: Workshop

Description

Motion planning has a rich and varied history. The bulk of the research in planning has focused on the development of tractable algorithms with provable worst-case performance guarantees. In contrast, well-understood theory and practice in machine learning are concerned with expected performance (e.g. supervised learning). As affordable sensors, actuators and robots that navigate, interact and collect data proliferate, we are motivated to examine new algorithmic questions such as "What roles can statistical techniques play in overcoming traditional bottlenecks in planning?", "How do we maintain worst-case performance guarantees while leveraging learning to improve expected performance?" and "How can common limitations inherited from data-driven methods (e.g. covariate shift) be mitigated when such methods are combined with traditional planning methods?"

Both areas have much to contribute to each other in terms of methods, approaches, and insights, and yet motion planning and machine learning communities remain largely disjoint groups. There are four technical goals for this workshop in addition to encouraging dialogue between both communities:

  • Formalize paradigms in motion planning where statistical methods can play an essential role.
  • Identify learning algorithms that can alleviate planning bottlenecks.
  • Better understand common pitfalls of naively combining learning and planning and explore strategies for mitigation.
  • Arrive at a set of critical open questions that are at the intersection of the two fields.



Submissions

We solicit 3-page extended abstracts (page counts do not include references). Upon acceptance, the camera-ready version can be a full paper of up to 6 pages (excluding references). Submissions can include original research, position papers, and literature reviews that bridge the research areas of this workshop. Submissions will be externally reviewed and selected based on technical content and ability to positively contribute to the workshop. All accepted contributions will be presented in interactive poster sessions. A subset of accepted contributions will be featured in the workshop as spotlight presentations.

The following list contains some areas of interest, but work in other areas is also welcome:

  • machine learning in planning and related topics
  • learning representations for planning
  • planning with learnt models
  • learning heuristics in search
  • learning sampling techniques
  • resource allocation in planning
  • learning in sequential decision making settings
  • sample efficient learning
  • learning robust models to deal with distribution shifts
  • Bayesian models and novelty detection in decision making
  • online learning in decision making
  • learning applied to task and motion planning

We will accept papers in the official IEEE templates (LaTeX and Word). Submissions must meet the page restrictions (a maximum of 3 pages for extended abstracts and 6 pages for full papers), but may include additional pages as long as those pages contain only references. Reviewing will not be double-blind; please do not anonymize your submission.
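For LaTeX submissions, a minimal skeleton along these lines may be used as a starting point (a sketch only; please take the exact class options and packages from the official IEEE template):

    \documentclass[conference]{IEEEtran}
    \usepackage{graphicx}  % figures
    \usepackage{amsmath}   % equations

    \title{Your Extended Abstract Title}
    \author{\IEEEauthorblockN{First Author}
    \IEEEauthorblockA{Affiliation \\ email@example.com}}

    \begin{document}
    \maketitle

    \begin{abstract}
    A short summary of the contribution.
    \end{abstract}

    \section{Introduction}
    % Body text; at most 3 pages for extended abstracts.

    \bibliographystyle{IEEEtran}
    \bibliography{references}  % references do not count toward the limit
    \end{document}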

Papers and abstracts should be submitted through the following link: https://cmt3.research.microsoft.com/MLMP2018/.


Presentations

All accepted contributions will be presented in interactive poster sessions. We strongly recommend adhering to the following poster size:

Portrait configuration: 32 inches (width) x 40 inches (height)

We derived this size from the following data: each poster stand has a usable area of 74 inches (width) x 38 inches (height), and this area will be split between two posters appearing side by side. Unfortunately, the IROS organizers have notified us of limited availability of stands and space, so we kindly urge presenters to adhere to the specified dimensions.

Note that the poster session will take place in a different room from the main workshop. The room is available from 2:00 p.m. to 7:00 p.m. Presenters should set up soon after lunch and stay near their stands. Please check the schedule for the room number and timings.

Contributions selected for spotlight presentations should prepare a 5-minute talk, which will be followed by 1 minute of audience questions; during this time the next presenter should set up. All presenters should check in during the first coffee break and verify display settings. Please check the schedule for the presentation order.


Program

Location

Workshop: Main auditorium.

Poster: Hall 2-B (please set up only after 2:00 p.m.)

Time Topic Speaker
08:45 - 09:00 Introduction Sanjiban Choudhury
09:00 - 09:40 Learning in Heuristic Search-based Planning [pdf] Maxim Likhachev
09:40 - 10:20 Is motion planning overrated? [pdf] Jeannette Bohg
10:20 - 11:00 Robot Decision-Making under Uncertainty: From Data to Actions David Hsu
11:00 - 11:30 Coffee break
11:30 - 12:10 Spotlight Talks
12:10 - 12:50 What can we learn from demonstrations? [pdf] Marc Toussaint
12:50 - 13:30 Robot Navigation: From Abilities to Capabilities [pdf] Aleksandra Faust
13:30 - 14:30 Lunch
14:30 - 15:10 The Experienced Piano Movers Problem: New Piano. New Home. Same Mover. [pdf] Siddhartha Srinivasa
15:10 - 15:50 Dealing with Dead Ends in Goal-Directed Reinforcement Learning Andrey Kolobov
15:50 - 16:30 Machine Learning for Planning and Control [pdf] Byron Boots
16:30 - 17:00 Coffee break + Poster Session
17:00 - 17:30 Poster Session
17:30 - 18:30 Panel Discussion All Invited Speakers

Invited Talks


Maxim Likhachev, Carnegie Mellon University

Learning in Heuristic Search-based Planning

In this talk, I will first briefly go over different ways in which learning can be integrated into Search-based Planning. I will then talk in more detail about our recent and ongoing work on speeding up Search-based Planning from experience, our work on learning planning dimensions from demonstrations, and our ongoing work on integrating offline skill learning into Search-based Planning. I will mostly use examples from mobile manipulation to illustrate our work.

[pdf]
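To make one such integration concrete, the following purely illustrative sketch (ours, not code from the talk) shows weighted A* in Python where the heuristic is queried from a learned cost-to-go model; neighbors, cost and learned_h are hypothetical callables:

    import heapq
    import itertools

    def weighted_astar(start, goal, neighbors, cost, learned_h, eps=2.0):
        """Weighted A* with a learned heuristic: f = g + eps * h.

        neighbors(s)       -> iterable of successor states
        cost(s, s2)        -> non-negative edge cost
        learned_h(s, goal) -> estimated cost-to-go (e.g. a regression model)
        """
        tie = itertools.count()  # break ties between equal priorities
        open_list = [(eps * learned_h(start, goal), next(tie), start)]
        g = {start: 0.0}
        parent = {start: None}
        closed = set()
        while open_list:
            _, _, s = heapq.heappop(open_list)
            if s == goal:  # reconstruct the path back to the start
                path = []
                while s is not None:
                    path.append(s)
                    s = parent[s]
                return path[::-1]
            if s in closed:
                continue
            closed.add(s)
            for s2 in neighbors(s):
                g2 = g[s] + cost(s, s2)
                if g2 < g.get(s2, float("inf")):
                    g[s2] = g2
                    parent[s2] = s
                    heapq.heappush(
                        open_list,
                        (g2 + eps * learned_h(s2, goal), next(tie), s2))
        return None  # goal unreachable

Note that the usual eps-suboptimality bound of weighted A* assumes an admissible heuristic, a guarantee a learned model generally cannot provide; managing this trade-off is precisely the kind of question raised above.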

Jeannette Bohg, Stanford University

Is Motion Planning Overrated?

I present a fully integrated system that emphasises the importance of continuous, real-time perception and its tight integration with reactive motion generation methods for robotic grasping and manipulation in a dynamic and uncertain environment. We extensively evaluated this system and compared it against baselines in manipulation scenarios that exhibit either challenging workspace geometry or a dynamic environment. We found that pure feedback control brings us surprisingly far, but one can also easily construct scenarios that demand look-ahead over a longer time horizon. This system does not rely on any machine learning; it is purely based on optimisation and model-based approaches. Nevertheless, these findings have implications for what kinds of problems in motion generation we may want to address through machine learning. I will specifically focus on relaxing some of the assumptions made in the above manipulation system and present our most recent work on combining learned and analytical models within a model-predictive controller. The controller consumes a sequence of images as input and outputs an optimised sequence of actions over a certain time horizon. We show how this combination addresses the covariate-shift limitation found in purely data-driven methods.

[pdf]
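As a toy illustration of the hybrid idea (our sketch under simplifying assumptions, not the system from the talk), the snippet below runs random-shooting model-predictive control over a dynamics model composed of an analytical part and a learned residual; analytic_step, residual_model and cost are hypothetical placeholders:

    import numpy as np

    def hybrid_step(x, u, analytic_step, residual_model):
        """One-step prediction: analytical model plus learned correction."""
        return analytic_step(x, u) + residual_model(x, u)

    def mpc_action(x0, analytic_step, residual_model, cost,
                   horizon=10, n_samples=256, u_dim=2, u_max=1.0, rng=None):
        """Random-shooting MPC: sample action sequences, roll out the
        hybrid model, and return the first action of the cheapest one."""
        rng = rng or np.random.default_rng()
        U = rng.uniform(-u_max, u_max, size=(n_samples, horizon, u_dim))
        best_u, best_c = U[0, 0], np.inf
        for seq in U:
            x, c = x0, 0.0
            for u in seq:
                x = hybrid_step(x, u, analytic_step, residual_model)
                c += cost(x, u)
            if c < best_c:
                best_c, best_u = c, seq[0]
        return best_u  # execute one step, observe, then re-plan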

David Hsu, National University of Singapore

Robot Decision-Making under Uncertainty: From Data to Actions

Planning and learning are two primary approaches to robot intelligence. Planning enables us to reason about the consequence of immediate actions far into the future, but it requires accurate world models, which are often difficult to acquire in practice. Policy learning circumvents the need for models and learns from data a direct mapping from robot perceptual inputs to actions. However, without models, it is much more difficult to generalize and adapt learned policies to new contexts. In this talk, I will present our recent work on robust robot decision-making under uncertainty through planning, through learning, and most importantly by integrating planning and learning. This integration (i) improves planning by learning a model optimized for a specific planning algorithm and (ii) improves learning by incorporating the planning algorithm as a structure prior. It seamlessly fuses model-based planning and model-free learning.
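One direction of this integration, planning with a model estimated from data, can be illustrated with a minimal tabular sketch (ours, not the method from the talk): transition probabilities are estimated by counting observed transitions, and value iteration then plans against the learned model.

    import numpy as np

    def learn_model(transitions, n_states, n_actions):
        """Estimate P(s' | s, a) by counting observed (s, a, s') triples."""
        counts = np.full((n_states, n_actions, n_states), 1e-3)  # smoothing
        for s, a, s2 in transitions:
            counts[s, a, s2] += 1.0
        return counts / counts.sum(axis=2, keepdims=True)

    def value_iteration(P, R, gamma=0.95, tol=1e-6):
        """Plan against the learned model; R[s, a] is the expected reward."""
        V = np.zeros(P.shape[0])
        while True:
            Q = R + gamma * (P @ V)  # Q[s, a], expectation over s'
            V_new = Q.max(axis=1)
            if np.abs(V_new - V).max() < tol:
                return Q.argmax(axis=1), V_new  # greedy policy and values
            V = V_new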


Marc Toussaint, University of Stuttgart

What can we learn from demonstrations?

[pdf]

Aleksandra Faust, Google Brain

Robot Navigation: From Abilities to Capabilities

Robot sensors, geometry, dynamics, and even the type and quality of actuators define a robot's motion abilities. These abilities determine what the basic motion skills, such as reaching a goal or following a path without collisions, look like for a particular robot. Robots with noisier sensors might move more slowly, while smaller robots might be able to fit through narrower spaces. The basic motion skills can be learned through exploration and interaction with environments, independently of specific navigation tasks and environments. Once learned, the robot can use them as building blocks for more complex navigation capabilities in different environments. In this talk, we present learning of robust basic motion skills, short-distance goal navigation and path following, which take noisy sensor observations as input and output wheel velocities. Next, we examine building up from the basic motion skills. We discuss navigation through indoor buildings from floor maps, and navigation by following natural language instructions.

[pdf]
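The compositional idea can be sketched as a simple control loop (an illustrative skeleton; plan_waypoints, local_policy and the robot interface are hypothetical names, with local_policy standing in for a learned goal-reaching skill):

    def navigate(robot, floor_map, goal, plan_waypoints, local_policy,
                 reach_tol=0.5):
        """Compose a learned goal-reaching skill with a map-level planner.

        plan_waypoints(floor_map, start, goal) -> list of intermediate goals
        local_policy(observation, waypoint)    -> (v_left, v_right)
        """
        waypoints = plan_waypoints(floor_map, robot.position(), goal)
        for wp in waypoints:
            while robot.distance_to(wp) > reach_tol:
                obs = robot.sense()  # noisy sensor observations
                v_left, v_right = local_policy(obs, wp)
                robot.set_wheel_velocities(v_left, v_right)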

Siddhartha Srinivasa, University of Washington

The Experienced Piano Movers Problem: New Piano. New Home. Same Mover.

[pdf]

Andrey Kolobov, Microsoft Research

Dealing with Dead Ends in Goal-Directed Reinforcement Learning

In many reinforcement learning scenarios, the agent strives to achieve a goal by ending up in a desirable state. Learning a good policy to get there, in itself an involved problem since the world dynamics are generally unknown to the agent at the start, can be further complicated by the presence of dead ends, states from which reaching the goal is impossible. Running into a dead end in the physical world means, at best, a premature end to an agent's mission, as in the case of the Mars rover Spirit, which got stuck in unexpectedly treacherous terrain. Even in simulated environments, dead ends cause serious computational issues, slowing exploration and causing RL algorithms oblivious to them to produce unsafe policies. This talk will discuss principled optimization objectives that take dead ends into account, survey methods for maximizing them when MDP dynamics are known, and pose research questions about dealing with dead ends in model-free settings.
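One family of such objectives (a sketch following the commonly studied MAXPROB-style formulation, not necessarily the exact criteria of the talk) first maximizes the probability of ever reaching the goal set G, and then minimizes expected cost among the maximizing policies:

    \Pi_{\max} = \operatorname*{arg\,max}_{\pi}
        \Pr\nolimits^{\pi}\big(\exists t : s_t \in G \mid s_0\big),
    \qquad
    \pi^{*} \in \operatorname*{arg\,min}_{\pi \in \Pi_{\max}}
        \mathbb{E}^{\pi}\Big[\sum_{t} c(s_t, a_t) \,\Big|\, \text{goal reached}\Big].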


Byron Boots, Georgia Institute of Technology

Machine Learning for Planning and Control

[pdf]


Spotlight Talks

ID Title Authors
#9 Model-Based Reinforcement Learning via Meta-Policy Optimization Jonas Rothfuss, Ignasi Clavera, John Schulman, Tamim Asfour and Pieter Abbeel
#11 Learning a Value Function Based Heuristic for Physics Based Manipulation Planning in Clutter Wissam Bejjani, Rafael Papallas, Matteo Leonetti and Mehmet Dogar
#13 Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models Kurtland Chua, Roberto Calandra, Rowan McAllister and Sergey Levine
#22 Safe Reinforcement Learning with Model Uncertainty Estimates Björn Lütjens, Michael Everett and Jonathan How
#26 Towards Learning Disentangled Objective Functions for Motion Planning with Value Iteration Networks Jim Mainprice and Jie Zhong
#29 Non-prehensile Rearrangement Planning with Learned Manipulation States and Actions Joshua A Haustein, Isac Arnekvist, Johannes A. Stork, Kaiyu Hang and Danica Kragic

Poster Presentations

ID Title Authors
#9 Model-Based Reinforcement Learning via Meta-Policy Optimization Jonas Rothfuss, Ignasi Clavera, John Schulman, Tamim Asfour and Pieter Abbeel
#10 Intention-based motion-adaptation using dynamical systems with human in the loop Mahdi Khoramshahi and Aude Billard
#11 Learning a Value Function Based Heuristic for Physics Based Manipulation Planning in Clutter Wissam Bejjani, Rafael Papallas, Matteo Leonetti and Mehmet Dogar
#13 Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models Kurtland Chua, Roberto Calandra, Rowan McAllister and Sergey Levine
#15 Discontinuity-Sensitive Optimal Trajectory Learning by Mixture of Experts Gao Tang and Kris Hauser
#16 Deep Convolutional Terrain Assessment for Visual Reactive Footstep Correction on Dynamic Legged Robots Octavio Villarreal, Victor Barasuol, Marco Camurri, Michele Focchi, Luca Franceschi, Massimiliano Pontil, Darwin G. Caldwell and Claudio Semini
#17 Provable Infinite-Horizon Real-Time Planning for Repetitive Tasks Fahad Islam, Oren Salzman and Maxim Likhachev
#18 Learning to Utilize Context to Plan Beyond Sensing Horizon Michael Everett and Jonathan How
#19 Neural End-to-End Learning of Reach for Grasp Ability with a 6-DoF Robot Arm Hadi Beik Mohammadi, Michael Görner, Manfred Eppe, Stefan Wermter, Matthias Kerzel and Mohammad Ali Zamani
#20 Learning Adaptive Sampling Distributions for Motion Planning by Self-Imitation Ratnesh Madaan, Sam Zeng, Brian Okorn and Sebastian Scherer
#21 Planning to Poke: Sampling-based Planning with Self-Explored Neural Forward Models Michael Görner, Lars Henning Kayser, Matthias Kerzel, Stefan Wermter and Jianwei Zhang
#22 Safe Reinforcement Learning with Model Uncertainty Estimates Björn Lütjens, Michael Everett and Jonathan How
#23 Deep sequential models for sampling-based planning Yen-Ling Kuo, Andrei Barbu and Boris Katz
#24 BAgger: A Bayesian Algorithm for Safe and Query-efficient Imitation Learning Constantin Cronrath, Emilio Jorge, John Moberg, Mats Jirstrand and Bengt Lennartson
#25 Rapidly Exploring Random Search Explorer Aakriti K Upadhyay and Chinwe Ekenna
#26 Towards Learning Disentangled Objective Functions for Motion Planning with Value Iteration Networks Jim Mainprice and Jie Zhong
#27 Learning structured transition models for multi-object manipulation Victoria Xia, Zi Wang and Leslie Kaelbling
#28 Deep Conditional Generative Models for Heuristic Search on Graphs with Expensive-to-Evaluate Edges Brian Hou and Siddhartha Srinivasa
#29 Non-prehensile Rearrangement Planning with Learned Manipulation States and Actions Joshua A Haustein, Isac Arnekvist, Johannes A. Stork, Kaiyu Hang and Danica Kragic

Organizers

Organizing Committee


Sanjiban Choudhury

University of Washington


Debadeepta Dey

Microsoft Research


Siddhartha Srinivasa

University of Washington


Marc Toussaint

University of Stuttgart

Byron Boots

Georgia Institute of Technology


Program Committee

Lydia Kavraki, Rice University

Geoff Hollinger, Oregon State University

Marco Pavone, Stanford University

Paloma Sodhi, Carnegie Mellon University

Chinwe Ekenna, University at Albany

Aleksandra Faust, Google Brain

Andrey Kolobov, Microsoft Research

Oren Salzman, Carnegie Mellon University

Shushman Choudhury, Stanford University

Luigi Palmieri, Bosch

Shawna Thomas, Texas A&M University

Maxim Likhachev, Carnegie Mellon University

David Hsu, National University of Singapore

Gilwoo Lee, University of Washington

Jeannette Bohg, Stanford University

Peter Englert, University of Stuttgart