Work Smarter Not Harder: Simple Imitation Learning with CS-PIBT Outperforms Large Scale Imitation Learning for MAPF

1Carnegie Mellon University, 2Solon High School
ICRA 2025

Abstract

Multi-Agent Path Finding (MAPF) is the problem of effectively finding efficient collision-free paths for a group of agents in a shared workspace. The MAPF community has largely focused on developing high-performance heuristic search methods. Recently, several works have applied various machine learning (ML) techniques to solve MAPF, usually involving sophisticated architectures, reinforcement learning techniques, and set-ups, but none using large amounts of high-quality supervised data. Our initial objective in this work was to show how simple large scale imitation learning of high-quality heuristic search methods can lead to state-of-the-art ML MAPF performance. However, we find that, at least with our model architecture, simple large scale (700k examples with hundreds of agents per example) imitation learning does not produce impressive results. Instead, we find that by using prior work that post-processes MAPF model predictions to resolve 1-step collisions (CS-PIBT), we can train a simple ML MAPF model in minutes that dramatically outperforms existing ML MAPF policies. This has serious implications for all future ML MAPF policies (with local communication) which currently struggle to scale. In particular, this finding implies that future learnt policies should (1) always use smart 1-step collision shields (e.g. CS-PIBT), (2) always include the collision shield with greedy actions as a baseline (e.g. PIBT) and (3) motivates future models to focus on longer horizon / more complex planning as 1-step collisions can be efficiently resolved.

Using Large Scale Imitation Learning for MAPF


From a diverse set of maps, we can use existing strong (centralized) heuristic search solvers to solve instances with hundreds of agents. This allows us to easily collect thousands of MAPF solutions. Each solution is a sequence of timesteps, where each timestep contains tens to hundreds of agents with their next-action labels.
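As a rough sketch, a solved instance can be flattened into per-timestep supervised examples like this (the path representation and 5-action encoding here are our illustrative assumptions, not the paper's exact pipeline):

```python
# Sketch: converting a centralized MAPF solution into per-timestep
# supervised examples. The (row, col) path format and the standard
# 5-action encoding on a 4-connected grid are illustrative assumptions.
ACTIONS = {(0, 0): 0, (0, 1): 1, (0, -1): 2, (1, 0): 3, (-1, 0): 4}

def solution_to_examples(paths):
    """paths: one path per agent, each a list of (row, col) cells.

    Returns one training example per timestep: the joint agent
    positions plus each agent's next-action label."""
    horizon = max(len(p) for p in paths)
    examples = []
    for t in range(horizon - 1):
        positions, labels = [], []
        for path in paths:
            # Agents that have reached their goal keep waiting there.
            cur = path[min(t, len(path) - 1)]
            nxt = path[min(t + 1, len(path) - 1)]
            positions.append(cur)
            labels.append(ACTIONS[(nxt[0] - cur[0], nxt[1] - cur[1])])
        examples.append((positions, labels))
    return examples
```

Since every timestep of every solution yields one example with many agent labels, a few thousand solved instances already produce the hundreds of thousands of examples used for large scale imitation learning.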

We can then follow existing work and use a Graph Neural Network (GNN), where each agent is a "vertex", and agents that fall within each other's local field of view (FoV), and can therefore communicate, are connected by "edges". The GNN is trained to predict each agent's action label.
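Constructing the communication graph from agent positions can be sketched as follows (the FoV radius and the Chebyshev-distance criterion for a square FoV window are assumptions for illustration):

```python
# Sketch: building the agent communication graph for the GNN.
# A square field of view corresponds to Chebyshev distance; the
# radius value here is an illustrative assumption.

def build_edges(positions, fov_radius=4):
    """Each agent is a vertex; agents within each other's FoV are
    connected by directed edges in both directions."""
    edges = []
    for i, (ri, ci) in enumerate(positions):
        for j, (rj, cj) in enumerate(positions):
            if i == j:
                continue
            # Chebyshev distance matches a square FoV window.
            if max(abs(ri - rj), abs(ci - cj)) <= fov_radius:
                edges.append((i, j))
    return edges
```

The resulting edge list is rebuilt every timestep, since agents move and the communication graph changes with them.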

Results using Naive Collision Shielding


Increasing the amount of imitation learning data increases model performance. With a large amount of data, our model outperforms EPH, the prior state-of-the-art ML method. However, it still struggles in congested regions.
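For context, a naive collision shield simply reverts any agent whose proposed move conflicts with another agent to a wait action. The sketch below (our illustrative formulation, iterating to a fixed point since one revert can create new conflicts) shows why this is so limiting in congestion: colliding agents just stop rather than finding alternative moves:

```python
# Sketch of a naive collision shield: any agent whose model-proposed
# move causes a vertex conflict (same cell) or edge/swap conflict
# (exchanging cells) is reverted to waiting. Reverting one agent can
# create new conflicts, so we iterate until no changes occur.

def naive_shield(current, proposed):
    """current/proposed: per-agent cells at t and at t+1 (model output).
    Returns a collision-free joint move, forcing conflicts to wait."""
    final = list(proposed)
    changed = True
    while changed:
        changed = False
        for i in range(len(final)):
            if final[i] == current[i]:
                continue  # already waiting
            n = len(final)
            vertex = any(final[j] == final[i] for j in range(n) if j != i)
            swap = any(final[j] == current[i] and current[j] == final[i]
                       for j in range(n) if j != i)
            if vertex or swap:
                final[i] = current[i]  # revert this agent to wait
                changed = True
    return final
```

In congested regions many agents' moves conflict, so large groups of agents are frozen in place, which matches the poor scaling observed with naive shielding.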

Results using CS-PIBT


Using CS-PIBT is a game changer. We can train a model in minutes that outperforms existing ML MAPF policies, which implies that 1-step collisions are a major bottleneck for existing ML MAPF policies. This has serious implications for all future ML MAPF policies (with local communication), which currently struggle to scale.
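The key idea behind CS-PIBT is that instead of forcing colliding agents to wait, the shield lets each agent fall back through its model-ranked actions. The sketch below is a deliberately simplified greedy variant: full CS-PIBT runs PIBT with the model's action distribution as the preference ordering, including PIBT's priority inheritance and backtracking, which this sketch omits:

```python
# Simplified sketch of a CS-PIBT-style collision shield. Each agent,
# in priority order, tries its model-ranked candidate moves until one
# is collision-free with higher-priority agents. Full CS-PIBT uses
# PIBT (priority inheritance + backtracking); this greedy variant
# omits both for brevity.

def cs_shield(current, ranked_moves, priorities):
    """ranked_moves[i]: agent i's candidate next cells, best first,
    taken from the model's per-action distribution."""
    order = sorted(range(len(current)), key=lambda i: -priorities[i])
    claimed = {}                    # next cell -> claiming agent
    final = [None] * len(current)
    for i in order:
        for cell in ranked_moves[i] + [current[i]]:  # wait as fallback
            vertex = cell in claimed
            swap = any(final[j] == current[i] and current[j] == cell
                       for j in range(len(current)) if final[j] is not None)
            if not vertex and not swap:
                claimed[cell] = i
                final[i] = cell
                break
        else:
            # No safe action found; full PIBT would backtrack or
            # inherit priority here instead of leaving the agent stuck.
            final[i] = current[i]
    return final
```

Because blocked agents take their next-best move rather than freezing, congestion dissolves instead of cascading, which is why a model trained in minutes can outperform far more elaborate policies once it is paired with this shield.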

However, one caveat is that with CS-PIBT, the model may not need to learn anything useful, as CS-PIBT could do all the heavy lifting of resolving collisions. Thus, it is important to compare against a baseline that applies CS-PIBT to greedy actions (i.e., plain PIBT). Our comparison shows that our model with CS-PIBT does not convincingly outperform PIBT.

See our paper for more details!

BibTeX

@article{veerapaneni2024worksmarterhardersimple,
      title={Work Smarter Not Harder: Simple Imitation Learning with CS-PIBT Outperforms Large Scale Imitation Learning for MAPF}, 
      author={Rishi Veerapaneni and Arthur Jakobsson and Kevin Ren and Samuel Kim and Jiaoyang Li and Maxim Likhachev},
      year={2024},
      eprint={2409.14491},
      archivePrefix={arXiv},
      primaryClass={cs.MA},
      url={https://arxiv.org/abs/2409.14491}, 
}