Poster in Workshop: Bayesian Deep Learning
Latent Goal Allocation for Multi-Agent Goal-Conditioned Self-Supervised Learning
Laixi Shi · Peide Huang · Rui Chen
Multi-agent learning plays an essential role in many practical applications, including game theory and autonomous driving. Meanwhile, goal-conditioned learning has attracted a surge of interest owing to its capability of solving a rich variety of tasks and configurations. Nevertheless, scenarios combining the multi-agent and goal-conditioned settings have not been considered previously, owing to the daunting challenges posed by each. In this work, we target multi-agent goal-conditioned tasks, with the objective of learning a universal policy for multiple agents to reach a set of sub-goals. This task requires each agent to act differently depending on its assigned sub-goal. Since direct action rewards and per-agent sub-goal assignment labels are often unavailable, we resort to imitation learning from expert demonstrations alone, requiring neither rewards nor sub-goal assignment labels. To this end, we propose a probabilistic graphical model, named Latent Goal Allocation (LGA), which explicitly models the sub-goal assignment as a latent variable that generates the corresponding action for each agent. Experiments show that LGA outperforms existing baselines while producing interpretable sub-goal assignments.
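The core idea of treating the sub-goal assignment as a latent variable can be sketched as a mixture model: the demonstrated action is explained by marginalizing over which sub-goal the agent was pursuing, and the posterior over that latent variable recovers an interpretable assignment. The sketch below is a minimal illustration under simplifying assumptions not taken from the paper: a hand-coded Gaussian policy per sub-goal (`policy_mean` simply steers the state toward the goal) and a fixed assignment prior; the actual LGA model and its training procedure are not reproduced here.

```python
import numpy as np

def gaussian_logpdf(a, mean, sigma):
    """Log-density of an isotropic Gaussian with scalar std sigma."""
    return (-0.5 * np.sum(((a - mean) / sigma) ** 2)
            - a.size * np.log(sigma * np.sqrt(2.0 * np.pi)))

def policy_mean(state, goal):
    """Hypothetical per-sub-goal policy: the mean action steers toward the goal."""
    return goal - state

def marginal_log_likelihood(state, action, goals, prior, sigma=0.5):
    """log p(a|s) = log sum_g p(g) pi(a|s,g), marginalizing the latent assignment.

    Returns the marginal log-likelihood and the per-goal log-joint terms,
    computed with the log-sum-exp trick for numerical stability.
    """
    logps = np.array([gaussian_logpdf(action, policy_mean(state, g), sigma)
                      for g in goals])
    logjoint = np.log(prior) + logps
    m = logjoint.max()
    return m + np.log(np.exp(logjoint - m).sum()), logjoint

def assignment_posterior(logjoint):
    """Posterior p(g|s,a) over the latent sub-goal assignment (responsibilities)."""
    w = np.exp(logjoint - logjoint.max())
    return w / w.sum()

if __name__ == "__main__":
    state = np.zeros(2)
    goals = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
    prior = np.array([0.5, 0.5])
    # A demonstrated action moving right is best explained by the right-hand goal.
    action = np.array([0.9, 0.0])
    ll, logjoint = marginal_log_likelihood(state, action, goals, prior)
    post = assignment_posterior(logjoint)
    print(ll, post)
```

Maximizing this marginal log-likelihood over the demonstrations would train the policies without assignment labels, while the posterior yields the interpretable sub-goal assignment the abstract refers to.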