Poster
Credit Assignment For Collective Multiagent RL With Global Rewards
Duc Thien Nguyen · Akshat Kumar · Hoong Chuin Lau
Room 517 AB #160
Keywords: [ Multi-Agent RL ] [ Markov Decision Processes ] [ Planning ]
Scaling decision-theoretic planning to large multiagent systems is challenging due to uncertainty and partial observability in the environment. We focus on a multiagent planning model subclass, relevant to urban settings, where agent interactions depend on their "collective influence" on each other rather than on their identities. Unlike previous work, we address a general setting where the system reward is not decomposable among agents. We develop collective actor-critic RL approaches for this setting, addressing the problem of multiagent credit assignment and of computing low-variance policy gradient estimates that result in faster convergence to high-quality solutions. We also develop difference-rewards-based credit assignment methods for the collective setting. Empirically, our new approaches provide significantly better solutions than previous methods in the presence of global rewards on two real-world problems modeling taxi fleet optimization and multiagent patrolling, as well as a synthetic grid navigation domain.
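For readers unfamiliar with difference rewards, the sketch below illustrates the general idea (it is not the paper's specific collective formulation): each agent's credit is the global reward minus a counterfactual global reward computed with that agent's action replaced by a default action. The global reward function and default action here are hypothetical placeholders.

```python
from typing import Callable, List

def difference_rewards(
    actions: List[int],
    global_reward: Callable[[List[int]], float],
    default_action: int = 0,
) -> List[float]:
    """Illustrative difference-rewards credit assignment.

    For each agent i, compute D_i = G(a) - G(a with a_i replaced by a default),
    so an agent is credited only for its marginal contribution to the
    global reward. This is the classic formulation, not necessarily the
    collective variant developed in the paper.
    """
    g = global_reward(actions)
    credits = []
    for i in range(len(actions)):
        counterfactual = list(actions)
        counterfactual[i] = default_action  # remove agent i's contribution
        credits.append(g - global_reward(counterfactual))
    return credits

if __name__ == "__main__":
    # Hypothetical toy global reward: number of distinct locations covered.
    def coverage_reward(actions: List[int]) -> float:
        return float(len(set(a for a in actions if a != 0)))

    joint_action = [1, 1, 2, 3]  # agents choosing locations; 0 = stay idle
    print(difference_rewards(joint_action, coverage_reward))
```

In an actor-critic setup, such per-agent credits would replace the raw global reward in the policy gradient, which typically reduces gradient variance when many agents contribute to a single shared reward.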