A Very Condensed Survey and Critique of Multiagent Deep Reinforcement Learning
Full Text: p2146.pdf

Deep reinforcement learning (RL) has achieved outstanding results in recent years, leading to a dramatic increase in the number of applications and methods. Recent works have explored learning beyond single-agent scenarios and have considered multiagent learning (MAL) settings. Initial results report successes in complex multiagent domains, although several challenges remain to be addressed. The primary goal of this extended abstract is to provide a broad overview of the current multiagent deep reinforcement learning (MDRL) literature, hopefully motivating the reader to review our 47-page JAAMAS survey article [28]. Additionally, we complement the overview with a broader analysis: (i) We revisit previous key components, originally presented in MAL and RL, and highlight how they have been adapted to multiagent deep reinforcement learning settings. (ii) We provide general guidelines for new practitioners in the area: describing lessons learned from MDRL works, pointing to recent benchmarks, and outlining open avenues of research. (iii) We take a more critical tone, raising practical challenges of MDRL.
Citation
P. Hernandez-Leal, B. Kartal, M. Taylor. "A Very Condensed Survey and Critique of Multiagent Deep Reinforcement Learning". Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pp. 2146-2148, May 2020.

Keywords: multiagent learning, reinforcement learning, survey
Category: In Conference
BibTeX
@inproceedings{Leal+al:AAMAS20,
  author    = {Pablo Francisco Hernandez Leal and Bilal Kartal and Matthew E. Taylor},
  title     = {A Very Condensed Survey and Critique of Multiagent Deep Reinforcement Learning},
  booktitle = {Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS)},
  pages     = {2146--2148},
  year      = {2020},
}

Last Updated: January 19, 2021