8.2 Related Work

Research on general-sum games has a long history in game theory and evolutionary studies. Many papers have addressed the iterated prisoner's dilemma (IPD); Axelrod's work is particularly notable because it popularized tit-for-tat, an effective yet simple strategy in which the agent cooperates on the first move and thereafter mirrors the opponent's most recent action.
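
As a concrete illustration (not taken from the thesis), here is a minimal Python sketch of the tit-for-tat rule, assuming actions are encoded as the strings "C" (cooperate) and "D" (defect):

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's last action."""
    if not opponent_history:
        return "C"
    return opponent_history[-1]

# Usage: the opponent defected last round, so tit-for-tat retaliates once.
assert tit_for_tat([]) == "C"
assert tit_for_tat(["C", "D"]) == "D"
```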

๋งŽ์€ MARL ์—ฐ๊ตฌ๋Š” agent ์Šค์Šค๋กœ ํ•™์Šตํ•ด ์ˆ˜๋ ดํ•˜๊ณ , ์ˆœ์ฐจ์ ์ธ general sum game์—์„œ ํ•ฉ๋ฆฌ์„ฑ์„ ์–ป๋Š” ๊ฒƒ์„ ๋ชฉํ‘œ๋ฅผ ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ์—ฐ๊ตฌ์—๋Š” WoLF algorithm, joint-action-learner(JAL)๊ณผ AWESOME์ด ์žˆ์Šต๋‹ˆ๋‹ค. LOLA์™€๋Š” ๋‹ค๋ฅด๊ฒŒ ์ด๋Ÿฐ algorithms์€ ์ฃผ์–ด์ง„ ์ œ์•ฝ์กฐ๊ฑด๋“ค์— ๋Œ€ํ•ด ์ˆ˜๋ ดํ•˜๋Š” ํ–‰๋™์„ ์ž˜ ์ดํ•ดํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์ด๋Ÿฐ algorithm์€ ์ „์ฒด์ ์œผ๋กœ ๋” ๋†’์€ reward์— ์ˆ˜๋ ดํ•˜๊ธฐ ์œ„ํ•ด์„œ opponent์˜ ํ•™์Šตํ•˜๋Š” ํ–‰๋™์— ๋Œ€ํ•ด ์•Œ์•„๋‚ด๋Š” ๋Šฅ๋ ฅ์ด ์—†์Šต๋‹ˆ๋‹ค. WoLF๋Š” agent๊ฐ€ ์ด๊ธฐ๊ณ  ์ง€๋Š” ๊ฒƒ์— learning rate๋ฅผ ๋‹ค๋ฅด๊ฒŒ ํ•˜์—ฌ ํ•™์Šต์„ ์ง„ํ–‰ํ•ฉ๋‹ˆ๋‹ค. AWESOME์€ iterated game์˜ ์ผ๋ถ€๋ถ„์ธ ํ•œ๋ฒˆ์— ๋๋‚˜๋Š” game์— ๋Œ€ํ•ด ๋ฐฐ์šฐ๊ธฐ ์œ„ํ•˜๋Š” ๊ฒƒ์— ๋ชฉํ‘œ๋ฅผ ๋‘ก๋‹ˆ๋‹ค. general-sum์ƒํ™ฉ์—์„œ JAL์˜ dynamics๋ฅผ ๋ถ„์„ํ•˜๊ธฐ ์œ„ํ•œ ์—ฐ๊ตฌ๋“ค๋กœ Uther์˜ zero-sum ์ƒํ™ฉ์—์„œ์˜ ์—ฐ๊ตฌ์™€ Claus์˜ cooperative ์ƒํ™ฉ์—์„œ์˜ ์—ฐ๊ตฌ๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. Sandholm์€ IPD์—์„œ ๋‹ค์–‘ํ•œ exploration์ „๋žต์„ ๊ฐ€์ง€๊ณ  function approximator๋ฅผ ๊ฐ€์ง„ IQL์— ๋Œ€ํ•ด ์—ฐ๊ตฌํ•˜์˜€์Šต๋‹ˆ๋‹ค. Wunder์™€ Zinkevich๋Š” iterated game์—์„œ dynamics์˜ ์ˆ˜๋ ด๊ณผ ํ•™์Šต์˜ ํ‰ํ˜•์ƒํƒœ์— ๋Œ€ํ•ด ์—ฐ๊ตฌํ–ˆ์œผ๋‚˜ LOLA์™€ ๋‹ค๋ฅด๊ฒŒ ํ•™์Šตํ•˜๋Š” ์ „๋žต์— ๋Œ€ํ•ด ์ œ์‹œํ•˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค.

Littman tackles this setting by assuming that each opponent is either fully cooperative or fully adversarial, whereas LOLA only assumes that the game is general-sum.

Chakraborty maintains a set of policies and learns best responses to them, whereas LOLA operates with a single policy.

Brafman introduces the concept of an efficient learning equilibrium (ELE); however, that algorithm requires all Nash equilibria to be computable. LOLA needs no such assumption.

In deep MARL, most research has focused on fully cooperative or zero-sum environments, where performance is relatively easy to evaluate, and on settings that require communication. However, Leibo studied naive learning in partially observable general-sum settings, and Lowe proposed a centralized actor-critic architecture for general-sum settings. Neither work offers a way to reason about the learning behavior of the other agents. Lanctot generalized the ideas behind game-theoretic best-response-style algorithms such as NFSP; this approach requires access to a given set of opponent policies, which LOLA does not.

The work most closely related to LOLA is Lerer's, which generalizes tit-for-tat using deep MARL. There, both agents learn a fully cooperative policy and a defecting policy, and a tit-for-tat strategy is constructed by switching between the two. Similarly, Munoz proposed a Nash-equilibrium algorithm for repeated stochastic games that alternates between competitive and cooperative behavior in order to find an egalitarian equilibrium. In the same spirit, M-Qubed balances best-response, cautious, and optimistic learning biases. In all of these algorithms, reciprocity and cooperation do not emerge from the learning itself but are built in heuristically, which greatly limits their generality.

Work related to opponent modeling includes fictitious play and action-sequence prediction. Mealing proposed a method that uses memory to predict the opponent's future actions and derives a policy from those predictions. In addition, Hernandez-Leal directly models the distribution over the opponent's actions. While these methods focus on modeling the opponent's strategy and finding a best-response policy, they stop short of learning the dynamics of the opponent's learning process.

๋ฐ˜๋ฉด์— Zhang์˜ ์—ฐ๊ตฌ์—์„œ๋Š” one-step learning dynamics์— ๋Œ€ํ•œ policy prediction์„ ์‚ฌ์šฉํ•˜๋Š”๋ฐ, ์ด๋Š” opponent์˜ policy update๊ฐ€ ์ฃผ์–ด์ง„๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๊ณ , ๊ทธ์— ๋งž๋Š” ์ตœ์ ์— ์„ ํƒ์„ ๋ฐฐ์›๋‹ˆ๋‹ค. LOLA๋Š” ์ด์™€ ๋‹ค๋ฅด๊ฒŒ ์ง์ ‘์ ์œผ๋กœ opponent์˜ policy์˜ ํ•™์Šต์„ ๋“œ๋Ÿฌ๋‚ด๊ณ , ์ž์‹ ์˜ reward๋ฅผ ์ตœ์ ํ™”ํ•  ๋•Œ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค. LOLA์—์„œ ์œ ์ผํ•˜๊ฒŒ ์‚ฌ์šฉ๋œ opponent์˜ learning step์„ ๋ฏธ๋ถ„ํ•˜๋Š” ๊ฒƒ์€ ์ด๋Ÿฌํ•œ ์ƒํ˜ธ ํ˜‘๋ ฅ ํ˜น์€ tit-for-tat์˜ ๋“ฑ์žฅ์— ์ค‘์š”ํ•œ ์—ญํ• ์„ ํ•ฉ๋‹ˆ๋‹ค. ์ด๋Š” DMARL์—์„œ ์ตœ์ดˆ๋กœ ์‹œ๋„ํ•˜๋Š” ๋ฐฉ๋ฒ•์ž…๋‹ˆ๋‹ค.

LOLA differentiates through the other agent's policy update. This is similar to the idea Metz proposed for training GANs, and the overall effect is comparable: differentiating through the other player's learning process stabilizes training of what is otherwise a zero-sum game.
