Deep Multi-Agent Reinforcement Learning
  • Deep Multi-Agent Reinforcement Learning
  • Abstract & Contents
    • Abstract
  • 1. Introduction
    • 1. INTRODUCTION
      • 1.1 The Industrial Revolution, Cognition, and Computers
      • 1.2 Deep Multi-Agent Reinforcement Learning
      • 1.3 Overall Structure
  • 2. Background
    • 2. BACKGROUND
      • 2.1 Reinforcement Learning
      • 2.2 Multi-Agent Settings
      • 2.3 Centralized vs Decentralized Control
      • 2.4 Cooperative, Zero-Sum, and General-Sum
      • 2.5 Partial Observability
      • 2.6 Centralized Training, Decentralized Execution
      • 2.7 Value Functions
      • 2.8 Nash Equilibria
      • 2.9 Deep Learning for MARL
      • 2.10 Q-Learning and DQN
      • 2.11 REINFORCE and Actor-Critic
  • I Learning to Collaborate
    • 3. Counterfactual Multi-Agent Policy Gradients
      • 3.1 Introduction
      • 3.2 Related Work
      • 3.3 Multi-Agent StarCraft Micromanagement
      • 3.4 Methods
        • 3.4.1 Independent Actor-Critic
        • 3.4.2 Counterfactual Multi-Agent Policy Gradients
        • 3.4.2.1 Baseline Lemma
        • 3.4.2.2 COMA Algorithm
      • 3.5 Results
      • 3.6 Conclusions & Future Work
    • 4. Multi-Agent Common Knowledge Reinforcement Learning
      • 4.1 Introduction
      • 4.2 Related Work
      • 4.3 Dec-POMDP and Features
      • 4.4 Common Knowledge
      • 4.5 Multi-Agent Common Knowledge Reinforcement Learning
      • 4.6 Pairwise MACKRL
      • 4.7 Experiments and Results
      • 4.8 Conclusion & Future Work
    • 5. Stabilizing Experience Replay
      • 5.1 Introduction
      • 5.2 Related Work
      • 5.3 Methods
        • 5.3.1 Multi-Agent Importance Sampling
        • 5.3.2 Multi-Agent Fingerprints
      • 5.4 Experiments
        • 5.4.1 Architecture
      • 5.5 Results
        • 5.5.1 Importance Sampling
        • 5.5.2 Fingerprints
        • 5.5.3 Informative Trajectories
      • 5.6 Conclusion & Future Work
  • II Learning to Communicate
    • 6. Learning to Communicate with Deep Multi-Agent Reinforcement Learning
      • 6.1 Introduction
      • 6.2 Related Work
      • 6.3 Setting
      • 6.4 Methods
        • 6.4.1 Reinforced Inter-Agent Learning
        • 6.4.2 Differentiable Inter-Agent Learning
      • 6.5 DIAL Details
      • 6.6 Experiments
        • 6.6.1 Model Architecture
        • 6.6.2 Switch Riddle
        • 6.6.3 MNIST Games
        • 6.6.4 Effect of Channel Noise
      • 6.7 Conclusion & Future Work
    • 7. Bayesian Action Decoder
      • 7.1 Introduction
      • 7.2 Setting
      • 7.3 Method
        • 7.3.1 Public belief
        • 7.3.2 Public Belief MDP
        • 7.3.3 Sampling Deterministic Partial Policies
        • 7.3.4 Factorized Belief Updates
        • 7.3.5 Self-Consistent Beliefs
      • 7.4 Experiments and Results
        • 7.4.1 Matrix Game
        • 7.4.2 Hanabi
        • 7.4.3 Observations and Actions
        • 7.4.4 Beliefs in Hanabi
        • 7.4.5 Architecture Details for Baselines and Method
        • 7.4.6 Hyperparameters
        • 7.4.7 Results on Hanabi
      • 7.5 Related Work
        • 7.5.1 Learning to Communicate
        • 7.5.2 Research on Hanabi
        • 7.5.3 Belief State Methods
      • 7.6 Conclusion & Future Work
  • III Learning to Reciprocate
    • 8. Learning with Opponent-Learning Awareness
      • 8.1 Introduction
      • 8.2 Related Work
      • 8.3 Methods
        • 8.3.1 Naive Learner
        • 8.3.2 Learning with Opponent Learning Awareness
        • 8.3.3 Learning via Policy Gradient
        • 8.3.4 LOLA with Opponent Modeling
        • 8.3.5 Higher-Order LOLA
      • 8.4 Experimental Setup
        • 8.4.1 Iterated Games
        • 8.4.2 Coin Game
        • 8.4.3 Training Details
      • 8.5 Results
        • 8.5.1 Iterated Games
        • 8.5.2 Coin Game
        • 8.5.3 Exploitability of LOLA
      • 8.6 Conclusion & Future Work
    • 9. DiCE: The Infinitely Differentiable Monte Carlo Estimator
      • 9.1 Introduction
      • 9.2 Background
        • 9.2.1 Stochastic Computation Graphs
        • 9.2.2 Surrogate Losses
      • 9.3 Higher Order Gradients
        • 9.3.1 Higher Order Gradient Estimators
        • 9.3.2 Higher Order Surrogate Losses
        • 9.3.3 Simple Failing Example
      • 9.4 Correct Gradient Estimators with DiCE
        • 9.4.1 Implementation of DiCE
        • 9.4.2 Causality
        • 9.4.3 First Order Variance Reduction
        • 9.4.4 Hessian-Vector Product
      • 9.5 Case Studies
        • 9.5.1 Empirical Verification
        • 9.5.2 DiCE for Multi-Agent RL
      • 9.6 Related Work
      • 9.7 Conclusion & Future Work
  • Reference
    • Reference
  • After
    • Supplement
    • Translator's Afterword

  1. Abstract & Contents

Abstract

Many real-world problems, such as drone control and freight transport, are multi-agent problems in partially observable settings (POMDP: Partially Observable Markov Decision Process). Moreover, as more machine-learning systems are deployed in the real world, agents increasingly affect one another, and it is becoming ever more important to formulate these problems as multi-agent problems. This translation mainly covers Deep Multi-Agent Reinforcement Learning (DMARL) methods for solving problems in the settings described below.

The problems covered here are framed as learning to collaborate, learning to communicate, and learning to reciprocate. A technique common to all of them is centralized training with decentralized execution: during training, a critic that can see the full state helps the agents learn correctly, while the resulting policies are trained so that each agent can understand and solve the task from its own actions and local observations alone. For example, during training the agents may be given additional state information from the simulator beyond their own observations, or be helped to communicate with one another. This is one of the best ways to improve agent performance while remaining applicable in many settings, and it is a technique used by many current MARL methods.
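
As a rough illustration of centralized training with decentralized execution, the sketch below (plain PyTorch; module names and sizes are hypothetical, not taken from the thesis) shows a critic that conditions on the global state and joint action during training, while each actor conditions only on its own local observation and is all that is needed at execution time.

```python
import torch
import torch.nn as nn

class CentralisedCritic(nn.Module):
    """Training only: sees the global state and the joint action."""
    def __init__(self, state_dim, joint_action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + joint_action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, joint_action):
        return self.net(torch.cat([state, joint_action], dim=-1))

class DecentralisedActor(nn.Module):
    """Execution: conditions only on the agent's local observation."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        # Returns an action distribution built from local information only.
        return torch.distributions.Categorical(logits=self.net(obs))
```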

Chapter 3 describes problems in which collaborating agents must achieve a common objective. One difficulty here is determining which agent's action directly affected the reward (multi-agent credit assignment). Because every agent's actions influence the reward within an episode, it is hard to evaluate a single agent's action in isolation. To address this, Counterfactual Multi-Agent Policy Gradients (COMA) is proposed. COMA evaluates the effect of each agent's action on the team through a counterfactual baseline.
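
Concretely, the counterfactual baseline marginalizes out a single agent's action while keeping the other agents' actions fixed. Using the notation of the COMA chapter (centralized critic Q, joint action u, agent a's action-observation history τ^a), the advantage used in the policy gradient is:

```latex
A^a(s, \mathbf{u}) \;=\; Q(s, \mathbf{u}) \;-\; \sum_{u'^a} \pi^a\!\left(u'^a \mid \tau^a\right) Q\!\left(s, \left(\mathbf{u}^{-a}, u'^a\right)\right)
```

Each agent is thus credited only for how much its chosen action improved on what it would have achieved on average, with everyone else's actions held fixed.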

Chapter 4 formalizes the importance of common knowledge among agents as follows. Multi-Agent Common Knowledge Reinforcement Learning (MACKRL) uses hierarchical controllers over subgroups of agents that share the same common knowledge. The reason for this hierarchy is that a controller can either select an action in the group's joint action space or delegate to a subgroup that has more common knowledge.
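
A minimal sketch of the pairwise idea, assuming a pair controller that either selects a joint action using only the pair's common knowledge or delegates to the two individual controllers (all function names here are illustrative stubs, not the thesis implementation):

```python
import random

DELEGATE = "delegate"  # special output meaning "let the agents act individually"

def pair_controller(common_knowledge):
    """Learned in MACKRL; here a stub. Conditions only on common knowledge."""
    return random.choice([DELEGATE, ("attack", "attack"), ("move", "attack")])

def individual_controller(local_obs):
    """Per-agent policy conditioned on that agent's local observation (stub)."""
    return random.choice(["move", "attack"])

def act(pair_common_knowledge, local_obs_a, local_obs_b):
    choice = pair_controller(pair_common_knowledge)
    if choice == DELEGATE:
        # Not enough common knowledge to coordinate centrally:
        # each agent decides independently from its own observation.
        return (individual_controller(local_obs_a),
                individual_controller(local_obs_b))
    return choice  # coordinated joint action chosen from common knowledge alone
```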

Chapter 5 explains that in MARL, the actions each agent takes make the environment non-stationary from the other agents' perspective, which makes it hard to learn from a replay buffer as-is, and then shows how the replay buffer can still be used.
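
One of the fixes discussed in Chapter 5 is a fingerprint: every transition stored in the replay buffer is tagged with quantities that indicate where the other agents were in their learning process, such as the training iteration and the exploration rate ε, so the Q-network can condition on the "age" of the experience. A minimal sketch, assuming observations are tuples (field names are illustrative):

```python
from collections import deque

replay_buffer = deque(maxlen=100_000)

def store_transition(obs, action, reward, next_obs, train_iter, epsilon):
    # The fingerprint disambiguates which stage of training this sample came from.
    fingerprint = (float(train_iter), float(epsilon))
    replay_buffer.append(
        (obs + fingerprint, action, reward, next_obs + fingerprint)
    )
```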

Part I (Chapters 3-5) assumed that the agents act in a decentralized way without communicating with one another. Part II (Chapters 6-7) presents three different ways in which agents can learn a communication protocol.

The first is Reinforced Inter-Agent Learning (RIAL), in which communication takes place through messages exchanged between agents that do not affect the environment.

The second is Differentiable Inter-Agent Learning (DIAL). It also uses messages, but DIAL feeds the messages into the optimization by passing gradients through them, so the communication protocol can be learned more precisely than with RIAL.
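
During centralized training DIAL keeps the message continuous and adds noise so that gradients can flow back through the communication channel, and only discretizes the message at execution time (the discretise/regularise unit). A rough sketch, with the noise scale chosen arbitrarily:

```python
import torch

def dru(message, sigma=2.0, training=True):
    """Discretise/Regularise Unit: continuous and noisy while training,
    hard-discretized at execution time."""
    if training:
        return torch.sigmoid(message + sigma * torch.randn_like(message))
    return (message > 0).float()
```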

The third is the Bayesian Action Decoder (BAD). Here the actions themselves, which do affect the environment, are used as the means of communication, and the chapter explains how the incomplete information observed by each agent can be exploited.
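
The core object in BAD is a public belief over the private features f, updated by Bayes' rule after every action: since the (partial) policy used to select the action is itself common knowledge, observing the action a_t reveals information about what the acting agent privately saw. Schematically:

```latex
\mathcal{B}_{t+1}(f) \;\propto\; P\!\left(a_t \mid f, \mathcal{B}_t\right)\, \mathcal{B}_t(f)
```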

In Part I and Part II, all agents optimized a team reward. Part III covers the general-sum case (where win-win and lose-lose outcomes are both possible) and presents a method to handle it, Learning with Opponent-Learning Awareness (LOLA). In LOLA, an agent accounts for the change in its opponent's policy in its own optimization objective. Rather than the defect-defect equilibrium of the prisoner's dilemma, LOLA gives rise to tit-for-tat strategies. LOLA agents interact effectively and focus on obtaining high overall reward.
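
In symbols, instead of following the naive gradient of its own return, a LOLA agent differentiates its return after one anticipated learning step of the opponent; a first-order Taylor expansion yields the extra opponent-shaping term (sketched here, with η the opponent's learning rate):

```latex
V^1\!\left(\theta^1,\, \theta^2 + \Delta\theta^2\right), \qquad
\Delta\theta^2 = \eta\, \nabla_{\theta^2} V^2(\theta^1, \theta^2)
\;\;\Longrightarrow\;\;
\nabla_{\theta^1} V^1 \;+\; \eta\, \big(\nabla_{\theta^2} V^1\big)^{\!\top} \nabla_{\theta^1} \nabla_{\theta^2} V^2
```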

Because LOLA has to differentiate through an approximation of the opponent's policy update, higher-order gradients arise. To estimate these more accurately, the Infinitely Differentiable Monte Carlo Estimator (DiCE) is introduced: a method for estimating accurate higher-order gradients that was shown to improve performance when applied to LOLA.
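
The key ingredient of DiCE is the "magic box" operator, which evaluates to 1 in the forward pass but reproduces the correct score-function terms under repeated differentiation. In an autodiff framework it is essentially a one-liner; a minimal PyTorch sketch:

```python
import torch

def magic_box(log_probs):
    """exp(sum(log p) - stop_gradient(sum(log p))): the forward value is exactly 1,
    but its derivatives of every order are the correct score-function terms."""
    s = log_probs.sum()
    return torch.exp(s - s.detach())

# Usage sketch: multiply each cost term by magic_box of the log-probabilities of the
# stochastic nodes it depends on, then differentiate the resulting objective as many
# times as needed to obtain unbiased higher-order gradient estimates.
```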
