6.6.2 Switch Riddle


The first experiment draws its inspiration from the following riddle.

" 100๋ช…์˜ ์ฃ„์ˆ˜๊ฐ€ ์ƒˆ๋กœ ๊ฐ์˜ฅ์œผ๋กœ ๋“ค์–ด์™”๋Š”๋ฐ, ์†Œ์žฅ์ด ๊ทธ๋“ค์—๊ฒŒ ๋‚ด์ผ๋ถ€ํ„ฐ ๋‹ค ๋…๋ฐฉ์— ๋“ค์–ด๊ฐˆ ๊ฒƒ์ด๊ณ , ์„œ๋กœ communication์€ ๋ถˆ๊ฐ€๋Šฅํ•  ๊ฒƒ์ด๋ผ๊ณ  ์•Œ๋ฆฝ๋‹ˆ๋‹ค. ์†Œ์žฅ์€ ๊ฐ ๋‚  ์ฃ„์ˆ˜๋ฅผ ๋žœ๋ค์œผ๋กœ ์ค‘์•™ ์‹ฌ๋ฌธ์‹ค๋กœ ๋ถ€๋ฅผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋•Œ ๋ฐฉ์—๋Š” ์ „๊ตฌ์™€ ์Šค์œ„์น˜๋งŒ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฃ„์ˆ˜๋Š” ํ˜„์žฌ ์ „๊ตฌ์˜ ์ƒํƒœ๋ฅผ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งŒ์•ฝ ๊ทธ๊ฐ€ ์›ํ•˜๋ฉด ๋ถˆ์„ ์ผœ๊ฑฐ๋‚˜ ๋Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ฐ€๋งŒํžˆ๋„ ์žˆ์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๋•Œ ์ฃ„์ˆ˜๋“ค์€ ์–ด๋А๋‚  ๋ชจ๋“  ์ฃ„์ˆ˜๋“ค์ด ์ด ๋ฐฉ์— ํ•œ ๋ฒˆ์”ฉ์€ ๋“ค์–ด์™”๋Š”์ง€ ์•Œ์•„๋‚ด์•ผํ•ฉ๋‹ˆ๋‹ค. ๋งŒ์•ฝ ๋งž์ถ˜๋‹ค๋ฉด, ๋ชจ๋“  ์ฃ„์ˆ˜๋Š” ํ’€์–ด์ง€์ง€๋งŒ, ์•„๋‹ˆ๋ฉด ๋ชจ๋‘ ์‚ฌํ˜•์ž…๋‹ˆ๋‹ค. ๋‚ด์ผ ๊นŒ์ง€ ์ฃ„์ˆ˜๋“ค๋ผ๋ฆฌ ๊ทธ๋“ค์ด ์–ด๋–ค ๋ฃฐ์„ ์„ธ์›Œ์•ผ ๊ทธ๋“ค์ด ์™„์ „ํžˆ ๋ชจ๋‘ ๋‹ค ์‹ฌ๋ฌธ์‹ค์— ๋“ค์–ด๊ฐ”์Œ์„ ์•Œ ์ˆ˜ ์žˆ์„๊นŒ"์— ๋Œ€ํ•œ ๋ฌธ์ œ ์ž…๋‹ˆ๋‹ค.

Architecture

To tackle this riddle, the paper formalizes the problem as follows. At each time-step $t$, the agent $a$ who is in the interrogation room observes the bulb state $o^a_t \in \{0,1\}$ and can leave behind a message $m^a_t$ (the switch setting). As the action that affects the environment, the agent chooses $u^a_t \in \{\mathrm{None},\mathrm{Tell}\}$. The reward is 0 until someone Tells; a correct Tell yields +1 and an incorrect one yields -1. The maximum episode length is set to $4n-6$ time-steps, since the problem becomes too easy if episodes are allowed to run too long.
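
To make the formalisation concrete, here is a minimal Python sketch of the switch-riddle environment. The class and method names (`SwitchRiddleEnv`, `step`, `reset`) and the timeout handling are illustrative assumptions rather than the authors' implementation; the sketch only encodes the observation, message, action, and reward structure described above, with the switch doubling as the message channel.

```python
import random


class SwitchRiddleEnv:
    """Minimal sketch of the switch riddle with n agents (illustrative, not the authors' code)."""

    def __init__(self, n_agents=3):
        self.n_agents = n_agents
        self.max_steps = 4 * n_agents - 6  # episode limit from the text
        self.reset()

    def reset(self):
        self.bulb_on = 0        # switch state = the only message channel
        self.visited = set()    # prisoners who have been in the room
        self.t = 0
        self._next_prisoner()
        return self._obs()

    def _next_prisoner(self):
        self.in_room = random.randrange(self.n_agents)
        self.visited.add(self.in_room)

    def _obs(self):
        # Only the agent currently in the room observes the bulb: o^a_t in {0, 1}.
        return {a: (self.bulb_on if a == self.in_room else None)
                for a in range(self.n_agents)}

    def step(self, action, message):
        """`action` in {"None", "Tell"} and `message` in {0, 1},
        chosen by the agent currently in the interrogation room."""
        self.bulb_on = message  # leave the switch in the chosen state
        if action == "Tell":
            correct = len(self.visited) == self.n_agents
            return self._obs(), (1 if correct else -1), True
        self.t += 1
        if self.t >= self.max_steps:   # timeout: reward stays 0
            return self._obs(), 0, True
        self._next_prisoner()
        return self._obs(), 0, False
```

For `n_agents = 3` the episode limit is $4 \cdot 3 - 6 = 6$ steps, matching the $4n-6$ rule above.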

Complexity

This problem poses a real challenge for protocol learning. At time-step $t$ an agent has $|o|^t$ possible observation histories, where $|o| = 3$: the agent is not in the interrogation room, it is in the room and sees the bulb off, or it is in the room and sees the bulb on. For each history there are $|U||M| = 4$ possible combinations of action and message, so the single-agent policy space at time-step $t$ is $(|U||M|)^{|o|^t} = 4^{3^t}$ (the policy maps histories to actions, so it makes one of the $|U||M|$ choices for each of the $|o|^t$ histories). Taking all time-steps up to the maximum $T$ into account, the policy space is:

โˆ43t=4(3T+1โˆ’3)/2\prod{4^{3^t}} = 4^{(3^{T+1}-3)/2}โˆ43t=4(3T+1โˆ’3)/2

์ง€์ˆ˜ ๊ณฑ์˜ ํŠน์„ฑ์— ์˜ํ•ด ์ง€์ˆ˜๊ฐ€ ๋“ฑ๋น„ ์ˆ˜์—ด์˜ ํ•ฉ ํ˜•ํƒœ๋กœ ๋‚˜ํƒ€๋‚œ ๋ชจ์Šต์ž…๋‹ˆ๋‹ค.์ด๊ฒŒ ํ•œ agent์˜ action space์ธ๋ฐ, agent๊ฐ€ ๋‹ค์ˆ˜๋ผ๋ฉด, ์ด๋ฅผ ๊ทธ๋Œ€๋กœ ๊ณฑํ•˜๋Š” ๊ฒƒ๊ณผ ๊ฐ™์œผ๋ฏ€๋กœ ์ง€์ˆ˜์ ์œผ๋กœ ์ฆ๊ฐ€ํ•˜๋Š”๋ฐ, ์ด ๋ณต์žก๋„๋ฅผ Big-O๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ‘œํ˜„๋ฉด ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค.

$$4^{\,n\,3^{O(n)}}$$
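
One way to spell out the intermediate steps behind these two expressions, using the episode limit $T = 4n - 6 = O(n)$ stated above, is:

$$
\prod_{t=1}^{T} 4^{3^{t}} \;=\; 4^{\sum_{t=1}^{T} 3^{t}} \;=\; 4^{\frac{3^{T+1}-3}{2}},
\qquad
\left(4^{\frac{3^{T+1}-3}{2}}\right)^{\!n} \;=\; 4^{\,n\,\frac{3^{T+1}-3}{2}} \;=\; 4^{\,n\,3^{O(n)}} .
$$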

Experimental Result

์œ„ ๊ทธ๋ฆผ์˜ ๊ทธ๋ฆผ (a)๋Š” agent๊ฐ€ 3๋ช…์ด์—ˆ์„ ๋•Œ๋ฅผ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. 5k step ์ดํ›„ ๋ชจ๋“  method๊ฐ€ optimal policy๋ฅผ ์ฐพ์•˜์Šต๋‹ˆ๋‹ค. ์ด๋•Œ, parameter sharingํ•˜๋Š” DIAL์ด RIAL๋ณด๋‹ค ๋น ๋ฅด๊ฒŒ optimal์— ๋„๋‹ฌํ•˜๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋•Œ, ๋‘˜๋‹ค parameter sharing์ด ๋ชจ๋‘ ์†๋„๋ฅผ ์ฆ๊ฐ€์‹œํ‚ด์„ ์•Œ ์ˆ˜ ์žˆ์—ˆ์Šต๋‹ˆ๋‹ค. (b)์—์„œ๋„ DIAL์ด ๋น ๋ฅด๊ฒŒ ์ˆ˜๋ ดํ•จ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ , RIAL์— parameter sharing์ด ์—†๋Š” ๊ฒƒ์€ ํ•™์Šตํ•˜์ง€ ์•Š์€ ๊ฒฐ๊ณผ์™€ ๋น„์Šทํ•จ์„ ๋ณด์•„ ํ•™์Šต์ด ์ด๋ฃจ์–ด์ง€์ง€์•Š๊ณ  ์žˆ์Œ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ฆฌ๊ณ  parameter sharing์ด communication์„ ํ•  ๋•Œ ๊ต‰์žฅํžˆ ํฐ ์—ญํ• ์„ ํ–ˆ๋Š”๋ฐ, ์ด๋Š” ๋ณด๋‚ด๊ณ  ๋ฐ›์•„๋“ค์ด๋Š” channel์˜ ์ •๋ณด๊ฐ€ ๋น„์Šทํ•ด์•ผ ํ•™์Šต์ด ์ž˜ ์ด๋ฃจ์–ด์ง„๋‹ค๊ณ  ์ถ”์ธกํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ agent๊ฐ€ 3๋ช…์ผ๋•Œ, DIAL์— ๋Œ€ํ•ด ๋ถ„์„ํ•˜๋Š”๋ฐ, ๊ทธ๋ฆผ (c)๋ฅผ ๋ณด๋ฉด, optimal strategy๋ฅผ ์ฐพ์•„๋‚ด์—ˆ์Šต๋‹ˆ๋‹ค.