3.2 Related Work

Looking at earlier MARL research, the first experiments were run in very simple environments; the introduction of IQL, which we saw earlier, and its application to two-player Pong then laid much of the groundwork for deep MARL.

A need for communication between agents was also recognized, and work followed along two main lines: passing gradients between agents and sharing parameters across agents. These approaches are limited, however, in that they do not use additional state information during training (for example, a centralized critic conditioned on the global state) and do not address the credit assignment problem.
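
To make the parameter-sharing idea concrete, here is a minimal PyTorch sketch of my own (not code from any of the cited papers): every agent queries one shared network, and a one-hot agent ID keeps their behaviour distinguishable while every agent's gradient updates the same weights.

```python
import torch
import torch.nn as nn

# Minimal sketch (my illustration, not a method from the thesis): one policy
# network shared by all agents. Appending a one-hot agent ID to the observation
# lets the shared weights still produce agent-specific behaviour.
class SharedPolicy(nn.Module):
    def __init__(self, obs_dim, n_agents, n_actions, hidden=64):
        super().__init__()
        self.n_agents = n_agents
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_agents, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs, agent_id):
        one_hot = nn.functional.one_hot(agent_id, self.n_agents).float()
        return self.net(torch.cat([obs, one_hot], dim=-1))   # action logits

policy = SharedPolicy(obs_dim=8, n_agents=3, n_actions=5)
obs, ids = torch.randn(3, 8), torch.arange(3)
logits = policy(obs, ids)   # gradients from every agent flow into the same weights
```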

In addition, Gupta, Egorov, and Kochenderfer applied centralized training with decentralized execution to an actor-critic method, but every agent's critic conditions only on its local observation, and the credit assignment problem is handled only by hand-crafting local rewards, which can be seen as a limitation.
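
As a rough illustration of that limitation (my own sketch with made-up dimensions, not Gupta et al.'s architecture), a purely local critic looks like this:

```python
import torch
import torch.nn as nn

# Minimal sketch (my illustration, not Gupta et al.'s code): each agent's critic
# conditions only on its own local observation, so credit assignment has to come
# from a hand-shaped local reward r_i rather than the shared team reward.
obs_dim = 8                                   # hypothetical local observation size
local_critic = nn.Linear(obs_dim, 1)          # V_i(o_i): local observation only

o_i, r_i = torch.randn(obs_dim), torch.tensor(1.0)
critic_loss = (r_i - local_critic(o_i)).pow(2).mean()
# A centralized critic would instead condition on the global state
# (and, in COMA, on the joint action of all agents).
```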

RL์˜ starcraft micromanagement ์ ์šฉ์€ ์ฃผ๋กœ multi agent์— ๋Œ€ํ•œ architectureํŠน์„ฑ์€ ์‚ฌ์šฉํ•˜๋ฉด์„œ๋„ centralized controller์™€ full state๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์—ฐ๊ตฌ๋“ค์ด ์ง„ํ–‰๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์—์„œ๋Š” greedy MDP๋ฅผ ์‚ฌ์šฉํ–ˆ๋Š”๋ฐ ์ด๋Š” ๊ฐ timestep์—์„œ ๋‹ค๋ฅธ agent๋“ค์˜ ์ด์ „์˜ action๋“ค์ด ๋ชจ๋‘ ์ฃผ์–ด์ง„์ƒํƒœ์—์„œ action์„ ์„ ํƒํ•˜๋Š” ๋ฐฉ์‹์œผ๋กœ ์ด๋ฃจ์–ด์ง‘๋‹ˆ๋‹ค. ์ด๋Š” ๋…ผ๋ฌธ์˜ Zero-order (ZO) backpropagation algorithm์„ ๋ณด๋ฉด ์ดํ•ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์—์„œ๋Š” RNN์„ ํ†ตํ•ด agent๊ฐ„์˜ ์ •๋ณด ๊ต๋ฅ˜๊ฐ€ ์ผ์–ด๋‚˜๋„๋ก ์„ค๊ณ„ํ•˜์˜€์Šต๋‹ˆ๋‹ค. ์ด๋•Œ Usunier์˜ ์—ฐ๊ตฌ์—์„œ ์—ฌ๊ธฐ์„œ ์“ฐ์ธ ๋น„์Šทํ•œ ์‹คํ—˜์ •์˜๋ฅผ ํ•˜์˜€์œผ๋ฉฐ, DQN baseline์„ ๋งŒ๋“ค์—ˆ์Šต๋‹ˆ๋‹ค. Omidshafiei์˜ ์—ฐ๊ตฌ์—์„œ๋Š” decentralized training์ค‘์˜ experience replay ์•ˆ์ •์„ฑ์„ ํ•ด๊ฒฐํ•˜์˜€์Šต๋‹ˆ๋‹ค.

The works of Rashid and Sunehag propose centralized critics for the individual agents, and Lowe's work proposes a centralized critic (the text calls it a single critic, but MADDPG itself maintains multiple Q networks, one per agent) and uses it to train decentralized actors. This is similar in spirit to COMA, and in fact that work was carried out almost concurrently with the ideas presented here. However, it makes no attempt to solve the credit assignment problem.
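
The sketch below illustrates the general centralized-critic / decentralized-actor pattern (my own simplification, not MADDPG or COMA themselves): the critic sees the global state and the joint action during training, each actor only sees its own observation, and all actors receive the same team-level learning signal, which is exactly where the credit assignment problem remains open.

```python
import torch
import torch.nn as nn

# Minimal sketch (my simplification): a centralized critic scores the global
# state plus the joint action during training, while each decentralized actor
# only ever sees its own observation.
n_agents, obs_dim, state_dim, n_actions = 2, 6, 10, 4

actors = nn.ModuleList([nn.Linear(obs_dim, n_actions) for _ in range(n_agents)])
critic = nn.Linear(state_dim + n_agents * n_actions, 1)   # Q(s, a_1, ..., a_n)

obs, state = torch.randn(n_agents, obs_dim), torch.randn(state_dim)
action_probs = [torch.softmax(actor(o), dim=-1) for actor, o in zip(actors, obs)]
q = critic(torch.cat([state, *action_probs]))             # relaxed joint action

# Every actor is pushed to increase the same team-level Q estimate, so no agent
# learns how much its own action contributed: the credit assignment gap that
# COMA's counterfactual baseline targets.
(-q).mean().backward()
```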

Usunier's work
Peng's work
Lowe's work