8.5.1 Iterated Games

The first experiment compares the behavior of LOLA agents with that of naive-learner (NL) agents.

๋‹ค์Œ ๊ทธ๋ž˜ํ”„์˜ (a)์™€ (b)๋Š” ํ•™์Šต์ด ๋๋‚œ ํ›„ agent๋“ค์˜ policy์— ๋Œ€ํ•œ ๊ทธ๋ž˜ํ”„ ์ž…๋‹ˆ๋‹ค. (a)๋Š” NL-Ex์ด๊ณ , (b)๋Š” LOLA-Ex๋กœ ๋ชจ๋‘ {J1,J2}\{J^1,J^2\}{J1,J2}์— ๋Œ€ํ•œ hessian์— ์ ‘๊ทผ ๊ฐ€๋Šฅํ•ฉ๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„œ๋Š” ์˜ค์ง NL-Ex policy ๊ทธ๋ฃน๊ณผ NOLA-Ex๊ทธ๋ฃน ๋ผ๋ฆฌ์— ๋Œ€ํ•ด ๊ณ ๋ คํ•˜๊ณ , 8.5.3์—์„œ ์ด๋Ÿฐ ํ˜ผํ•ฉ๋œ policy agent ๊ทธ๋ฃน์—์„œ์˜ ๊ฒฐ๊ณผ๋ฅผ ๋ณด์ž…๋‹ˆ๋‹ค. ์—ฌ๊ธฐ์„  NL-Ex๊ทธ๋ฃน์€ ๋ฌด์กฐ๊ฑด defectํ•˜๋ คํ•˜๊ณ , LOLA-Ex๋Š” agent๋Š” TFT๋ฅผ ๋ฐฐ์šฐ๊ฒŒ ๋˜๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. s0s_0s0โ€‹์—์„œ cooperativeํ•˜๊ณ , agent 2๊ฐ€ ์ด์ „์— ํ˜‘๋ ฅ์ ์ด์—ˆ๋˜ CC์™€ DC์— ๊ด€ํ•ด์„œ๋Š” agent 1์ด ๋‹ค์Œ์— cooperativeํ•  ํ™•๋ฅ ์ด ์šฐ์„ธํ•œ ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

๊ฒฐ๊ณผ์ ์œผ๋กœ (c)๋ฅผ ๋ณด๋ฉด, LOLA-Ex ๊ทธ๋ฃน์˜ normalized discounted reward๋Š” -1์— ๊ฐ€๊น๊ฒŒ ์ˆ˜๋ ดํ•˜๊ณ , NL-Ex๊ทธ๋ฃน์€ -2์— ์ˆ˜๋ ดํ•˜๋Š” ๊ฒƒ์„ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

Panel (d) shows the results for LOLA-PG, the policy-gradient version, which produces nearly the same outcome.

๋‹ค์Œ์œผ๋กœ, ๋˜‘๊ฐ™์ด IMP์— ๋Œ€ํ•ด ์‹คํ—˜ํ•œ ๊ฒฐ๊ณผ์— ๋Œ€ํ•ด ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

์ด ๊ฒŒ์ž„์—์„œ๋Š” NL agent๊ฐ€ ์ˆ˜๋ ดํ•˜๋Š”๋ฐ ์•„์˜ˆ ์‹คํŒจํ•˜์˜€์Šต๋‹ˆ๋‹ค. ๋ฐ˜๋ฉด์—, LOLA์˜ agent๋Š” head์™€ tail์„ ๋ฐ˜๋ฐ˜์”ฉ ๋‚ด์•ผํ•˜๋Š” ๋‚ด์‰ฌ๊ท ํ˜•์— ์ด๋ฅด๋ €์Œ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

๋‹ค์Œ์˜ Table์„ ํ†ตํ•ด NL agent์™€ LOLA agent์˜ ์„ฑ๋Šฅ ์ฐจ์ด๋ฅผ ์ž์„ธํžˆ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

IPD์—์„œ๋„ LOLA๋Š” ๋†’์€ ๋น„์œจ๋กœ TFT์„ ์ด๋ค˜๊ณ , IMP์—์„œ๋„ ๋‚ด์‰ฌ ๊ท ํ˜•์„ ๋งž์ท„์Šต๋‹ˆ๋‹ค. ํ•™์Šต์˜ ์•ˆ์ •์„ฑ์€ variance๋กœ ๋ณผ ์ˆ˜ ์žˆ๋Š”๋ฐ, IMP๋ฅผ ๋ณผ ๋•Œ, ํฐ ์ฐจ์ด๊ฐ€ ๋‚จ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

๋‹ค์Œ์œผ๋กœ๋Š” ๋‹ค๋ฅธ Algorithm๊ณผ LOLA๋ฅผ ๋น„๊ตํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

์œ„์˜ ๊ทธ๋ฆผ์—์„œ ๋ณด์‹œ๋‹ค์‹œํ”ผ IPD์—์„œ ์ตœ๊ณ ์˜ ์ ์ˆ˜๋ฅผ ๋ฐ›์Œ์„ ๋ณผ ์ˆ˜ ์žˆ์—ˆ์Šต๋‹ˆ๋‹ค. ๋˜ IMP์—์„œ๋Š” ์ œ์ผ ์•ˆ์ •์ ์ธ ๋ชจ์Šต์„ ๋ณด์˜€์Šต๋‹ˆ๋‹ค.