9.1 Introduction

In the previous chapter we looked at Learning with Opponent-Learning Awareness (LOLA), which folds the learning behaviour of other agents into an agent's own optimization objective. To do this, the updating agent had to differentiate through the other agent's learning step, and we saw that this gives rise to higher-order gradients. The objective itself, however, is not differentiable (take Go as an example: if the reward is 1 for a win and -1 for a loss, that reward is clearly discontinuous and non-differentiable), so it is estimated by sampling, i.e. by Monte Carlo estimation. The first-order gradient can then be estimated with the score function trick ($\nabla \log(\pi)$) that is widely used in policy gradient methods, but higher-order derivatives are more involved. If higher-order derivatives could be defined just as cleanly, we could simply hand them over to an auto-diff deep learning library such as PyTorch or TensorFlow. This would be useful well beyond what LOLA needs: it applies to many other optimization techniques, can accelerate convergence, and is also useful in meta-learning.
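
To make the "hand it over to auto-diff" idea concrete, here is a minimal PyTorch sketch (not from the original text; the toy objective is purely illustrative) of how an auto-diff library produces derivatives of any order once the quantity being differentiated is itself part of the computation graph:

```python
import torch

# Toy differentiable objective f(theta) = theta ** 3, just to show the mechanics.
theta = torch.tensor(3.0, requires_grad=True)
f = theta ** 3

# create_graph=True keeps the graph of the gradient itself,
# so the gradient can be differentiated again.
(grad1,) = torch.autograd.grad(f, theta, create_graph=True)  # 3 * theta^2 = 27
(grad2,) = torch.autograd.grad(grad1, theta)                 # 6 * theta   = 18
```

The difficulty addressed in this chapter is that a Monte Carlo estimator of an RL objective is not naturally such a graph: repeated differentiation is only useful if the estimator is built so that each derivative is again a correct estimator.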

  • The standard approach to Monte Carlo estimation is the surrogate loss (SL) method, which builds a surrogate objective from a stochastic computation graph (SCG). (If these terms are new to you, it may help to read 9.2 Background first.) When the SL is differentiated, it yields an estimate of the first-order gradient of the original objective; a small sketch follows below.
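
To picture the SL construction, here is a minimal PyTorch sketch (my own illustration with made-up names, not code from the book): the sampled cost is treated as a fixed number via detach() and multiplied by the log-probability of the sample, so that a single differentiation reproduces the familiar score-function (REINFORCE) estimator.

```python
import torch

# A Bernoulli "policy" p(w; theta) over a single binary action.
theta = torch.tensor(0.0, requires_grad=True)
dist = torch.distributions.Bernoulli(probs=torch.sigmoid(theta))

w = dist.sample()          # sampled stochastic node
cost = (w - 1.0) ** 2      # a sampled cost that depends on the sample w

# Surrogate loss: the sampled cost is treated as a fixed number (detached)
# and multiplied by log p(w; theta).
surrogate = dist.log_prob(w) * cost.detach()

# One differentiation of the surrogate gives the score-function (REINFORCE)
# estimate of the first-order gradient of E[cost].
(grad1,) = torch.autograd.grad(surrogate, theta)
```

Differentiating this surrogate a second time, however, only yields the $\nabla^2 \log p$ term and drops the $(\nabla \log p)^2$ term of the true second derivative, because the detached cost no longer carries any dependence on $\theta$; this is the failure mode examined in 9.3.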

However, the way the first-order gradient is obtained above does not carry over well to auto-diff libraries when higher-order derivatives are needed. Higher-order gradient estimators must apply the same score function trick again, so the resulting gradients depend on the sampling distribution. Moreover, it has already been shown in Finn's work that naively differentiating in this way introduces incorrect terms.

๋†’์€ ์ฐจ์ˆ˜์˜ score function gradient estimator๋ฅผ ๊ตฌํ•˜๋Š”๋ฐ ๋งŒ์กฑ๋˜์ง€ ์•Š๋Š” ๋‘ ๊ฐ€์ง€ ์ ์ด ์กด์žฌํ•ฉ๋‹ˆ๋‹ค.

  • First, one can derive and implement the estimator analytically by hand rather than relying on sampling-based machinery. This is cumbersome, error-prone, and does not fit well with auto-diff.

  • Second, one can repeatedly apply the SL construction to build new objectives, but this entails increasingly complex graphs.

In the SL approach, the cost is treated as a fixed sample once the first derivative has been taken. We show how inaccurate the resulting terms become when this is carried over to higher-order gradient estimators, which restricts the range of methods to which higher-order gradients can be applied.

To address these problems, this chapter presents the Infinitely Differentiable Monte Carlo Estimator (DiCE). DiCE has the property that it yields correct derivative estimates of the original objective at any order. Unlike the SL approach, DiCE relies on auto-diff to compute higher-order gradients.

DiCE defines the operator MagicBox (written here as $\square$). It takes as input the set $\mathcal{W}_c$ of stochastic nodes in the SCG that influence the original loss. When it is differentiated, MagicBox produces the correct gradient with respect to the sampling distribution. MagicBox has two key properties, which are explained in more detail later:

$$\nabla_{\theta}\square(\mathcal{W}_c) = \square(\mathcal{W}_c)\,\nabla_{\theta}\sum_{w\in \mathcal{W}_c}\log p(w;\theta)$$

$$\square(\mathcal{W}) \rightarrow 1$$

To satisfy these properties, the MagicBox operator can be implemented straightforwardly in an auto-diff library as:

$$\square(\mathcal{W}) = \exp\big(\tau-\bot(\tau)\big)$$

$$\tau = \sum_{w \in \mathcal{W}}\log p(w;\theta)$$

Here $\bot$ is the operator satisfying $\nabla_x\bot(x)=0$, i.e. a stop-gradient. Later in the chapter we also show how variance can be reduced with a baseline.
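
In PyTorch, for instance, detach() plays the role of $\bot$, so a minimal sketch of MagicBox and of a DiCE-style objective could look as follows (the variable names and the toy Bernoulli "policy" are my own illustration, not the reference implementation):

```python
import torch

def magic_box(tau):
    """MagicBox: evaluates to 1 in the forward pass, while its gradient is
    magic_box(tau) * grad(tau), matching the two properties above.
    detach() plays the role of the stop-gradient operator."""
    return torch.exp(tau - tau.detach())

# A toy Bernoulli policy and a short trajectory of sampled actions.
theta = torch.tensor(0.0, requires_grad=True)
dist = torch.distributions.Bernoulli(probs=torch.sigmoid(theta))

actions = dist.sample((5,))        # sampled stochastic nodes
rewards = actions - 0.5            # toy per-step rewards (+0.5 for action 1, -0.5 for action 0)

# tau_t = sum of log-probabilities of the stochastic nodes influencing reward t
# (here: all actions up to and including step t, i.e. a cumulative sum).
logps = dist.log_prob(actions)
tau = torch.cumsum(logps, dim=0)

# DiCE-style objective: each cost is multiplied by the MagicBox of its ancestors.
dice_objective = (magic_box(tau) * rewards).sum()

# Forward pass equals the plain Monte Carlo return, since magic_box(tau) == 1 ...
assert torch.allclose(dice_objective, rewards.sum())

# ... and gradients of any order can be taken with ordinary auto-diff.
(g1,) = torch.autograd.grad(dice_objective, theta, create_graph=True)
(g2,) = torch.autograd.grad(g1, theta)
```

The assert checks the evaluation property $\square(\mathcal{W}) \rightarrow 1$, and because the log-probabilities stay inside the exponent instead of being detached away, repeated calls to autograd.grad keep producing estimates of the corresponding higher-order gradients.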

In the rest of this chapter, the correctness of DiCE is demonstrated both through proofs and through experiments, and we also see DiCE applied to LOLA.