8.3.3 Learning via Policy Gradient

This section describes how the updates can be performed via approximation when the exact gradients and Hessians are not available.

Given a trajectory up to time step $T$, $\tau = (s_0, u^1_0, u^2_0, r^1_0, r^2_0, \dots, u^1_T, u^2_T, r^1_T, r^2_T)$, the discounted return of agent $a$ from time step $t$ is defined as $R^a_t(\tau) = \sum^T_{t'=t}\gamma^{t'-t} r^a_{t'}$. Under the agents' policies $(\pi^1, \pi^2)$, the expected discounted returns are then $\mathbb{E}R^1_0(\tau)$ and $\mathbb{E}R^2_0(\tau)$.
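To make the definition concrete, here is a minimal Python sketch (not from the thesis; the function name and example rewards are illustrative) that computes the discounted return $R^a_t(\tau)$ for every $t$ with a single backward pass over one agent's recorded rewards:

```python
import numpy as np

def discounted_returns(rewards, gamma):
    """R_t = sum_{t'=t}^{T} gamma^(t'-t) * r_{t'}, computed for every t by a backward pass."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running   # R_t = r_t + gamma * R_{t+1}
        returns[t] = running
    return returns

# Example: agent 1's rewards r^1_0, ..., r^1_T along one sampled trajectory.
r1 = np.array([1.0, 0.0, 0.5, 1.0])
print(discounted_returns(r1, gamma=0.96))        # returns[0] is R_0^1(tau)
```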

The gradient of $\mathbb{E}R^1_0(\tau)$ with respect to $\theta^1$ can be derived as follows.

$$\nabla_{\theta^1}\mathbb{E}R^1_0(\tau) = \int \nabla_{\theta^1}\pi^1(\tau)\, R^1_0(\tau)\, d\tau$$

$$= \int \pi^1(\tau)\,\frac{\nabla_{\theta^1}\pi^1(\tau)}{\pi^1(\tau)}\, R^1_0(\tau)\, d\tau$$

$$= \int \pi^1(\tau)\,\nabla_{\theta^1}\log\pi^1(\tau)\, R^1_0(\tau)\, d\tau$$

$$= \mathbb{E}\left[\nabla_{\theta^1}\log\pi^1(\tau)\, R^1_0(\tau)\right]$$

The gradient-based naive learner (NL-PG) therefore updates its parameters with:

$$\bm{f}^1_{\mathrm{nl,\,pg}} = \nabla_{\theta^1}\mathbb{E}R^1_0(\tau)\,\delta$$
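As an illustration, a minimal Monte Carlo sketch of the NL-PG step in PyTorch, assuming a toy single-state, two-action policy and a bandit-style payoff that are not part of the thesis. It estimates $\nabla_{\theta^1}\mathbb{E}R^1_0(\tau)$ with the score-function identity derived above and then takes the step $\bm{f}^1_{\mathrm{nl,\,pg}} = \nabla_{\theta^1}\mathbb{E}R^1_0(\tau)\,\delta$:

```python
import torch

# Toy single-state, two-action policy for agent 1; all names and the bandit-style
# payoff below are illustrative assumptions, not the thesis' setup.
theta1 = torch.zeros(2, requires_grad=True)    # theta^1
delta, gamma, T, batch = 0.1, 0.96, 5, 256

def reward(actions):
    # Hypothetical per-step reward: action 1 pays 1, action 0 pays 0.
    return actions.float()

dist = torch.distributions.Categorical(logits=theta1)
actions = dist.sample((batch, T))              # u^1_0, ..., u^1_T for each trajectory
log_pi_tau = dist.log_prob(actions).sum(dim=1) # log pi^1(tau)
disc = gamma ** torch.arange(T, dtype=torch.float32)
R0 = (disc * reward(actions)).sum(dim=1)       # R_0^1(tau)

# Score-function (REINFORCE) estimate of grad_{theta^1} E[R_0^1(tau)]:
# differentiate the surrogate E-hat[ log pi^1(tau) * R_0^1(tau) ].
surrogate = (log_pi_tau * R0).mean()
grad = torch.autograd.grad(surrogate, theta1)[0]

# NL-PG step: f^1_{nl,pg} = grad_{theta^1} E[R_0^1(tau)] * delta (gradient ascent).
with torch.no_grad():
    theta1 += delta * grad
print(grad, theta1)
```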

LOLA, in contrast, requires the gradient of $\mathbb{E}R^a_0(\tau)$ with respect to both agents' parameters. Expanding in the same way as above, this second-order cross term can be written as:

$$\nabla_{\theta^1}\nabla_{\theta^2}\mathbb{E}R^2_0(\tau) = \mathbb{E}\left[R^2_0(\tau)\,\nabla_{\theta^1}\log\pi^1(\tau)\,\big(\nabla_{\theta^2}\log\pi^2(\tau)\big)^T\right]$$

$$= \mathbb{E}\left[\sum^T_{t=0}\gamma^t r^2_t\cdot\left(\sum^t_{l=0}\nabla_{\theta^1}\log\pi^1(u^1_l\mid s_l)\right)\left(\sum^t_{l=0}\nabla_{\theta^2}\log\pi^2(u^2_l\mid s_l)\right)^T\right]$$

The resulting LOLA-PG update for agent 1 is then:

$$\bm{f}^1_{\mathrm{lola,\,pg}} = \nabla_{\theta^1}\mathbb{E}R^1_0(\tau)\,\delta + \big(\nabla_{\theta^2}\mathbb{E}R^1_0(\tau)\big)^T\,\nabla_{\theta^1}\nabla_{\theta^2}\mathbb{E}R^2_0(\tau)\,\delta\eta$$
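Putting the pieces together, a hedged PyTorch sketch of the LOLA-PG step for agent 1 in a toy two-agent, single-state game (the coupling payoff, sizes, and the `scores` helper are illustrative assumptions, not the thesis' implementation). The cross term is estimated as the return-weighted outer product of the two agents' per-trajectory score vectors, i.e. the first expectation above; the causality-refined per-timestep form is omitted for brevity:

```python
import torch

# Toy two-agent setup: each agent has a single-state policy over 2 actions.
# Payoffs, sizes, and helper names are illustrative assumptions.
theta1 = torch.zeros(2, requires_grad=True)    # theta^1
theta2 = torch.zeros(2, requires_grad=True)    # theta^2
delta, eta, gamma, T, batch = 0.1, 0.3, 0.96, 5, 256

def rewards(a1, a2):
    # Both agents are paid when their actions match, so the returns couple the policies.
    match = (a1 == a2).float()
    return match, match                        # r^1_t, r^2_t

dist1 = torch.distributions.Categorical(logits=theta1)
dist2 = torch.distributions.Categorical(logits=theta2)
a1 = dist1.sample((batch, T))                  # u^1_0, ..., u^1_T per trajectory
a2 = dist2.sample((batch, T))
r1, r2 = rewards(a1, a2)
disc = gamma ** torch.arange(T, dtype=torch.float32)
R1 = (disc * r1).sum(dim=1)                    # R_0^1(tau)
R2 = (disc * r2).sum(dim=1)                    # R_0^2(tau)

def scores(dist, theta, actions):
    # Per-trajectory score vector grad_theta log pi(tau), one row per sample.
    logp = dist.log_prob(actions).sum(dim=1)
    return torch.stack([torch.autograd.grad(lp, theta, retain_graph=True)[0]
                        for lp in logp])

s1 = scores(dist1, theta1, a1)                 # shape (batch, dim theta^1)
s2 = scores(dist2, theta2, a2)                 # shape (batch, dim theta^2)

# First-order Monte Carlo terms.
g1_R1 = (R1[:, None] * s1).mean(0)             # estimate of grad_{theta^1} E R_0^1
g2_R1 = (R1[:, None] * s2).mean(0)             # estimate of grad_{theta^2} E R_0^1

# Cross term: E[ R_0^2 * s1 s2^T ] estimates grad_{theta^1} grad_{theta^2} E R_0^2.
H12 = torch.einsum('b,bi,bj->ij', R2, s1, s2) / batch

# LOLA-PG step for agent 1: H12 @ g2_R1 is the correction term
# (grad_{theta^2} E R_0^1)^T grad_{theta^1} grad_{theta^2} E R_0^2.
f1 = g1_R1 * delta + (H12 @ g2_R1) * delta * eta
with torch.no_grad():
    theta1 += f1
print(f1)
```

Since each toy policy has two parameters here, `H12` is a 2x2 matrix and `H12 @ g2_R1` lives in the space of $\theta^1$, which is the shape the update for agent 1 requires.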
