6.4.1 Reinforced Inter-Agent Learning

As the most straightforward approach, reinforced inter-agent learning (RIAL) is presented, which combines DRQN with independent Q-learning (IQL). Each agent's Q-network then has the following form:

$$Q^a(o^a_t, m^{a'}_{t-1}, h^a_{t-1}, u^a)$$

The Q-function takes as input the agent's observation $o^a_t$, the incoming message $m^{a'}_{t-1}$, the previous hidden state $h^a_{t-1}$, and the agent's action $u^a$. Since each agent's action consists of both an environment action and a communication action, the joint action space would grow combinatorially to $|U||M|$; instead, the network output is split into two branches of total size $|U| + |M|$, which keeps the action space small. An action selector then picks $u^a_t$ and $m^a_t$ from the two branches $\epsilon$-greedily.
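To make this concrete, below is a minimal sketch of such a Q-network in PyTorch (the framework, the class name `RIALQNet`, the layer sizes, and the helper `select_actions` are illustrative assumptions, not from the paper): a recurrent network that consumes $o^a_t$, $m^{a'}_{t-1}$, and $h^a_{t-1}$, and outputs two Q-value heads of size $|U|$ and $|M|$ instead of a single head of size $|U||M|$.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RIALQNet(nn.Module):
    """Sketch of a RIAL-style recurrent Q-network with split action branches."""

    def __init__(self, obs_dim, msg_dim, n_env_actions, n_messages, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Linear(obs_dim + msg_dim, hidden_dim)
        self.rnn = nn.GRUCell(hidden_dim, hidden_dim)    # carries h^a_{t-1} -> h^a_t
        self.q_u = nn.Linear(hidden_dim, n_env_actions)  # Q_u branch: |U| outputs
        self.q_m = nn.Linear(hidden_dim, n_messages)     # Q_m branch: |M| outputs

    def forward(self, obs, msg_in, h_prev):
        # obs = o^a_t, msg_in = m^{a'}_{t-1}, h_prev = h^a_{t-1}
        x = F.relu(self.encoder(torch.cat([obs, msg_in], dim=-1)))
        h = self.rnn(x, h_prev)
        return self.q_u(h), self.q_m(h), h


def select_actions(q_u, q_m, epsilon):
    """Epsilon-greedy action selection applied to each branch separately."""
    def eps_greedy(q):
        if torch.rand(1).item() < epsilon:
            return torch.randint(q.shape[-1], (q.shape[0],))
        return q.argmax(dim=-1)
    return eps_greedy(q_u), eps_greedy(q_m)  # u^a_t, m^a_t
```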

$Q_u$ and $Q_m$ are trained with a DQN modified in the following two ways, which were found experimentally to be essential for good performance.

  • First, experience replay is disabled, because the non-stationarity introduced by the other agents was found to be detrimental to learning.

  • Second, the agent's own previous actions u and m are fed back as inputs to each agent at the next time step. This flow is illustrated in the accompanying figure; a minimal training sketch also follows this list.
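Under these two modifications, one DQN-style update per fresh episode might look like the hedged sketch below. It assumes the `RIALQNet` interface from the sketch above, a separate `target_net`, and per-step transition dictionaries whose `obs` field already has the agent's previous $u$ and $m$ concatenated to it; stored hidden states are assumed to be detached.

```python
import torch
import torch.nn.functional as F


def train_on_latest_episode(qnet, target_net, optimizer, episode, gamma=0.99):
    """One DQN-style update on the most recent episode only (no replay buffer)."""
    loss = 0.0
    for tr in episode:
        # tr["obs"] is the full per-step input, including the agent's own
        # previous actions u_{t-1}, m_{t-1} (second modification).
        q_u, q_m, _ = qnet(tr["obs"], tr["msg_in"], tr["h_prev"])
        with torch.no_grad():
            nq_u, nq_m, _ = target_net(tr["next_obs"], tr["next_msg_in"], tr["h"])
            tgt_u = tr["reward"] + gamma * (1 - tr["done"]) * nq_u.max(dim=-1).values
            tgt_m = tr["reward"] + gamma * (1 - tr["done"]) * nq_m.max(dim=-1).values
        loss = loss + F.mse_loss(q_u.gather(-1, tr["u"].unsqueeze(-1)).squeeze(-1), tgt_u)
        loss = loss + F.mse_loss(q_m.gather(-1, tr["m"].unsqueeze(-1)).squeeze(-1), tgt_m)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # The episode is then discarded: no experience replay (first modification).
```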

Parameter Sharing

RIAL can be extended to capture the benefits of centralized learning through parameter sharing across agents. Each agent additionally receives its own index a as input to the network, so a common policy is learned while each agent can still specialize. Sharing parameters reduces the number of trainable parameters and thereby speeds up learning. With parameter sharing, the Q-functions can be written as follows:

$$Q_u(o^a_t, m^{a'}_{t-1}, h^a_{t-1}, u^a_{t-1}, m^a_{t-1}, a, u^a_t) \quad \text{and} \quad Q_m(\cdot)$$
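Parameter sharing can be sketched as a thin wrapper around a single Q-network that every agent queries with its own index and previous actions appended, one-hot encoded, to the observation. The wrapper name `SharedRIAL` and the encoding choices are assumptions, and the `RIALQNet` from the earlier sketch must be built with an `obs_dim` large enough to hold these extra inputs.

```python
import torch
import torch.nn.functional as F


class SharedRIAL:
    """One set of parameters shared by all agents, conditioned on the agent index a."""

    def __init__(self, qnet, n_agents):
        self.qnet = qnet          # single trainable RIALQNet
        self.n_agents = n_agents

    def forward_agent(self, agent_id, obs, msg_in, u_prev, m_prev, h_prev):
        # Build Q(o^a_t, m^{a'}_{t-1}, h^a_{t-1}, u^a_{t-1}, m^a_{t-1}, a, .) by
        # concatenating one-hot codes for a, u^a_{t-1}, m^a_{t-1} to the observation.
        batch = obs.shape[0]
        a_onehot = F.one_hot(torch.full((batch,), agent_id), self.n_agents).float()
        u_onehot = F.one_hot(u_prev, self.qnet.q_u.out_features).float()
        m_onehot = F.one_hot(m_prev, self.qnet.q_m.out_features).float()
        x = torch.cat([obs, a_onehot, u_onehot, m_onehot], dim=-1)
        return self.qnet(x, msg_in, h_prev)
```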

During decentralized execution, each agent uses its own copy of the learned network, but updates its hidden state from its own observations, selects its actions, and communicates accordingly.
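As a usage sketch of decentralized execution (reusing `RIALQNet`, `SharedRIAL`, and `select_actions` from the sketches above; the two-agent setting, the dimensions, and the random stand-in observations are assumptions), each agent advances only its own hidden state and reads the message the other agent sent at the previous step.

```python
import torch
import torch.nn.functional as F

obs_dim, msg_dim, n_env_actions, hidden_dim, n_agents = 10, 4, 5, 128, 2
# obs_dim handed to RIALQNet covers observation + agent index + previous-action one-hots
qnet = RIALQNet(obs_dim + n_agents + n_env_actions + msg_dim,
                msg_dim, n_env_actions, msg_dim, hidden_dim)
shared = SharedRIAL(qnet, n_agents)

hiddens = [torch.zeros(1, hidden_dim) for _ in range(n_agents)]
msgs = [torch.zeros(1, msg_dim) for _ in range(n_agents)]
u_prev = [torch.zeros(1, dtype=torch.long) for _ in range(n_agents)]
m_prev = [torch.zeros(1, dtype=torch.long) for _ in range(n_agents)]

with torch.no_grad():                         # pure execution, no gradients
    for t in range(20):                       # one short episode
        new_msgs = []
        for a in range(n_agents):
            obs = torch.randn(1, obs_dim)     # stand-in for agent a's local observation
            q_u, q_m, hiddens[a] = shared.forward_agent(
                a, obs, msgs[1 - a], u_prev[a], m_prev[a], hiddens[a])
            u_prev[a], m_prev[a] = select_actions(q_u, q_m, epsilon=0.05)
            new_msgs.append(F.one_hot(m_prev[a], msg_dim).float())
        msgs = new_msgs                       # delivered to the other agent at t+1
```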