8.4.1 Iterated Games

This section covers the experiments on the IPD and the IMP, and explains how these games are modeled as memory-1 two-agent MDPs.

Starting with the IPD: its payoff matrix (shown in the figure above) gives the reward each agent receives in a single step as a function of the two agents' choices. If the game ends after one step, the only Nash equilibrium is for both agents to defect. If the game is repeated infinitely, however, there are infinitely many Nash equilibria. Among these, the most noteworthy policies are the always-defect strategy (DD) and tit-for-tat (TFT). Under TFT, each agent cooperates on the first move and from then on copies the opponent's previous action. The average reward each agent receives per step is -1 under TFT and -2 under DD.
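
As a quick check of these averages, here is a minimal simulation sketch. The payoff values below are an assumption (the standard prisoner's dilemma matrix, with (-1, -1) for mutual cooperation and (-2, -2) for mutual defection), chosen to be consistent with the averages quoted above rather than copied from the figure:

```python
# Assumed standard IPD payoffs, consistent with the -1 / -2 averages above:
# each entry is (reward of agent 1, reward of agent 2) for the joint action.
PAYOFF = {
    ("C", "C"): (-1, -1),
    ("C", "D"): (-3,  0),
    ("D", "C"): ( 0, -3),
    ("D", "D"): (-2, -2),
}

def tit_for_tat(own_prev, opp_prev):
    """Cooperate on the first move, then copy the opponent's last action."""
    return "C" if opp_prev is None else opp_prev

def always_defect(own_prev, opp_prev):
    return "D"

def average_reward(policy1, policy2, steps=10_000):
    """Average per-step reward of both agents when the two policies play the IPD."""
    prev1 = prev2 = None
    total1 = total2 = 0.0
    for _ in range(steps):
        a1 = policy1(prev1, prev2)
        a2 = policy2(prev2, prev1)
        r1, r2 = PAYOFF[(a1, a2)]
        total1, total2 = total1 + r1, total2 + r2
        prev1, prev2 = a1, a2
    return total1 / steps, total2 / steps

print(average_reward(tit_for_tat, tit_for_tat))      # (-1.0, -1.0)
print(average_reward(always_defect, always_defect))  # (-2.0, -2.0)
```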

Matching pennies is a zero-sum game, and the reward each agent receives at each step is given by the payoff matrix below.
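
The payoff figure itself is not reproduced here; the following is the conventional matching pennies matrix, written under the assumption that agent 1 is the one rewarded when the two coins match (entries are $(r^1, r^2)$, rows are agent 1's actions):

$$
\begin{array}{c|cc}
 & \text{Head} & \text{Tail} \\ \hline
\text{Head} & (+1, -1) & (-1, +1) \\
\text{Tail} & (-1, +1) & (+1, -1)
\end{array}
$$

Against an opponent who plays heads and tails with equal probability, every action has expected reward 0 for either agent.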

In this game there is only a single Nash equilibrium: the mixed strategy in which each agent plays heads and tails with equal probability.

In both games the agents choose their actions based on their history, which can be viewed as each agent keeping that history in a memory of length $K$: the agent looks at the last $K$ steps and then picks an action. Press and Dyson showed that an agent with a good memory-1 strategy can force these iterated games to effectively be played as memory-1 games (since memory 1 alone is enough to play well, every agent can restrict itself to memory 1). Therefore, the problem is treated here as a memory-1 iterated game.

Accordingly, the IPD and IMP are modeled as two-agent MDPs with memory length 1. At the very first step there is no preceding joint action (the initial state $s_0$ is empty), and for $t \geq 1$ the state consists of both agents' previous actions:

$$s_t = (u^1_{t-1},u^2_{t-1}), \quad \mathrm{for}\ t \geq 1.$$
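
For illustration (the names below are hypothetical, not from the thesis), the state for $t \geq 1$ is simply the previous joint action, so the state space consists of the four pairs CC, CD, DC, DD plus the empty initial state $s_0$:

```python
# Memory-1 state space: the previous joint action, plus the empty initial state s0.
STATES = ["CC", "CD", "DC", "DD"]

def memory1_state(u1_prev, u2_prev):
    """s_t = (u^1_{t-1}, u^2_{t-1}), encoded as a two-letter string, for t >= 1."""
    return u1_prev + u2_prev

# Example: the joint actions CC, CD, DD produce the following state sequence.
joint_actions = [("C", "C"), ("C", "D"), ("D", "D")]
states = ["s0"] + [memory1_state(u1, u2) for u1, u2 in joint_actions]
print(states)  # ['s0', 'CC', 'CD', 'DD']
```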

Each agent's policy is therefore specified by the probability of cooperating in five situations: the initial probability $\pi^a(C|s_0)$, and four probabilities conditioned on the previous joint action, $\pi^a(C|DD), \pi^a(C|DC), \pi^a(C|CD), \pi^a(C|CC)$. With this parameterization the multi-agent MDP can also be analyzed analytically: the future discounted reward can be derived in closed form and used for exact policy updates (NL-Ex and LOLA-Ex). To compare the performance of LOLA-Ex against the other algorithms, the methods were evaluated in a round-robin tournament.
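
To make the exact variants concrete, below is a minimal sketch (not the thesis implementation) of how the discounted return of the memory-1 IPD can be computed in closed form from the two five-parameter policies: they induce an initial distribution $\pi_0$ over joint actions and a $4 \times 4$ transition matrix $M$ over previous joint actions, giving $V^a = \pi_0^\top (I - \gamma M)^{-1} r^a$. The payoff values and the discount factor are assumptions.

```python
import numpy as np

GAMMA = 0.96  # discount factor (assumed value)

# Rewards of agent 1 and agent 2 over joint actions ordered (CC, CD, DC, DD),
# using the assumed standard IPD payoffs.
r1 = np.array([-1.0, -3.0,  0.0, -2.0])
r2 = np.array([-1.0,  0.0, -3.0, -2.0])

def exact_values(p1, p2):
    """Exact discounted returns of both agents for memory-1 policies.

    p1, p2: five cooperation probabilities
            [initial, after CC, after CD, after DC, after DD],
            each conditioned on (own previous action, opponent's previous action).
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    swap = [0, 2, 1, 3]  # CD for agent 1 is DC from agent 2's perspective, and vice versa

    def joint(c1, c2):
        # Distribution over joint actions (CC, CD, DC, DD) given cooperation probs.
        return np.array([c1 * c2, c1 * (1 - c2), (1 - c1) * c2, (1 - c1) * (1 - c2)])

    pi0 = joint(p1[0], p2[0])  # joint action distribution at t = 0
    M = np.stack([joint(p1[1 + s], p2[1 + swap[s]]) for s in range(4)])  # transition matrix
    # V^a = pi0^T (I - gamma * M)^{-1} r^a  (sum of the discounted geometric series)
    resolvent = np.linalg.inv(np.eye(4) - GAMMA * M)
    return pi0 @ resolvent @ r1, pi0 @ resolvent @ r2

# Sanity check: TFT vs TFT cooperates forever, so each value is -1 / (1 - gamma) = -25.
tft = [1.0, 1.0, 0.0, 1.0, 0.0]  # cooperate initially, then iff the opponent cooperated
print(exact_values(tft, tft))    # ≈ (-25.0, -25.0)
```

Because this value is an explicit differentiable function of the ten policy parameters, the exact updates (NL-Ex, LOLA-Ex) can be obtained by differentiating it directly, for example via automatic differentiation.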