4.1 Introduction

์ด์ „์˜ chapter์—์„œ cooperativeํ•œ ์ƒํ™ฉ์—์„œ ์–ด๋–ป๊ฒŒ centralized value function์„ ๊ฐ€์ง€๊ณ , credit assignment problem์„ ํ•ด๊ฒฐํ• ์ง€์— ๋Œ€ํ•ด ์ƒ๊ฐํ•ด๋ณด์•˜์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋ฌธ์ œ์— ๋Œ€ํ•ด ๋งŽ์€ ์ง„์ฒ™ ์‚ฌํ•ญ๋“ค์ด ์ƒ๊ฒผ์ง€๋งŒ, ์ด fully decentralized agent๋Š” agent๊ฐ„์˜ ํ˜‘๋™ํ•˜๋Š” ๋Šฅ๋ ฅ์— ๋Œ€ํ•ด ๋งŽ์€ ์ œํ•œ์ด ์žˆ์„ ์ˆ˜๋ฐ–์— ์—†์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ๊ฐ€๋” agent๋“ค์€ ๊ทธ๋“ค ์Šค์Šค๋กœ ๊ด€์ธกํ•œ ์ข‹์€ observation์— ๋Œ€ํ•ด์„œ๋„ ๋ฌด์‹œํ•˜๋Š” ๊ฒฝํ–ฅ์ด ์žˆ๋Š”๋ฐ ์ด๋Š” ํŒ€ ์ „์ฒด๋กœ ๋ณด์•˜์„ ๋•Œ, ๋‚ด๊ฐ€ ์ด ํ–‰๋™์„ ํ•˜๋”๋ผ๋„ ๋‹ค๋ฅธ agent๋“ค์ด ์˜ˆ์ธก๊ฐ€๋Šฅํ•˜์ง€์•Š๋‹ค๋ฉด ์ „์ฒด reward๋ฅผ ๋†’์ด๋Š”๋ฐ ์–ด๋ ค์šธ ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์ด chapter์—์„œ๋Š” ๊ทธ๋ ‡๊ธฐ์— Multi-Agent Common knowledge Reinforcement Learning(MACKRL)์„ ์ œ์•ˆํ•˜๋Š”๋ฐ, ์ด๋Š” ๊ทธ ๋‘๊ฐ€์ง€์˜ ๊ทนํ•œ์˜ ์ค‘๊ฐ„ ์˜์—ญ์„ ์ฐพ๋„๋ก ๋•์Šต๋‹ˆ๋‹ค. ์ด ๊ฒƒ์˜ ๋ฉ”์ธ ์•„์ด๋””์–ด๋กœ๋Š” partially observable ์ƒํ™ฉ์—์„œ agent๋ผ๋ฆฌ ๊ทธ๋“ค๋ผ๋ฆฌ ํ–‰๋™์„ ์กฐ์œจํ•  ์ˆ˜ ์žˆ๋„๋ก ๋•๋Š” Common Knowledge๋ฅผ ๊ฐ€์งˆ ์ˆ˜ ์žˆ๋‹ค๋Š” ์ ์ž…๋‹ˆ๋‹ค. Common Knowledge์˜ ์ •์˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. ๊ฐ agent๋“ค์ด ๋ชจ๋“  ๋‹ค๋ฅธ agent๋“ค์ด ์•„๋Š” ๊ฒƒ์„ ์•Œ๊ณ , ๊ฐ agent๋“ค์ด ๋‹ค๋ฅธ agent๋“ค์ด ๋ชจ๋“  agent๋“ค์ด ์•Œ๊ณ ์žˆ๋‹ค๋Š” ๊ฒƒ์„ ์•Œ๊ณ ์žˆ๋Š” ๊ฒƒ์„ ๋งํ•ฉ๋‹ˆ๋‹ค. ์ง๊ด€์ ์œผ๋กœ common knowledge๋Š” ์„œ๋กœ์˜ ์ƒํƒœ๋ฅผ ์„œ๋กœ ํ™•์ธํ•  ์ˆ˜ ์žˆ์„ ๋•Œ, ๋ถˆํ™•์‹คํ•˜๋˜ ๊ฒƒ์ด ์‚ฌ๋ผ์ง€๋ฉฐ ์ถฉ์กฑ๋ฉ๋‹ˆ๋‹ค. ์ด๋Š” ๋‹ค์Œ ๊ทธ๋ฆผ์„ ๋ณด๋ฉฐ ์ž์„ธํ•˜๊ฒŒ ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

๊ฐ ์›์€ ๊ฐ agent ์ž์‹ ์˜ observation์ž…๋‹ˆ๋‹ค. ์ด ์ƒํ™ฉ์—์„œ A์™€ B๋Š” ์„œ๋กœ ๊ด€์ธก๊ฐ€๋Šฅํ•˜๋ฏ€๋กœ Common Knowledge๊ฐ€ ์žˆ๋‹ค๊ณ  ๋งํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, C๋Š” A์™€ B๊ฐ€ ๊ด€์ธกํ•  ์ˆ˜ ์—†๋Š” ์œ„์น˜์— ์กด์žฌํ•˜๋ฏ€๋กœ, ์ด๋Š” Common Knowledge๋ฅผ ๊ณต์œ ํ•œ๋‹ค๊ณ  ํ•  ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์ถ•๊ตฌ๊ฐ™์€ ๊ตฐ์ง‘ํ™”๋œ ์ƒํ™ฉ์—์„œ ์ถฉ๋ถ„ํžˆ ์ƒ๊ฐํ•ด ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. Common knowledge๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ์ƒ๊ฐ๋ณด๋‹ค ๊ฝค ์œ ์šฉํ•œ๋ฐ ์ด๋Š” ๊ทธ๋ฃน๋‚ด์˜ ๊ฐ agent๊ฐ€ ์Šค์Šค๋กœ ๊ทธ๋ฃน ๋‚ด์—์„œ ๊ณต์œ ๋˜๋Š” common knowledge๋ฅผ ์ถ”๋ก ํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๊ฒฐ๊ณผ์ ์œผ๋กœ, common knowledge์— ๊ธฐ๋ฐ˜ํ•œ centralized joint policy๊ฐ€ decentralized๋œ ๋ฐฉ์‹์œผ๋กœ ์‹คํ–‰๋ฉ๋‹ˆ๋‹ค. ๊ฐ agent๋Š” ๋‹จ์ง€ centralized policy์—์„œ ์–ด๋–ค action์„ ์‹คํ–‰ํ• ์ง€๋งŒ ์ „๋‹ฌ๋ฐ›์œผ๋ฉด ๋ฉ๋‹ˆ๋‹ค. ๊ฐ agent๋Š” ๊ฐ™์€ common knowledge๋ฅผ input์œผ๋กœ ๋ฐ›๊ธฐ ๋•Œ๋ฌธ์— ๊ฐ™์€ joint action์„ ์„ ํƒํ•˜๊ณ , ํ˜‘๋ ฅ๋œ ํ–‰๋™์„ ํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค.

Introducing common knowledge brings a new difficulty, however. The smaller the group, the more its members' observations overlap and the more common knowledge it has, but it then becomes unclear at which level the coordination should take place. Conversely, if coordination is to span the entire team, a fully centralized policy would be chosen, yet the amount of knowledge common to every single agent is small and may be too little to produce optimal behavior. The toy example below makes this trade-off concrete.
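A tiny illustration of the trade-off with made-up visibility sets (as in the previous sketch, set intersection is only a rough proxy for common knowledge):

```python
# Made-up sets of enemy units each agent can currently see.
visible = {
    "A": {1, 2, 3, 4},
    "B": {2, 3, 4, 5},
    "C": {3, 4, 5, 6},
    "D": {9},          # an agent far away from the rest of the team
}

for group in [("A", "B"), ("A", "B", "C"), ("A", "B", "C", "D")]:
    shared = set.intersection(*(visible[a] for a in group))
    print(group, "->", shared or "nothing in common")
# ('A', 'B')           -> {2, 3, 4}
# ('A', 'B', 'C')      -> {3, 4}
# ('A', 'B', 'C', 'D') -> nothing in common
```

The pair A, B shares three items, the triple shares two, and once the far-away agent D is included nothing is common to everyone, which is why a single fully centralized policy conditioned only on team-wide common knowledge can be too weak.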

For this reason, deciding how finely to split the team into smaller groups is crucial in MACKRL. The thesis takes a hierarchical approach: at each level of the hierarchy, a controller chooses either to select a joint action for its whole group or to split the group into smaller subgroups, in which case the actions are chosen by the controllers at the next level of the hierarchy. (An agent and a controller are not exactly the same concept here.) Action selection in MACKRL is then simply sampling down through the controllers of the hierarchy. During training, however, the probability of the executed joint action is marginalized over every choice that could have been taken at each level of the hierarchy. As a result, the parameters of a subgroup's controller receive gradients even when that subgroup was not the branch actually selected, as the sketch below shows for a single pair of agents.
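A minimal sketch of this marginalization for a single pair of agents, using PyTorch purely for automatic differentiation (the variable names and the scalar advantage are placeholders, not the thesis implementation):

```python
import torch
import torch.nn.functional as F

n_actions = 3                                                       # individual actions per agent
pair_logits = torch.randn(1 + n_actions ** 2, requires_grad=True)   # [delegate] + all joint actions
solo_logits_a = torch.randn(n_actions, requires_grad=True)
solo_logits_b = torch.randn(n_actions, requires_grad=True)

pair_probs = F.softmax(pair_logits, dim=0)              # in MACKRL this conditions on common knowledge
p_delegate = pair_probs[0]
p_joint = pair_probs[1:].reshape(n_actions, n_actions)  # direct joint-action probabilities

p_a = F.softmax(solo_logits_a, dim=0)                   # individual policies, conditioned on local histories
p_b = F.softmax(solo_logits_b, dim=0)

# Marginal probability of every joint action (u_a, u_b):
#   pi(u_a, u_b) = pi_pair(u_a, u_b) + pi_pair(delegate) * pi_a(u_a) * pi_b(u_b)
marginal = p_joint + p_delegate * torch.outer(p_a, p_b)

# Policy-gradient-style loss for the joint action that was actually executed.
u_a, u_b, advantage = 2, 0, 1.7                         # placeholder sampled action and advantage
loss = -advantage * torch.log(marginal[u_a, u_b])
loss.backward()

# All three controllers receive gradients, even if the sampled branch was "delegate".
print(pair_logits.grad.norm(), solo_logits_a.grad.norm(), solo_logits_b.grad.norm())
```

The marginal is still a proper distribution: the direct joint-action branch carries probability mass 1 − p_delegate, and the delegate branch contributes p_delegate times the product of two normalized individual policies, so differentiating its log sends gradients into every controller regardless of which branch was actually sampled.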

MACKRL์„ ์‰ฝ๊ฒŒ ๊ตฌํ˜„ํ•œ pairwise MACKRL์„ ๋ณด์ด๋Š”๋ฐ, ์ด๋Š” starcraft2 ํ™˜๊ฒฝ์—์„œ centralized critic์„ ์‚ฌ์šฉํ•˜๋Š” agent baseline์„ ๋ชจ๋‘ ์••๋„ํ•˜๋Š” ์„ฑ๋Šฅ์„ ๋ณด์ž…๋‹ˆ๋‹ค. ๋˜ํ•œ ํ•˜์œ„ hierarchy๋กœ ๊ฒฐ์ •์„ ๋„˜๊ธฐ๋Š” ๊ฒƒ๊ณผ common knowledge์˜ ์–‘ ๊ฐ„์˜ ์œ ์˜๋ฏธํ•œ ๊ด€๋ จ์„ฑ์„ ๋ณด์ž…๋‹ˆ๋‹ค.