7.1 Introduction

The state of the art in deep multi-agent RL (DMARL) has been methods that learn a communication protocol, such as DIAL from the earlier chapters. However, these approaches have two limitations.

  • First, they apply only to cheap-talk channels, in which communication is an action that has no effect on the environment.

  • Second, they provide no coupling between communication and reasoning about why the other agents acted the way they did.

A well-known challenge in this area is the card game Hanabi (watching a video of the rules is recommended). It is a cooperative game in which players cannot see their own hand and must work together to reach the highest possible score, so the agents need an effective convention for communicating with one another. Because there is no cheap-talk channel here, a different approach from the previous chapter is required. An action in Hanabi is either a hint that conveys information about another player's hand, playing a card, or discarding one, and the meaning of such an action can be read on two levels. The first level is the information the action conveys directly: for example, telling another player which card they are holding is, by itself, useful information for increasing the reward. The second level is the information the action implies: if an agent takes, or does not take, a particular action, there must be a reason for it. Extracting this implicit information requires some kind of convention, and doing so is essential for building a good strategy.

To address these problems, the Bayesian action decoder (BAD) is proposed. Inspired by the work of Nayyar et al., BAD forms a public belief using only features that are observable by all agents. This defines a new MDP, the public belief Markov decision process, in which actions are selected based solely on the public state. This is made tractable by factorized Bayesian updates and approximation with neural networks, and it is the first attempt to scale Nayyar's approach to large state spaces.
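As a concrete, deliberately toy picture of how a public belief and a commonly known partial policy interact, consider the sketch below: one agent holds a single hidden card, a deterministic partial policy (known to everyone because it depends only on public information) maps each possible card to an action, and everyone applies the same Bayesian update after seeing which action was chosen. The card and action counts, the random partial policy, and all names here are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

N_CARDS = 5      # possible values of the hidden private feature (a single card here)
N_ACTIONS = 3    # possible actions

def sample_partial_policy(rng):
    """Deterministic mapping from every possible private card to an action.
    In BAD this mapping comes from a learned policy conditioned only on public
    information; here it is random purely for illustration."""
    return rng.integers(N_ACTIONS, size=N_CARDS)

def public_belief_update(belief, partial_policy, observed_action):
    """Bayesian filter: keep only the cards consistent with the observed action."""
    likelihood = (partial_policy == observed_action).astype(float)
    posterior = belief * likelihood
    if posterior.sum() == 0.0:   # should not happen if the policy is commonly known
        return belief
    return posterior / posterior.sum()

rng = np.random.default_rng(0)
belief = np.full(N_CARDS, 1.0 / N_CARDS)   # uniform public belief over the hidden card
true_card = 2                              # private feature, invisible to the other agents

for t in range(3):
    pi_hat = sample_partial_policy(rng)    # commonly known: depends only on public info
    action = pi_hat[true_card]             # the acting agent consults its private card
    belief = public_belief_update(belief, pi_hat, action)
    print(f"step {t}: action={action}, public belief={np.round(belief, 2)}")
```

In BAD proper, the partial policy is sampled from a learned network conditioned on the public belief, and the private features are entire hands rather than a single card; the sketch only shows how observing an action narrows the shared belief.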

When an agent observes another agent's action, the public belief is updated over the set of possible states, namely those consistent with the action the other agent was observed to choose. This resembles the theory-of-mind reasoning people perform in everyday interactions: trying to understand why a person chose a particular action among the options available to them, and what that choice reveals about the distribution over their private observations.
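In equation form, this update is a standard Bayesian filter. With adapted notation (not the thesis's own): $f$ the private features, $B_t$ the public belief, $\hat\pi$ the commonly known deterministic partial policy, and $u_t$ the observed action,

$$
B_{t+1}(f) \;=\; P(f \mid u_t, B_t) \;=\; \frac{P(u_t \mid f)\, B_t(f)}{\sum_{f'} P(u_t \mid f')\, B_t(f')},
\qquad
P(u_t \mid f) \;=\; \mathbb{1}\!\left[\hat\pi(f) = u_t\right],
$$

so states inconsistent with the observed action receive zero posterior mass, exactly as in the sketch above.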

Finally, BAD's effectiveness is illustrated, and by applying it to a variant of Hanabi it achieves far better performance than previous attempts.

(Figure: the card game Hanabi)