7.3.1 Public belief

In the single-agent partially observable setting, it is quite useful for the agent to maintain its own belief about the parts of the environment it cannot observe. In the multi-agent setting, however, the effective MDP changes with the other agents' observations (and policies), so such a belief can easily break down, and maintaining a belief about the environment on one's own is no longer sufficient. In Interactive POMDPs (I-POMDPs), agents model beliefs about each other, and beliefs about those beliefs in turn, which quickly becomes computationally intractable.

The public belief $\mathcal{B}_t$ is the posterior over the private state features, conditioned on all public features revealed so far. It can be written as:

$$\mathcal{B}_t = P(f^{\mathrm{pri}}_t \mid f^{\mathrm{pub}}_{\leq t})$$
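As a concrete illustration, here is a minimal sketch (not from the paper; the two-agent card setup and all names are hypothetical) of how such a public belief can be represented as a distribution over candidate private configurations and conditioned on public information:

```python
import numpy as np

# Hypothetical toy example: agents 0 and 1 each hold one hidden card from {0, 1, 2};
# the pair of hidden cards is the private state f_pri, everything else is public.
private_configs = [(a, b) for a in range(3) for b in range(3)]

# Public belief B_t: one probability per candidate private configuration.
# Before any public information is revealed, it is uniform.
belief = np.full(len(private_configs), 1.0 / len(private_configs))

# Conditioning on a public feature, e.g. a publicly revealed hint
# "agent 0's card is not 2", is a Bayes update that every agent can carry out
# identically, because it uses only public information.
consistent = np.array([1.0 if a != 2 else 0.0 for (a, b) in private_configs])
belief = belief * consistent
belief = belief / belief.sum()    # B_t = P(f_pri | f_pub_{<=t}), renormalized
```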

$\mathcal{B}_t$ is built only from information that is known to everyone, that is, information from which every agent can independently compute the same public belief using a commonly known algorithm. The question is then how agents can act while reasoning only with this public belief rather than with their private observations. As Nayyar et al. showed, learning from the public observations and the public belief is still sufficient to find an optimal policy. The reason is that $\pi_{\mathrm{BAD}}$ does not pick actions directly: it picks a partial policy (a policy that can condition on the agent's own private observation), and that partial policy then selects the action. This can be written as:

$$\hat{\pi} : \{ f^a \rightarrow \mathcal{U} \}$$

Because the partial policy is selected and then executed deterministically, the policy gradient can maintain high entropy over which partial policy is chosen (providing exploration), while the communication itself, the mapping from a private observation to an action, keeps low entropy and therefore remains informative to the other agents.

์ง๊ด€์ ์œผ๋กœ public agent๋Š” ์˜ค์ง public observation๊ณผ belief๋ฅผ ๊ด€์ฐฐํ•˜๋Š” 3์ž์ฒ˜๋Ÿผ ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ฯ€BAD \pi_{\mathrm{BAD}}ฯ€BADโ€‹๊ฐ€ private state๋ฅผ ๋ณด์ง„ ๋ชปํ•˜์ง€๋งŒ ๊ฐ agent์—๊ฒŒ ์–ด๋–ค private observation์„ ๋ฐ›์•˜์„ ๋•Œ ์–ด๋–ป๊ฒŒ ํ•˜๋ผ๋Š”์ง€ ์•Œ๋ ค์ค„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฆ‰ ๊ฐ time step์—์„œ public agent๋Š” Bt\mathcal{B}_tBtโ€‹์™€ ftpub f^\mathrm{pub}_tftpubโ€‹์— ๊ธฐ๋ฐ˜ํ•ด partial policy ฯ€^ \hat{\pi}ฯ€^๋ฅผ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฐ ๋‹ค์Œ, partial policyฯ€^ \hat{\pi}ฯ€^๋Š” ์ž์‹ ์˜ private state๋ฅผ ์ด์šฉํ•ด action์„ ์„ ํƒํ•ฉ๋‹ˆ๋‹ค.

$$\hat{\pi}(f^a) = u^a_t$$
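Below is a minimal sketch of one acting step under assumed toy interfaces; `public_policy`, `sample_partial_policy`, and the integer-coded observations and actions are hypothetical placeholders rather than the paper's architecture. It only illustrates the order of operations: the public agent commits to a deterministic partial policy before any private information is used.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PRIVATE_OBS = 3   # possible private observations f^a of the acting agent
N_ACTIONS = 4       # size of the action set U

def public_policy(belief, f_pub):
    """Hypothetical stand-in for pi_BAD: maps (B_t, f^pub_t) to a distribution over
    actions for every possible private observation. In the paper this would be a
    neural network; here the logits are random placeholders."""
    logits = rng.normal(size=(N_PRIVATE_OBS, N_ACTIONS))
    return np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

def sample_partial_policy(probs):
    """Sample one deterministic mapping f^a -> u^a. Exploration happens here, in the
    choice of partial policy, not in the action taken afterwards."""
    return np.array([rng.choice(N_ACTIONS, p=p) for p in probs])

belief = np.full(9, 1.0 / 9)   # placeholder B_t
f_pub = 0                      # placeholder f^pub_t
pi_hat = sample_partial_policy(public_policy(belief, f_pub))

f_a = 1                        # the acting agent's private observation
u_t = pi_hat[f_a]              # u^a_t = pi_hat(f^a), fully deterministic
```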

๊ทธ ๋‹ค์Œ public agent๋Š” observed action utau^a_tutaโ€‹๋ฅผ ์ด์šฉํ•ด ์ƒˆ belief Bt+1\mathcal{B}_{t+1}Bt+1โ€‹๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค.
