6.6.4 Effect of Channel Noise


Let us look at how the noise $\sigma$ affects the communication channel. A first intuition comes from the shape of the sigmoid, which maps an input $x$ to an output $y$ in $(0,1)$. If we take the range of $x$ over which $y$ moves between 0.01 and 0.99 to be roughly 10, then inputs should be separated by about 6 standard deviations of the noise, which works out to $\sigma \approx 2$. To understand the required $\sigma$ more precisely, we can visualise how much information a single use of the channel, consisting of a logistic function and Gaussian noise, can transmit. To do so, we first need the distribution over the outgoing message $\hat{m}$ given an incoming message $m$, which can be written as:

$$P(\hat{m} \mid m) = \frac{1}{\sqrt{2\pi}\,\sigma\,\hat{m}(1-\hat{m})} \exp\!\left(-\frac{\left(m - \log\frac{\hat{m}}{1-\hat{m}}\right)^{2}}{2\sigma^{2}}\right)$$
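
To make this concrete, below is a minimal numpy sketch (function names and example values are mine) that evaluates this density for a channel of the form $\hat{m} = \text{Logistic}(m + \varepsilon)$ with $\varepsilon \sim \mathcal{N}(0, \sigma^2)$, and checks that it integrates to one.

```python
import numpy as np

def channel_density(m_hat, m, sigma):
    """Density of the channel output m_hat = logistic(m + eps), eps ~ N(0, sigma^2),
    i.e. a logit-normal distribution with location m and scale sigma."""
    logit = np.log(m_hat / (1.0 - m_hat))
    norm = np.sqrt(2.0 * np.pi) * sigma * m_hat * (1.0 - m_hat)
    return np.exp(-((m - logit) ** 2) / (2.0 * sigma ** 2)) / norm

# Sanity check for one example input: the density should integrate to ~1.
m_hat_grid = np.linspace(1e-4, 1.0 - 1e-4, 9_999)
print(np.trapz(channel_density(m_hat_grid, m=2.0, sigma=2.0), m_hat_grid))  # ~1.0
```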

For any input $m$, the message $\hat{m}$ that comes out of the channel follows this distribution. Two inputs $m_1$ and $m_2$ can be distinguished whenever the outgoing messages $\hat{m}$ they produce are unlikely to overlap.

๋”ฐ๋ผ์„œ m1m_1m1โ€‹๊ฐ€ ์ฃผ์–ด์ง€๋ฉด, ์šฐ๋ฆฌ๋Š” m1m_1m1โ€‹๊ฐ€ ์ƒ์„ฑํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๊ฐ€์žฅ ๋†’์€ m^1\hat{m}_1m^1โ€‹๊ฐ’์ด m2m_2m2โ€‹๊ฐ€ ์ƒ์„ฑํ•  ๊ฐ€๋Šฅ์„ฑ์ด ๊ฐ€์žฅ ๋‚ฎ์€ m^2 \hat{m}_2m^2โ€‹๊ฐ’๋ณด๋‹ค ์ž‘์„ ๋•Œ m2m_2m2โ€‹๋ฅผ ๋‹ค์Œ ๊ฐ’์œผ๋กœ ์„ ํƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฐ ๊ฐ€์ •์€ ๋‹ค์Œ์˜ ์ˆ˜์‹์ด ๋งŒ์กฑํ•  ๋•Œ ๋ฐœ์ƒํ•ฉ๋‹ˆ๋‹ค.

(maxโกm^ย s.t.P(m^โˆฃm1)>ฯต)ย ย =(minโกm^ย s.t.P(m^โˆฃm1)>ฯต)(\max_{\hat{m}}\ s.t.P(\hat{m}|m_1)>\epsilon) \ \ = (\min_{\hat{m}}\ s.t.P(\hat{m}|m_1)>\epsilon) (maxm^โ€‹ย s.t.P(m^โˆฃm1โ€‹)>ฯต)ย ย =(minm^โ€‹ย s.t.P(m^โˆฃm1โ€‹)>ฯต)

Interestingly, a certain amount of noise turns out to be essential for regularising the channel.

์œ„์˜ ๊ทธ๋ฆผ์€ ๋” ๋งŽ์€ reward๋ฅผ ๋งŒ๋“ค์–ด๋‚ด์ง€ ์•Š๋Š” communicaiton์— ๋Œ€ํ•œ ์‹คํ—˜์ž…๋‹ˆ๋‹ค. ์ž‘์€ ํฌ๊ธฐ์˜ noise๋Š” discretization์„ ํ•˜๋Š”๋ฐ ๋ฌธ์ œ๊ฐ€ ์žˆ์ง„ ์•Š์Šต๋‹ˆ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ์ด๋Š” ์–ด์ฐจํ”ผ activation์„ sigmoid์˜ ์ขŒ์šฐ tail๋กœ ๋ฐ€๋ฉด์„œ reward๋ฅผ ์ตœ๋Œ€ํ™” ํ•˜๋ คํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ์œ„์˜ ์‹คํ—˜์€ Regularized๋ฅผ ๋„˜์œผ๋ฉด, discretization ํ›„์— training๋ณด๋‹ค ์„ฑ๋Šฅ์ด ์ข‹์•˜๋‹ค๋Š” ๋œป์ธ๋ฐ, channel์ด ์ž˜ regularization๋˜์—ˆ๊ณ , ์ด๋ฅผ 0๊ณผ 1์˜ bit๋กœ ์ด์šฉํ•œ๋‹ค๋Š” ๋œป์ž…๋‹ˆ๋‹ค. MNIST์‹คํ—˜์—์„œ 10๊ฐœ์˜ ์ˆซ์ž๋Š” encodingํ•˜๊ธฐ ์œ„ํ•ด 4๊ฐœ์˜ bits๊ฐ€ ํ•„์š”ํ•œ๋ฐ, step์ด ์ค„์„์ˆ˜๋ก channel์— ์ •๋ณด๋ฅผ ๋” ๋‹ด๋Š”๊ฒƒ์ด ์ด๋“์ด ๋ฉ๋‹ˆ๋‹ค. ์ด๋Š” discretzation์— ํฐ ๋ฐฉํ•ด๊ฐ€ ๋˜๋Š”๋ฐ, noise๊ฐ€ ์ ์œผ๋ฉด ์ด๋ ‡๊ฒŒ training๋•Œ continuous ํ•จ์„ ์ด์šฉํ•ด ๋” ๋งŽ์€ ์ •๋ณด๋“ค์ด ๋‹ด๊ธฐ๋ฏ€๋กœ ์ œ๋Œ€๋กœ regularization์„ ํ•ด๋‚ด์ง€ ๋ชปํ•˜๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.