4.5 Multi-Agent Common Knowledge Reinforcement Learning


The key point of MACKRL is that the policies are decentralised, yet the agents still learn to cooperate. For each group $\mathcal{G}$ of agents that share common knowledge, MACKRL builds a joint policy $\pi^\mathcal{G}(\bold{u}^\mathcal{G}_{env}\mid\mathcal{L}^\mathcal{G}(\tau^a))$. The group thereby behaves as if it were controlled centrally, yet its actions can be selected in a fully decentralised way (explained in more detail below): every agent samples from the joint policy of the group it belongs to, using only the common knowledge and a shared random seed, so all members arrive at the same joint action.

If the common knowledge carries enough information, the group policy can propose a good joint action. If it does not, the group is split into smaller subgroups. The subgroups no longer coordinate with one another (their joint-action selections do not influence each other), but each subgroup can exploit richer common knowledge of its own. Because the whole process depends only on the common knowledge $\mathcal{L}^\mathcal{G}(\tau^a)$ derived from the partially observable trajectories, it remains decentralised. To realise this idea, a hierarchical controller is used: the top- and intermediate-level controllers either select a joint action or split their group into subgroups, and the lowest level selects the individual actions that make up the joint action.

Algorithm

Expressed as pseudocode, the procedure is as follows.

The pseudocode itself is clear enough that I will not go through it line by line; instead, let's review the overall flow once more. Here b holds the agent groups that still need a decision.

The algorithm loops until no groups are left in b. On each iteration one group is popped from b and a joint action $u^\mathcal{G}$ is sampled from that group's policy. If the sampled $u^\mathcal{G}$ lies in $\mathcal{U}^\mathcal{G}_{env}$, it is kept as that group's part of the joint action; otherwise the group is split and its subgroups are pushed back into b. ($u^\mathcal{G} \notin \mathcal{U}^\mathcal{G}_{env}$ means that $u^\mathcal{G}$ contains a delegate action, which cannot be executed in the environment.) Repeating this until b is empty yields the environment joint action $\bold{u}_{env}$.

What the algorithm leaves open, then, is how the agents are grouped and placed into b in the first place.
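To make the flow concrete, here is a minimal, self-contained sketch of this selection loop in Python. It is not the thesis' code: the two-agent toy policy, the `group_policy` and `split` helpers, and the single "delegate" action are all invented for illustration. What it demonstrates is that, given the same common knowledge and the same random seed, every agent can run this function independently and reconstruct the identical joint action.

```python
import random
from typing import Dict, List, Tuple

Group = Tuple[str, ...]

def group_policy(group: Group, common_knowledge) -> Dict[object, float]:
    # Toy stand-in for pi^G(. | L^G(tau)): single agents only have env actions,
    # while the pair controller can also pick a "delegate" action that splits the group.
    if len(group) == 1:
        return {("a",): 0.5, ("b",): 0.5}
    return {("a", "a"): 0.4, ("b", "b"): 0.3, "delegate": 0.3}

def split(group: Group) -> List[Group]:
    # Simplest possible partition: break the group into singleton subgroups.
    return [(agent,) for agent in group]

def select_joint_action(agents: Group, common_knowledge, seed: int) -> Dict[str, str]:
    # Every agent can call this with the same common knowledge and seed and
    # recover the identical joint action -- that is what keeps execution decentralised.
    rng = random.Random(seed)            # shared random seed
    stack: List[Group] = [agents]        # b: groups still waiting for a decision
    joint_action: Dict[str, str] = {}
    while stack:                         # stop once b is empty
        group = stack.pop()
        dist = group_policy(group, common_knowledge)
        choice = rng.choices(list(dist), weights=list(dist.values()))[0]
        if choice == "delegate":         # u^G not in U^G_env: split and push subgroups
            stack.extend(split(group))
        else:                            # u^G in U^G_env: keep the group's joint action
            joint_action.update(zip(group, choice))
    return joint_action

print(select_joint_action(("agent_1", "agent_2"), common_knowledge=None, seed=0))
```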

Marginal policy probability

To compute the marginal of the policy, i.e. the probability of the joint action that is actually executed, the joint policy probability can be written as

$$P(\bold{u}_{env}\mid s) = \sum_{\mathrm{path}\in\mathrm{Paths}} P(\bold{u}_{env}\mid s,\mathrm{path})\,P(\mathrm{path}\mid s)$$

Paths is the set of all sequences of choices the hierarchical controller can make, and a path is one possible outcome of that selection process. As the number of agents grows, however, the number of paths increases exponentially. Moreover, this joint probability requires central state information, so computing it can no longer be regarded as decentralised.

In MACKRL, however, we only need the probability of drawing the joint action that was actually selected, so the marginal is considerably cheaper to compute. It can be obtained with the following algorithm; since all the necessary pieces have already been explained, it should be easy to follow.
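As a rough illustration of that computation, the recursion below continues the toy `group_policy` / `split` sketch from the Algorithm section (so again it is not the thesis' pseudocode, and in the actual method the probabilities are conditioned on the common knowledge and central state). It returns the probability that the hierarchy emits a given joint action: either the group controller picks exactly that joint action directly, or it delegates and each subgroup produces its own share of it. Only the probability of the joint action that was actually executed is needed, so the exponential enumeration over Paths is avoided.

```python
def marginal_prob(group: Group, joint_action: Dict[str, str], common_knowledge) -> float:
    # Marginal probability of `joint_action` restricted to `group`, summed over all
    # paths of the hierarchy that end up producing it.
    dist = group_policy(group, common_knowledge)
    u_group = tuple(joint_action[agent] for agent in group)
    prob = dist.get(u_group, 0.0)                      # path 1: pick u^G directly
    delegate_p = dist.get("delegate", 0.0)
    if delegate_p > 0.0:                               # path 2: delegate to subgroups
        sub = 1.0
        for subgroup in split(group):
            sub *= marginal_prob(subgroup, joint_action, common_knowledge)
        prob += delegate_p * sub
    return prob

# e.g. P(both agents play "a") = 0.4 + 0.3 * (0.5 * 0.5) = 0.475
print(marginal_prob(("agent_1", "agent_2"), {"agent_1": "a", "agent_2": "a"}, None))
```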

Training

ํ•™์Šต์€ actor-critic form์œผ๋กœ ์ง„ํ–‰๋˜๋Š”๋ฐ, centralized value๋ฅผ ์‚ฌ์šฉํ•˜์ง€๋งŒ MACKRL์ด joint action space์— ๋Œ€ํ•ด correlated probability๋ฅผ ๊ณ„์‚ฐํ•ด์•ผํ•˜๊ธฐ ๋•Œ๋ฌธ์— COMA์˜ baseline์˜ ์ ์šฉ์„ ํ•˜์ง„ ๋ชปํ–ˆ์Šต๋‹ˆ๋‹ค.

However, the large number of possible partitions, and of groups within each partition, makes training difficult; the next section describes how to simplify this so that it becomes easy to handle.

b์—์„œ ๊ทธ๋ฃนํ•˜๋‚˜๋ฅผ popํ•œ๋’ค, ๊ทธ ๊ทธ๋ฃน์—์„œ์˜ joint action uGu^\mathcal{G}uG๋ฅผ samplingํ•ฉ๋‹ˆ๋‹ค.

๋งŒ์•ฝ ์ด joint action uGu^\mathcal{G}uG๊ฐ€ uGโˆˆUenvGu^\mathcal{G} \in \mathcal{U}^\mathcal{G}_{env}uGโˆˆUenvGโ€‹๋ผ๋ฉด, joint action์— ์„ ํƒ๋˜๊ณ , ๊ทธ๊ฒŒ ์•„๋‹ˆ๋ผ๋ฉด, ์ชผ๊ฐœ์ ธ b๋กœ ๋“ค์–ด๊ฐ€๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋ ‡๊ฒŒํ•ด์„œ joint action uenv \bold{u}_{env}uenvโ€‹๊ฐ€ ์„ ํƒ๋ฉ๋‹ˆ๋‹ค.

uGโˆˆUenvGu^\mathcal{G} \in \mathcal{U}^\mathcal{G}_{env}uGโˆˆUenvGโ€‹๊ฐ€ ์•„๋‹ˆ๋ผ๋Š” ๋œป์€, u u u๋‚ด์— ํ™˜๊ฒฝ์—์„œ ์‹คํ–‰๋  ์ˆ˜ ์—†๋Š” delegate action์ด ํฌํ•จ๋˜์–ด ์žˆ๋‹ค๋Š” ๋œป์ž…๋‹ˆ๋‹ค.

P(uenvโˆฃs)=โˆ‘pathโˆˆPathsP(uenvโˆฃs,path)P(pathโˆฃs) P(\bold{u}_{env}|s) = \sum_{\mathrm{path \in Paths}}{P(\bold{u}_{env}|s,\mathrm{path})P(\mathrm{path}|s)}P(uenvโ€‹โˆฃs)=โˆ‘pathโˆˆPathsโ€‹P(uenvโ€‹โˆฃs,path)P(pathโˆฃs)

gradient parameter ฮธ \thetaฮธ์— ๋Œ€ํ•ด policy๋Š” ๋‹ค์Œ์˜ ์‹์œผ๋กœ update๊ฐ€ ์ง„ํ–‰๋ฉ๋‹ˆ๋‹ค.

โˆ‡ฮธJt=(rt+ฮณV(st+1,uenv,t)โˆ’V(st,uenv,tโˆ’1)โˆ‡ฮธlogโก(p(uenv,tโˆฃst)) \nabla_\theta J_t = (r_t+\gamma V(s_{t+1},\bold{u}_{env,t}) - V(s_t,\bold{u}_{env,t-1}) \nabla_\theta\log(p(\bold{u}_{env,t}|s_t))โˆ‡ฮธโ€‹Jtโ€‹=(rtโ€‹+ฮณV(st+1โ€‹,uenv,tโ€‹)โˆ’V(stโ€‹,uenv,tโˆ’1โ€‹)โˆ‡ฮธโ€‹log(p(uenv,tโ€‹โˆฃstโ€‹))