4.3 Dec-POMDP and Features

์ด chapter์—์„œ๋Š” MACKRL์—์„œ ๋ฌธ์ œ๋ฅผ ์ •์˜ํ•  ๋•Œ ๊ฐ€์ •ํ•˜๋Š” decentralized partially observable Markov decision processes(Dec-POMDP) ์—์„œ์˜ ์ •์˜๋“ค์— ๋Œ€ํ•ด ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.

Dec-POMDP์—์„œ state sโˆˆS s \in SsโˆˆS๋Š” entities eโˆˆฮพ e \in \xieโˆˆฮพ์œผ๋กœ ์ด๋ฃจ์–ด์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋ž˜์„œ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ‘œํ˜„ํ•ฉ๋‹ˆ๋‹ค. s={seโˆฃeโˆˆฮพ} s = \{ s^e |e \in \xi\}s={seโˆฃeโˆˆฮพ} ๊ทธ๋ ‡๋‹ค๋ฉด, agent๋˜ํ•œ ๊ด€์ธก๊ฐ€๋Šฅํ•œ entities๋กœ ์ƒ๊ฐํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. aโˆˆAโІฮพ a \in \mathcal{A} \subseteq \xi aโˆˆAโІฮพ. ๊ทธ ์™ธ์—๋„, ์ , ์žฅ์• ๋ฌผ, ๋ชฉํ‘œ๋“ฑ ๋ชจ๋‘ entities๋กœ ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

๊ฐ timestep๋งˆ๋‹ค, ๊ฐ agent๋Š” action์„ ๊ฒƒ์„ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ •์˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

uenvaโˆˆUenva(s) u^a_{env} \in \mathcal{U}^a_{env}(s)uenvaโ€‹โˆˆUenvaโ€‹(s)

subscript๋Š” environment์— ์ง์ ‘ ์˜ํ–ฅ์„ ๋ฏธ์นœ๋‹ค๋Š” ์˜๋ฏธ์˜ env์ž…๋‹ˆ๋‹ค. joint action์€ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ์ •์˜ํ•ฉ๋‹ˆ๋‹ค.

uenv=(uenv1,...,uenvn)โˆˆUenva(s) \bold{u}_{env} = (u^1_{env}, ... ,u^n_{env})\in \mathcal{U}^a_{env}(s)uenvโ€‹=(uenv1โ€‹,...,uenvnโ€‹)โˆˆUenvaโ€‹(s)

Given a next state $s' \in \mathcal{S}$, the transition probability is defined as follows.

$$P(s' \mid s, \mathbf{u}_{env})$$

The reward function is defined as follows.

$$r(s, \mathbf{u}_{env})$$

The agents only have partial observability: at each time-step, each agent $a$ receives an observation $o^a \in \mathcal{Z}$, a set containing the state features $s^e$ of all entities the agent can see. Whether agent $a$ can observe entity $e$ is determined by a binary mask $\mu^a(s^a, s^e) \in \{\top, \bot\}$. Since an agent can always observe itself, $\mu^a(s^a, s^a) = \top, \ \forall a \in \mathcal{A}$. The set of all entities that agent $a$ can see is defined as follows.

$$\mathcal{M}^a_s = \{ e \mid \mu^a(s^a, s^e) \} \subseteq \xi$$

Every observation is determined by a deterministic observation function $O(s, a)$, defined as follows.

$$o^a = O(s, a) = \{ s^e \mid e \in \mathcal{M}^a_s \} \in \mathcal{Z}$$
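To make the entity-based state and the mask-based observation function more concrete, here is a minimal Python sketch (not from the thesis; the `Entity` class, the distance-based `visible` mask, and the field names are hypothetical illustrations):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    """State features s^e of one entity (agents, enemies, obstacles, goals, ...)."""
    eid: str
    x: float
    y: float
    is_agent: bool = False

def visible(viewer: Entity, target: Entity, sight_range: float = 3.0) -> bool:
    """Binary mask mu^a(s^a, s^e): here a simple distance-based visibility test.
    An agent can always observe itself."""
    if viewer.eid == target.eid:
        return True
    return (viewer.x - target.x) ** 2 + (viewer.y - target.y) ** 2 <= sight_range ** 2

def observation(state: dict, agent_id: str) -> dict:
    """Deterministic observation function O(s, a): the features of all entities
    in M^a_s = {e | mu^a(s^a, s^e)}."""
    viewer = state[agent_id]
    return {e.eid: e for e in state.values() if visible(viewer, e)}

# The state s = {s^e | e in xi}, stored as a dict of entity features.
state = {
    "agent_1": Entity("agent_1", 0.0, 0.0, is_agent=True),
    "agent_2": Entity("agent_2", 1.0, 1.0, is_agent=True),
    "enemy_1": Entity("enemy_1", 5.0, 5.0),
    "goal":    Entity("goal",    2.0, 0.0),
}

o1 = observation(state, "agent_1")  # o^a: agent_1, agent_2, goal; enemy_1 is out of range
```

Because $O(s, a)$ is deterministic, agents that can see the same entities obtain identical features for them, which is relevant for the common-knowledge construction in the next section.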

agent๋“ค์˜ ๋ชฉํ‘œ๋Š” expected discount reward์˜ ์ตœ๋Œ€ํ™”์ด๊ณ , ์ด๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค.

maxโกRt=โˆ‘tโ€ฒ=tTฮณtโ€ฒโˆ’tr(stโ€ฒ,utโ€ฒ,env) \max R_t = \sum^T_{t'=t}\gamma^{t'-t}r(s_{t'},\bold{u}_{t',env})maxRtโ€‹=โˆ‘tโ€ฒ=tTโ€‹ฮณtโ€ฒโˆ’tr(stโ€ฒโ€‹,utโ€ฒ,envโ€‹)

The joint policy $\pi(\mathbf{u}_{env} \mid s)$ is factorized into independent decentralized policies, written as follows; this can be read as each agent choosing its action based only on its own action-observation history $\tau^a$.

$$\pi^a(u^a_{env} \mid \tau^a)$$

In addition, for an agent group $\mathcal{G} \subseteq \mathcal{A}$, the group's joint action space is written as follows.

$$\mathcal{U}^{\mathcal{G}}_{env}$$
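A hypothetical sketch of this factorization (the `policies` and `histories` dictionaries below are illustrative, not part of MACKRL): under the independence assumption the joint action probability is the product of the per-agent probabilities, and decentralized execution means every agent samples from its own policy given only its own history.

```python
import random

def joint_policy_prob(policies, histories, joint_action):
    """pi(u_env | s) = prod_a pi^a(u^a_env | tau^a): each factor conditions
    only on that agent's own action-observation history tau^a."""
    p = 1.0
    for agent, u in joint_action.items():
        p *= policies[agent](histories[agent])[u]
    return p

def sample_joint_action(policies, histories):
    """Decentralized execution: every agent samples independently from its own policy."""
    joint = {}
    for agent, policy in policies.items():
        probs = policy(histories[agent])              # dict: action -> probability
        actions, weights = zip(*probs.items())
        joint[agent] = random.choices(actions, weights=weights)[0]
    return joint

# Toy example with two agents and two environment actions each.
policies = {
    "agent_1": lambda tau: {"move": 0.7, "stay": 0.3},
    "agent_2": lambda tau: {"move": 0.4, "stay": 0.6},
}
histories = {"agent_1": [], "agent_2": []}
u_env = sample_joint_action(policies, histories)
print(u_env, joint_policy_prob(policies, histories, u_env))
```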

Finally, it is important to note that the definitions used for MACKRL describe a simplified Dec-POMDP: to keep the problem simple and concise, the state is assumed to decompose into entities, and the observation function is assumed to be deterministic.
