3.3 Multi-Agent StarCraft Micromanagement

This section describes the StarCraft micromanagement problem as defined in COMA, along with the state representation.

Decentralized StarCraft Micromanagement

Here, the movement and attack commands that each unit issues in order to fight the enemy are defined as the actions of a decentralized agent. The experiments cover 3v3 and 5v5 marine battles, a 5v5 zealot battle, and a battle with 2 dragoons and 3 zealots per side; in each case the opponent is StarCraft's built-in heuristic rule-based AI.
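To make the setup concrete, the four scenarios could be written down as a small configuration table. This is only an illustration; the dictionary keys and names below are mine, not anything defined in COMA or the thesis.

```python
# Hypothetical scenario configuration; the unit counts follow the text above,
# the dictionary layout is purely illustrative.
SCENARIOS = {
    "3v3_marines": {"per_team": {"marine": 3}},
    "5v5_marines": {"per_team": {"marine": 5}},
    "5v5_zealots": {"per_team": {"zealot": 5}},
    "2d3z":        {"per_team": {"dragoon": 2, "zealot": 3}},
}

for name, cfg in SCENARIOS.items():
    total = sum(cfg["per_team"].values())
    print(f"{name}: {total} controlled agents vs the built-in heuristic AI")
```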

Each agent has the following discrete action set (a code sketch enumerating it follows the list):

  • ์ด๋™(๋ฐฉํ–ฅ์œผ๋กœ ์ •์˜),

  • ๊ณต๊ฒฉ(target ๋ณ„ ์ •์˜)

  • ๋ฉˆ์ถค(stop)

  • ์•„๋ฌด๊ฒƒ๋„ ์•ˆํ•จ(noop)
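
As a minimal sketch (not the authors' implementation), the action set could be enumerated as below; the action labels and the four move directions are illustrative, and the number of attack actions simply grows with the number of enemy units.

```python
# A minimal sketch of the per-agent discrete action set:
# noop, stop, 4 move directions, and one attack action per enemy target.
def build_action_set(n_enemies):
    actions = ["noop", "stop"]                                            # do nothing / halt in place
    actions += [f"move_{d}" for d in ("north", "south", "east", "west")]  # move, defined by direction
    actions += [f"attack_enemy_{i}" for i in range(n_enemies)]            # attack, defined per target
    return actions

print(build_action_set(n_enemies=5))  # 2 + 4 + 5 = 11 discrete actions in a 5v5 scenario
```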

In the actual game, a unit ordered to attack automatically moves into attack range (along the game's built-in pathfinding routes) before attacking, which makes the problem easier. Here, however, to make decentralization more meaningful, each agent's (unit's) field of view is restricted to its attack range; the original chapter illustrates this with a figure of a dragoon's sight range.

This decentralization has the following consequences (see the action-masking sketch after the list):

  • Agents can no longer use the fully observable state.

  • An agent can attack an enemy only when that enemy is inside its attack range. (Translator's note: the original seems to state this the other way around, but this reading appears to be the correct one.) Consequently, built-in macro-actions such as move-into-range-then-attack are not used.

  • An agent cannot tell whether an enemy has died or has simply left its field of view. This raises the question of how invalid choices in the action space should be handled; they are treated as noop.
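
Putting the last two points together, one plausible way to handle them, shown here only as a sketch and not as the implementation used in the thesis, is to mask out attack actions whose target is dead or outside the agent's field of view, so that a greedy agent falls back on noop when nothing else is valid:

```python
import numpy as np

def greedy_action_with_masking(q_values, enemy_available, n_fixed=6):
    """q_values: scores for [noop, stop, 4 moves, attack_0, ..., attack_{E-1}].
    enemy_available: bools, True if that enemy is alive AND inside this agent's
    sight (= attack) range. Attacks on dead or unseen enemies are masked out."""
    mask = np.concatenate([np.ones(n_fixed, dtype=bool),
                           np.asarray(enemy_available, dtype=bool)])
    masked_q = np.where(mask, q_values, -np.inf)  # invalid actions can never be selected
    return int(np.argmax(masked_q))

# example: the last two of five enemies are dead or out of the agent's field of view
q = np.random.randn(6 + 5)
print(greedy_action_with_masking(q, [True, True, True, False, False]))
```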

Because the environment becomes harder in this way, the experiments use only a small number of units, and they produce the following win rates. The heuristic policy used for comparison simply moves the agents forward and focuses fire to kill one enemy unit at a time, which turns out to be quite a sensible rule: in the 5v5 marine battle it achieves a 98% win rate with full observation, but this drops to 66% once observation is made local. To do well in this task, agents therefore need to position themselves well relative to one another, focus fire, and distinguish whether an enemy has died or has merely left their field of view.

Every agent on a team receives the same global reward: plus the damage dealt to the opponent, minus half the damage received, +10 for each enemy unit killed, and, when the battle is won, +200 together with the team's total remaining energy.
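
Written out as code, that reward description amounts to the following; the function and argument names are mine, and the handling of the remaining-energy term follows my reading of the sentence above rather than the paper's exact formula.

```python
def global_team_reward(damage_dealt, damage_taken, enemies_killed, won, team_energy_left=0.0):
    """Shared global reward, transcribed from the description above:
    + damage dealt, - half of damage received, +10 per enemy unit killed,
    and a terminal bonus of +200 together with the team's remaining energy on a win."""
    reward = damage_dealt - 0.5 * damage_taken + 10.0 * enemies_killed
    if won:
        reward += 200.0 + team_energy_left
    return reward

# a terminal step: 25 damage dealt, 8 received, one kill, battle won with 37 energy left
print(global_team_reward(25.0, 8.0, 1, won=True, team_energy_left=37.0))  # -> 268.0
```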

State Features

This part describes the input features received by the actor and the critic. The agent and the critic receive different information. The agent's local observation contains, for every unit within its sight radius, the distance to that unit, its relative x and y position, its unit type, and its shield, all normalized. The agent receives no information at all about the unit it is currently targeting.

The global state received by the critic contains every unit's distance from the centre of the map, together with every unit's energy and attack cooldown. The critic also takes the agents' local observations as input; these add no genuinely new information, but they express things such as the relative distances between agents in a different form than the global state does.
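
A rough sketch of both feature vectors follows, assuming hypothetical per-unit dictionaries (x, y, type_id, shield, energy, cooldown, ...) rather than any real StarCraft interface, and an arbitrary normalization scale:

```python
import numpy as np

def local_observation(agent, units_in_sight, map_scale=64.0):
    """Actor input (illustrative field names): for every unit inside the agent's
    sight radius, its distance, relative (x, y), unit type id, and shield, all
    normalized. Nothing about the agent's current attack target is included."""
    feats = []
    for u in units_in_sight:
        dx = (u["x"] - agent["x"]) / map_scale
        dy = (u["y"] - agent["y"]) / map_scale
        feats += [float(np.hypot(dx, dy)), dx, dy, u["type_id"], u["shield"] / u["max_shield"]]
    return np.asarray(feats, dtype=np.float32)

def global_state(all_units, map_center=(32.0, 32.0), map_scale=64.0):
    """Critic input (illustrative): every unit's distance from the map centre,
    plus its energy and attack cooldown; in the full critic input the agents'
    local observations are concatenated onto this vector."""
    feats = []
    for u in all_units:
        dist = np.hypot(u["x"] - map_center[0], u["y"] - map_center[1]) / map_scale
        feats += [float(dist), u["energy"] / u["max_energy"], u["cooldown"] / u["max_cooldown"]]
    return np.asarray(feats, dtype=np.float32)

# tiny usage example with made-up unit attributes
me = {"x": 10.0, "y": 12.0}
foe = {"x": 14.0, "y": 9.0, "type_id": 1, "shield": 20.0, "max_shield": 60.0,
       "energy": 35.0, "max_energy": 40.0, "cooldown": 5.0, "max_cooldown": 22.0}
print(local_observation(me, [foe]), global_state([foe]))
```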