Deep Multi-Agent Reinforcement Learning
  • Deep Multi-Agent Reinforcement Learning
  • Abstract & Contents
    • Abstract
  • 1. Introduction
    • 1. INTRODUCTION
      • 1.1 The Industrial Revolution, Cognition, and Computers
      • 1.2 Deep Multi-Agent Reinforcement Learning
      • 1.3 Overall Structure
  • 2. Background
    • 2. BACKGROUND
      • 2.1 Reinforcement Learning
      • 2.2 Multi-Agent Settings
      • 2.3 Centralized vs Decentralized Control
      • 2.4 Cooperative, Zero-sum, and General-Sum
      • 2.5 Partial Observability
      • 2.6 Centralized Training, Decentralized Execution
      • 2.7 Value Functions
      • 2.8 Nash Equilibria
      • 2.9 Deep Learning for MARL
      • 2.10 Q-Learning and DQN
      • 2.11 REINFORCE and Actor-Critic
  • I Learning to Collaborate
    • 3. Counterfactual Multi-Agent Policy Gradients
      • 3.1 Introduction
      • 3.2 Related Work
      • 3.3 Multi-Agent StarCraft Micromanagement
      • 3.4 Methods
        • 3.4.1 Independent Actor-Critic
        • 3.4.2 Counterfactual Multi-Agent Policy Gradients
        • 3.4.2.1 Baseline Lemma
        • 3.4.2.2 COMA Algorithm
      • 3.5 Results
      • 3.6 Conclusions & Future Work
    • 4. Multi-Agent Common Knowledge Reinforcement Learning
      • 4.1 Introduction
      • 4.2 Related Work
      • 4.3 Dec-POMDP and Features
      • 4.4 Common Knowledge
      • 4.5 Multi-Agent Common Knowledge Reinforcement Learning
      • 4.6 Pairwise MACKRL
      • 4.7 Experiments and Results
      • 4.8 Conclusion & Future Work
    • 5. Stabilizing Experience Replay
      • 5.1 Introduction
      • 5.2 Related Work
      • 5.3 Methods
        • 5.3.1 Multi-Agent Importance Sampling
        • 5.3.2 Multi-Agent Fingerprints
      • 5.4 Experiments
        • 5.4.1 Architecture
      • 5.5 Results
        • 5.5.1 Importance Sampling
        • 5.5.2 Fingerprints
        • 5.5.3 Informative Trajectories
      • 5.6 Conclusion & Future Work
  • II Learning to Communicate
    • 6. Learning to Communicate with Deep Multi-Agent Reinforcement Learning
      • 6.1 Introduction
      • 6.2 Related Work
      • 6.3 Setting
      • 6.4 Methods
        • 6.4.1 Reinforced Inter-Agent Learning
        • 6.4.2 Differentiable Inter-Agent Learning
      • 6.5 DIAL Details
      • 6.6 Experiments
        • 6.6.1 Model Architecture
        • 6.6.2 Switch Riddle
        • 6.6.3 MNIST Games
        • 6.6.4 Effect of Channel Noise
      • 6.7 Conclusion & Future Work
    • 7. Bayesian Action Decoder
      • 7.1 Introduction
      • 7.2 Setting
      • 7.3 Method
        • 7.3.1 Public Belief
        • 7.3.2 Public Belief MDP
        • 7.3.3 Sampling Deterministic Partial Policies
        • 7.3.4 Factorized Belief Updates
        • 7.3.5 Self-Consistent Beliefs
      • 7.4 Experiments and Results
        • 7.4.1 Matrix Game
        • 7.4.2 Hanabi
        • 7.4.3 Observations and Actions
        • 7.4.4 Beliefs in Hanabi
        • 7.4.5 Architecture Details for Baselines and Method
        • 7.4.6 Hyperparameters
        • 7.4.7 Results on Hanabi
      • 7.5 Related Work
        • 7.5.1 Learning to Communicate
        • 7.5.2 Research on Hanabi
        • 7.5.3 Belief State Methods
      • 7.6 Conclusion & Future Work
  • III Learning to Reciprocate
    • 8. Learning with Opponent-Learning Awareness
      • 8.1 Introduction
      • 8.2 Related Work
      • 8.3 Methods
        • 8.3.1 Naive Learner
        • 8.3.2 Learning with Opponent Learning Awareness
        • 8.3.3 Learning via Policy Gradient
        • 8.3.4 LOLA with Opponent Modeling
        • 8.3.5 Higher-Order LOLA
      • 8.4 Experimental Setup
        • 8.4.1 Iterated Games
        • 8.4.2 Coin Game
        • 8.4.3 Training Details
      • 8.5 Results
        • 8.5.1 Iterated Games
        • 8.5.2 Coin Game
        • 8.5.3 Exploitability of LOLA
      • 8.6 Conclusion & Future Work
    • 9. DiCE: The Infinitely Differentiable Monte Carlo Estimator
      • 9.1 Introduction
      • 9.2 Background
        • 9.2.1 Stochastic Computation Graphs
        • 9.2.2 Surrogate Losses
      • 9.3 Higher Order Gradients
        • 9.3.1 Higher Order Gradient Estimators
        • 9.3.2 Higher Order Surrogate Losses
        • 9.3.3 Simple Failing Example
      • 9.4 Correct Gradient Estimators with DiCE
        • 9.4.1 Implementation of DiCE
        • 9.4.2 Causality
        • 9.4.3 First Order Variance Reduction
        • 9.4.4 Hessian-Vector Product
      • 9.5 Case Studies
        • 9.5.1 Empirical Verification
        • 9.5.2 DiCE for Multi-Agent RL
      • 9.6 Related Work
      • 9.7 Conclusion & Future Work
  • Reference
    • Reference
  • After
    • Supplement
    • Translator's Afterword

1.1 The Industrial Revolution, Cognition, and Computers

์ฆ๊ธฐ ๊ธฐ๊ด€๊ณผ ์‚ฐ์—… ํ˜๋ช…์€ ๋น ๋ฅด๊ฒŒ ์ธ๊ฐ„์„ ๊ธฐ๊ณ„๋กœ ๋Œ€์ฒดํ–ˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์ธ๋ฅ˜ ์‚ฌํšŒ์™€ ์‚ฐ์—…์„ ํฌ๊ฒŒ ๋ฐ”๊พธ์—ˆ๊ณ , ๋งŽ์€ ์ด๋“ค์„ ์ƒ์‚ฐ์ง์—์„œ ์„œ๋น„์Šค์ง์œผ๋กœ ๋ฐ”๋€Œ์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์œก์ฒด์ ์ธ ํž˜๋ณด๋‹ค๋Š” ์ธ์ง€ ๋Šฅ๋ ฅ์ด ์ค‘์š”ํ•˜๊ฒŒ ๋œ ๊ฒƒ์ž…๋‹ˆ๋‹ค. ์ฆ๊ธฐ ๊ธฐ๊ด€, ํ™”์„ ์—ฐ๋ฃŒ ๋ฐ ๊ธฐํƒ€ ์—๋„ˆ์ง€์›์ด ๋ฌผ๋ฆฌ์  ๋…ธ๋™์„ ์œ„ํ•ด ๋‹ฌ์„ฑํ•œ ๊ฒƒ์ฒ˜๋Ÿผ ์ปดํ“จํŒ… ๊ธฐ์ˆ ์€ ์ธ๊ฐ„์˜ ์ธ์‹์„ ํ•„์š”๋กœ ํ•˜๋Š” ์ž‘์—…์— ๋Œ€ํ•ด ๋‹ฌ์„ฑํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๊ทธ๋Ÿผ์—๋„ ์ปดํ“จํ„ฐ์™€ ์‚ฌ์ด์—๋Š” ๊ต‰์žฅํ•œ ์ฐจ์ด๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด, ์ปดํ“จํ„ฐ์˜ ์—ฐ์‚ฐ์€ ์‹ค๋ฆฌ์ฝ˜์•ˆ์˜ deterministic binary gate๋ฅผ ์ด์šฉํ•ด ์—ฐ์‚ฐ์„ ํ•˜์ง€๋งŒ, ๋‡Œ๋Š” noisy biological neurones ์˜ probabilistic firing patterns๋ฅผ ํ†ตํ•œ ์—ฐ์‚ฐ์„ ํ•ฉ๋‹ˆ๋‹ค.

Historically, AI focused for a time on building expert systems, an effort that proved that tasks which may seem trivial to a human can be very difficult for a machine.

๊ทธ๋Ÿฌ๋˜ ์ค‘, Machine Learning(ML)์ด ๊ธฐ๊ณ„์—๊ฒŒ ์ธ์ง€์ ์ธ ๋Šฅ๋ ฅ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋Š” ๋Œ€์•ˆ์ด ๋˜์—ˆ๋Š”๋ฐ, ์ธ๊ฐ„์ด ๋”์ด์ƒ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•œ ๋ฃฐ์„ ์ง€์ •ํ•ด์ฃผ์ง€ ์•Š๊ณ ๋„, ํ•™์Šตํ•  ์ˆ˜ ์žˆ๋Š” ๋ฃฐ๊ณผ ์ถฉ๋ถ„ํ•œ ๋ฐ์ดํ„ฐ๋งŒ์œผ๋กœ ์ด๋ฅผ ๊ฐ€๋Šฅ์ผ€ ํ•˜์˜€์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์ง€๋‚œ 30์—ฌ๋…„๊ฐ„ ๊ต‰์žฅํžˆ ๋งŽ์€ ๋ถ„์•ผ์— ์ ์šฉ๋˜์—ˆ์œผ๋ฉฐ, ๊ฐ€์žฅ ์ตœ๊ทผ์—” Deep Learning ๋ถ„์•ผ์—์„œ ํฐ ์„ฑ๊ณต๋“ค์„ ์ด๋ฃจ์—ˆ์Šต๋‹ˆ๋‹ค. ์ด๋Š” ์ฃผ๋กœ ๋งŽ์€ ์–‘์˜ ๋ฐ์ดํ„ฐ์™€ ๊ทธ์— ๋งž๋Š” ๊ฒฐ๊ณผ๊ฐ’์„ ๊ฐ€์ง€๊ณ  ์žˆ๋Š” Supervised Learning(SL)์ด๋ผ๊ณ  ๋ถˆ๋ฆฌ์šฐ๋Š”๋ฐ, SL์˜ ์ค‘์š”ํ•œ ๊ฐ€์ •์ค‘์— ํ•˜๋‚˜๋Š” ๋ฐ์ดํ„ฐ๋ผ๋ฆฌ์˜ independentํ•จ ์ž…๋‹ˆ๋‹ค. ๊ทธ๋ ‡๊ธฐ์— ์ƒ๊ฐ๋ณด๋‹ค ๋งŽ์€ ํ˜„์‹ค ๋ฌธ์ œ๋ฅผ ํ‘ธ๋Š”๋ฐ ์ œ์•ฝ์ด ๋ฉ๋‹ˆ๋‹ค. ์–ด๋–ค ๊ฒฐ์ •์— ๋”ฐ๋ผ ๋ฏธ๋ž˜์˜ ๋ฐ์ดํ„ฐ๊ฐ€ ๋ฐ”๋€๋‹ค๋ฉด, SL์˜ ๊ฐ€์ •์„ ์œ„๋ฐ˜ํ•˜๋Š”๋ฐ, ์˜ˆ๋ฅผ ๋“ค๋ฉด ์ž์œจ์ฃผํ–‰ ์ž๋™์ฐจ๋Š” ์–ด๋А ๋ฐฉํ–ฅ์œผ๋กœ ๊ฐ€๋А๋ƒ์— ๋”ฐ๋ผ ์‹œ์‹œ๊ฐ๊ฐ ๋“ค์–ด์˜ค๋Š” ๋ฐ์ดํ„ฐ๊ฐ€ ๋‹ค๋ฅผ ๊ฒƒ ์ž…๋‹ˆ๋‹ค. ๋˜ํ•œ ์ถ”์ฒœ์‹œ์Šคํ…œ์—์„œ๋„ ์œ ์ €๊ฐ€ ์ถ”์ฒœ๋ฐ›์€ ์ƒํ™ฉ์—์„œ ๊ทธ ์ถ”์ฒœ์„ ์–ด๋–ป๊ฒŒ ์ด์šฉํ•˜๋А๋ƒ์— ๋”ฐ๋ผ ๊ณ„์†ํ•ด์„œ ์ƒํ™ฉ์€ ๋ณ€ํ™”ํ•  ๊ฒƒ์ž…๋‹ˆ๋‹ค.

Reinforcement Learning (RL) emerged to handle exactly these situations. In RL the acting entity is usually called an agent, and we can train this agent to behave in a desirable way as it interacts with an environment. At any given point in time the agent receives the current situation (state) from the environment, takes an action, and receives a corresponding reward. The action affects not only the immediate reward but can also change the distribution over the next state. The agent learns this not from prior knowledge or hand-written rules but by interacting directly with the environment.
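To make this interaction loop concrete, here is a minimal sketch of the state-action-reward cycle described above. It assumes the Gymnasium API and its CartPole-v1 environment, with a uniformly random policy standing in for a learned agent; these specifics are illustrative and not part of the original text.

```python
# Minimal sketch of the RL interaction loop (assumed: Gymnasium API,
# CartPole-v1 environment, random policy instead of a learned agent).
import gymnasium as gym

env = gym.make("CartPole-v1")
state, _ = env.reset(seed=0)

episode_return = 0.0
done = False
while not done:
    # The agent observes the current state and picks an action
    # (here: uniformly at random rather than from a learned policy).
    action = env.action_space.sample()

    # The environment returns the immediate reward and the next state;
    # both depend on the chosen action.
    state, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated

print(f"Episode return: {episode_return}")
env.close()
```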
