8.1 Introduction

The previous chapters dealt exclusively with fully cooperative multi-agent RL. Multi-agent problems, however, are by no means always cooperative. Fields such as hierarchical reinforcement learning, generative adversarial networks, and decentralized optimization can also be viewed as multi-agent problems, and in all of these settings, especially when the trainable components pursue different objectives, learning becomes non-stationary and unstable, or produces unintended outcomes.

The ability to maintain cooperation in diverse and complex situations has contributed greatly to the success of human societies, and it has been observed even in wartime. AI agents are expected to be deployed in future settings where they must partially cooperate within human society, and a failure of such agents to learn cooperation could be disastrous.

How mutual cooperation arises among agents that each try to maximize their own reward has also been a long-standing question. Game theory in particular has a long history of studying the learning outcomes of games that combine cooperative and competitive elements. The canonical example of cooperation and defection is the iterated prisoner's dilemma: selfish play in this game lowers the total reward of all agents, while cooperation raises it.
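To make the dilemma concrete, here is a minimal Python sketch, assuming the per-step payoff values commonly used for this game (mutual cooperation -1, mutual defection -2, a unilateral defector 0, the exploited cooperator -3); the exact payoffs are only an assumption for illustration.

```python
# A minimal sketch of the prisoner's dilemma payoffs (C = cooperate, D = defect).
# payoff[(a1, a2)] = (reward for agent 1, reward for agent 2)
payoff = {
    ("C", "C"): (-1, -1),
    ("C", "D"): (-3,  0),
    ("D", "C"): ( 0, -3),
    ("D", "D"): (-2, -2),
}

# Defection is the dominant one-shot strategy: whatever the opponent does,
# defecting gives agent 1 a strictly higher reward ...
for a2 in ("C", "D"):
    r_coop = payoff[("C", a2)][0]
    r_defect = payoff[("D", a2)][0]
    print(f"opponent plays {a2}: cooperate={r_coop}, defect={r_defect}")

# ... yet mutual defection (-2, -2) is worse for both agents than mutual
# cooperation (-1, -1), which is exactly the dilemma the iterated game repeats.
print("mutual cooperation:", payoff[("C", "C")])
print("mutual defection:  ", payoff[("D", "D")])
```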

Interestingly, even in this simple prisoner's dilemma, many MARL algorithms learn policies that converge to the worst outcome for all agents, which implies that even the current state of the art may fail to solve this simple cooperation problem. The root cause is that these methods treat the other agents as merely a part of the environment.

As a step towards reasoning about the learning behavior of other agents, this chapter proposes Learning with Opponent-Learning Awareness (LOLA). LOLA uses a learning rule that includes an additional term accounting for the impact of one agent's parameter update on the learning of the other agents. (To avoid repetition, the other agents are referred to as opponents throughout, even when the setting is not zero-sum.) It is shown that when all agents in the iterated prisoner's dilemma (IPD) apply this additional term, reciprocity and cooperation emerge. Moreover, experiments on the IPD show that, even without any extra reward, each agent is incentivized to switch from naive learning to LOLA, which indicates that all agents using LOLA is locally a stable equilibrium. LOLA agents also perform well in a round-robin tournament.
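As an informal preview of that learning rule (the formal derivation is given in 8.3), the following is a minimal sketch. Here $V^1$ and $V^2$ denote the two agents' expected returns as functions of both parameter vectors $\theta^1$ and $\theta^2$, $\delta$ is the agent's own learning rate, and $\eta$ is the assumed learning rate of the opponent's naive update.

```latex
% Naive learner: ascend the own return, treating the opponent as fixed.
\theta^1 \leftarrow \theta^1 + \delta \, \nabla_{\theta^1} V^1(\theta^1, \theta^2)

% LOLA: optimize V^1(\theta^1, \theta^2 + \Delta\theta^2) after one anticipated
% naive opponent step \Delta\theta^2 = \eta \, \nabla_{\theta^2} V^2.
% A first-order Taylor expansion yields the additional correction term:
\theta^1 \leftarrow \theta^1
    + \delta \, \nabla_{\theta^1} V^1
    + \delta\eta \, \big(\nabla_{\theta^2} V^1\big)^{\top}
      \nabla_{\theta^1} \nabla_{\theta^2} V^2
```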

LOLA is then applied to the deep MARL setting using likelihood ratio policy gradients, which makes LOLA practical for high-dimensional input and parameter spaces.
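One reason this scales is that the correction term above only needs gradients and a cross second derivative, which automatic differentiation handles directly. Below is a hypothetical PyTorch sketch, where V1 and V2 are placeholder differentiable return estimates rather than the chapter's actual likelihood-ratio estimators.

```python
# Hypothetical sketch of the first-order LOLA correction via autograd.
import torch

theta1 = torch.randn(5, requires_grad=True)   # agent 1 policy parameters
theta2 = torch.randn(5, requires_grad=True)   # opponent policy parameters
delta, eta = 0.1, 0.1                         # own / opponent learning rates

def V1(t1, t2):        # placeholder differentiable return of agent 1
    return -(t1 * t2).sum()

def V2(t1, t2):        # placeholder differentiable return of agent 2
    return -((t1 + t2) ** 2).sum()

# Naive term: gradient of agent 1's own return w.r.t. its own parameters.
grad_v1_t1 = torch.autograd.grad(V1(theta1, theta2), theta1)[0]

# LOLA term: differentiate (grad_theta2 V1)^T (eta * grad_theta2 V2) w.r.t. theta1,
# treating grad_theta2 V1 as constant; this gives eta * (grad_theta1 grad_theta2 V2) grad_theta2 V1.
grad_v1_t2 = torch.autograd.grad(V1(theta1, theta2), theta2)[0].detach()
grad_v2_t2 = torch.autograd.grad(V2(theta1, theta2), theta2, create_graph=True)[0]
correction = torch.autograd.grad((grad_v1_t2 * eta * grad_v2_t2).sum(), theta1)[0]

# LOLA update for agent 1 (agent 2 is updated symmetrically).
with torch.no_grad():
    theta1 += delta * (grad_v1_t1 + correction)
```

In the actual method, the exact value functions are replaced by sample-based likelihood-ratio estimates of these gradient and second-order terms, as described later in the methods section.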

The policy gradient version of LOLA is evaluated on the IPD and on iterated matching pennies (IMP): while standard RL approaches fail, LOLA leads the agents to cooperate with each other and to achieve good overall returns. LOLA is also extended to the case where the opponent's policy is unknown and must be inferred.

Finally, LOLA with and without opponent modeling is evaluated on a grid-world task with a larger action space that requires high-dimensional recurrent policies. In this experiment as well, LOLA agents cooperate, even when the opponent's policy is unknown and must be estimated.
