5.1 Introduction

Chapters 3 and 4 covered on-policy MARL. However, one of the problems RL itself inevitably has to address is sample efficiency. On-policy methods are bound to be less sample-efficient than off-policy methods, which can train on the same data multiple times and learn from data generated by several different policies. This inevitably drove research toward exploiting off-policy learning in MARL.

off-policy์˜ ๋Œ€ํ‘œ์ ์ธ algorithm์ธ DQN์˜ MARL์— ๋Œ€ํ•œ ์ ์šฉ์€ IQL์ž…๋‹ˆ๋‹ค. ์ด ๋•Œ, ํ™˜๊ฒฝ์— ์กด์žฌํ•˜๋Š” ๋‹ค๋ฅธ agent๋“ค์„ ๋ชจ๋‘ ์ •์ ์ธ ์กด์žฌ๋กœ ์ทจ๊ธ‰ํ•ด ํ•ด๊ฒฐ์„ ํ•˜๊ธฐ ๋•Œ๋ฌธ์—, ์ˆ˜๋ ด์„ ๋ณด์žฅํ•  ์ˆ˜๊ฐ€ ์—†์Šต๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ๋‹คํ–‰ํžˆ๋„ ์‹ค์ „์ ์œผ๋กœ ๋ช‡๊ฐœ์˜ ์‹คํ—˜์— ๋Œ€ํ•ด์„œ๋Š” IQL์ด ๊ฝค ๊ดœ์ฐฎ์€ ์„ฑ๋Šฅ์„ ๋ณด์ž„์„ ๋ณด์•˜์Šต๋‹ˆ๋‹ค.

RL์—์„œ์˜ ํฐ ๋ฐœ์ „์„ ์ด๋Œ์—ˆ๋˜ ์š”์†Œ์ค‘ ํ•˜๋‚˜์— Replay memory๋ฅผ ๋นผ๋†“์„ ์ˆ˜ ์—†๋Š”๋ฐ, ์ด๋Š” data๋ฅผ iid๋กœ ๋งŒ๋“ค์–ด Neural Network์˜ ํ•™์Šต์•ˆ์ •์„ฑ์— ๋„์›€์„ ์ค„ ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ efficiency๋„ ๋†’์—ฌ์ค๋‹ˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋Ÿฌํ•œ Replay Memory๋„ IQL์— ์ ์šฉํ•˜๊ธฐ์—๋Š” ๋ฌธ์ œ์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. MARL์ƒํ™ฉ์—์„œ์˜ Replay Memory๋‚ด์˜ data๋“ค์€ ํ˜„์žฌํ™˜๊ฒฝ์˜ dynamic์„ ํ‘œํ˜„ํ•˜๊ธฐ ์–ด๋ ต์Šต๋‹ˆ๋‹ค. ๋‹ค๋ฅธ agent๋“ค์— ์˜ํ•ด non-stationaryํ•˜๊ธฐ ๋•Œ๋ฌธ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ IQL์„ ๋ณด๋ฉด ์ ์ง„์ ์œผ๋กœ ์–ด๋–ป๊ฒŒ๋“  ๋ฐฐ์šฐ๋Š” ๊ฒฝํ–ฅ์€ ์žˆ์œผ๋‚˜, non-stationaryํ•œ data๋ฅผ ๊ณ„์† samplingํ•ด ๊ทธ๋ƒฅ ์—…๋ฐ์ดํŠธํ•˜๋Š” ํ–‰์œ„๋Š” ๊ฒฐ๊ณผ์ ์œผ๋กœ agent์˜ ํ•™์Šต์— ํฐ ์žฅ์• ๋ฌผ์ด ์•„๋‹ ์ˆ˜ ์—†์Šต๋‹ˆ๋‹ค.

์ด๋Ÿฌํ•œ ๋ฌธ์ œ์ ์„ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ์ด์ „์—” Replay Memory ํฌ๊ธฐ๋ฅผ ์ž‘๊ฒŒ ์œ ์ง€ํ•ด ์ตœ๊ทผ์˜ ๋ฐ์ดํ„ฐ๋งŒ ์‚ฌ์šฉํ•˜๋Š”๋“ฑ sample efficiency๋ฅผ ๋‚ฎ์ถ”๊ณ , ๊ทผ๋ณธ์ ์œผ๋กœ MARL์˜ stability๋ฅผ ์œ ์ง€ํ•˜๋ฉฐ ํ•™์Šตํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด์„œ๋Š” ์„ค๋ช…ํ•˜์ง€ ๋ชปํ–ˆ์Šต๋‹ˆ๋‹ค. ๊ฒฐ๊ตญ IQL์—์„œ์˜ Replay Memory ์ ์šฉ์„ ์–ด๋–ป๊ฒŒ ์‹œํ‚ฌ์ง€๊ฐ€ ๋˜ ํ•ด๊ฒฐํ•ด์•ผํ•  ์–ด๋ ค์šด ๋ฌธ์ œ๋กœ ๋‚จ๊ฒŒ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

์ด chapter์—์„œ๋Š” Replay Memory๋ฅผ MARL์— ์ ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋‘๊ฐ€์ง€ ํšจ๊ณผ์ ์ธ ๋ฐฉ๋ฒ•์„ ์ œ์‹œํ•ฉ๋‹ˆ๋‹ค.

  • First, treat the data inside the replay memory as off-environment data. Where off-policy learning applies importance sampling because the distribution induced by the behaviour policy differs from that of the current policy, here the distribution that shifts, from each agent's perspective, is that of the other agents' joint actions, so importance sampling is applied over it (see the weighted loss sketched after this list).

  • Second, an approach inspired by Hyper Q-learning is introduced, in which each agent avoids non-stationarity by observing the other agents and estimating their policies. The downside is that the Q-function's input space grows too large to handle as these policy estimates grow; here, a low-dimensional fingerprint resolves that earlier limitation (a minimal sketch follows below).
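
For the first method, the corrected loss weights each replayed transition by how likely the other agents' joint action is under their current policies versus their policies at collection time; roughly, for agent $a$ and a minibatch of $b$ transitions,

$$
\mathcal{L}(\theta) = \sum_{i=1}^{b} \frac{\pi^{-a}_{t_r}\left(\mathbf{u}^{-a}_i \mid s_i\right)}{\pi^{-a}_{t_i}\left(\mathbf{u}^{-a}_i \mid s_i\right)} \left( y_i^{DQN} - Q(s_i, u_i; \theta) \right)^{2}
$$

where $t_i$ is the time at which transition $i$ was collected, $t_r$ is the time of replay, and $\pi^{-a}$ denotes the joint policy of all agents other than $a$.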

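For the second method, here is a minimal sketch of the fingerprint idea, assuming the fingerprint consists of the training iteration and the exploration rate $\epsilon$; the normalisation scheme is an illustrative assumption, not the paper's exact recipe:

```python
import numpy as np

def augment_observation(obs, train_iter, epsilon, max_iters):
    # Low-dimensional fingerprint: where we are in training (train_iter)
    # and how exploratory the agents currently are (epsilon). Together
    # these roughly disambiguate WHEN a replayed transition was collected,
    # and hence what the other agents' policies looked like at that time.
    fingerprint = np.array([train_iter / max_iters, epsilon], dtype=np.float32)
    # The Q-network then conditions on (obs, fingerprint) rather than obs
    # alone, letting it track the other agents' policy drift.
    return np.concatenate([obs.astype(np.float32), fingerprint])
```
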
Finally, these methods are shown to perform successfully in the StarCraft unit micromanagement environment.
