CoG 2021: Adversarial Reinforcement Learning for Procedural Content Generation


This research paper was presented at the IEEE Conference on Games 2021.

Download the paper here: https://www.ea.com/seed/news/cog2021-...

We present an approach for procedural content generation (PCG) and for improving generalization in reinforcement learning (RL) agents by using adversarial deep RL.

Training RL agents to generalize to novel environments is a notoriously difficult task. One popular approach is to procedurally generate diverse environments to increase the generalizability of the trained agents. Here, we deploy an adversarial model with one PCG RL agent (called the Generator) and one solving RL agent (called the Solver).

The benefit is two-fold: first, the Solver achieves better generalization by training on the challenges produced by the Generator. Second, the trained Generator can be used to create novel environments that, together with the Solver, are demonstrably solvable. The Generator receives a reward signal based on the Solver's performance, which encourages environment designs that are challenging but not impossible.
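The paper does not spell out the reward formula in this summary, but the "challenging but not impossible" objective can be sketched as a reward that peaks at an intermediate Solver success rate. The function name and the specific inverted-U shape below are illustrative assumptions, not the paper's actual reward:

```python
# Hypothetical sketch of the adversarial reward idea (NOT the paper's exact
# formula): the Generator is rewarded when the Solver finds the generated
# environment challenging but still solvable, and penalized at the extremes.

def generator_reward(solver_success_rate: float) -> float:
    """Reward for the Generator, based on the Solver's performance.

    solver_success_rate: fraction of evaluation episodes the Solver
    completed on the generated environment, in [0, 1].
    """
    if not 0.0 <= solver_success_rate <= 1.0:
        raise ValueError("success rate must be in [0, 1]")
    # Inverted-U shape: reward is 0 at both extremes (trivially easy or
    # impossible levels) and maximal at 0.5, i.e. the hardest levels the
    # Solver can still beat about half the time.
    return 4.0 * solver_success_rate * (1.0 - solver_success_rate)
```

Under this sketch, a level the Solver always beats (rate 1.0) or never beats (rate 0.0) earns the Generator nothing, while a level solved half the time earns the maximum reward of 1.0.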

----------

SEED is a pioneering group within Electronic Arts, combining creativity with applied research. We explore, build, and help determine the future of interactive entertainment.

Learn more about SEED at https://seed.ea.com

Find us on:

Twitter:   / seed  
LinkedIn:   / seed-ea  
