Self-Improving Generative Adversarial Reinforcement Learning

Liu, Yang, Zeng, Yifeng, Chen, Yingke, Tang, Jing and Pan, Yinghui (2019) Self-Improving Generative Adversarial Reinforcement Learning. In: AAMAS 2019: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), Richland, pp. 52-60. ISBN 9781450363099

AAMAS2019.pdf - Accepted Version

The lack of data efficiency and stability is one of the main challenges in end-to-end model-free reinforcement learning (RL) methods. Recent research addresses this problem by resorting to supervised learning methods that utilize human expert demonstrations, e.g. imitation learning. In this paper we present a novel framework that builds a self-improving process upon a policy improvement operator, which is used as a black box and therefore admits multiple implementation options for various applications. An agent is trained to iteratively imitate behaviors generated by the operator, so it can learn by itself without domain knowledge from humans. We employ generative adversarial networks (GANs) to implement the imitation module in the new framework. We evaluate the framework's performance over multiple application domains and provide comparative results in support.
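The self-improving loop described in the abstract can be sketched in a toy setting. The sketch below is an illustrative assumption, not the paper's implementation: it uses a tabular chain MDP, takes one-step greedy lookahead as the black-box policy improvement operator, and replaces the paper's GAN-based imitation module with direct behavioral cloning (copying the operator's output). The MDP, function names, and constants are all hypothetical.

```python
# Toy chain MDP: states 0..4, actions 0 (left) / 1 (right); reward 1 for reaching state 4.
N_STATES, N_ACTIONS, GAMMA = 5, 2, 0.9

def step(s, a):
    # Deterministic transition with walls at both ends of the chain.
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, 1.0 if s2 == N_STATES - 1 else 0.0

def evaluate(policy, iters=100):
    # Policy evaluation: iterate the Bellman backup for the current policy.
    V = [0.0] * N_STATES
    for _ in range(iters):
        V = [step(s, policy[s])[1] + GAMMA * V[step(s, policy[s])[0]]
             for s in range(N_STATES)]
    return V

def improve(policy):
    # Black-box policy improvement operator: one-step greedy lookahead
    # (one possible instantiation; the framework treats this as pluggable).
    V = evaluate(policy)
    return [max(range(N_ACTIONS),
                key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
            for s in range(N_STATES)]

# Self-improving loop: each round, the operator produces improved behavior
# and the agent imitates it (cloning stands in for the GAN imitation module).
agent = [0] * N_STATES          # initial policy: always move left
for _ in range(5):
    target = improve(agent)     # demonstrations from the improvement operator
    agent = list(target)        # imitation step

print(agent)  # → [1, 1, 1, 1, 1]: the agent learns to move right toward the reward
```

With a tabular operator this loop reduces to classic policy iteration; the framework's point is that the imitation step lets a function-approximation agent learn from the operator's demonstrations without human-provided data.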

Item Type: Book Section
Uncontrolled Keywords: Reinforcement Learning, Generative Adversarial Nets, Imitation Learning, Policy Iteration, Policy Distillation
Subjects: G400 Computer Science
G600 Software Engineering
Department: Faculties > Engineering and Environment > Computer and Information Sciences
Faculties > Business and Law > Newcastle Business School
Related URLs:
Depositing User: John Coen
Date Deposited: 07 Jul 2020 12:34
Last Modified: 31 Jul 2021 13:15
