Reinforcement and mean-field games in algorithmic trading - Sebastian Jaimungal

Prof. Sebastian Jaimungal, University of Toronto, will give a talk at the Alan Turing Institute on two areas of his research in algorithmic trading: reinforcement learning and mean-field games with differing beliefs.

Prof. Jaimungal is the Director of the professional Master of Financial Insurance program in the Department of Statistical Sciences and teaches in the Mathematical Finance Program at the University of Toronto, as well as in the PhD and MSc programs in the Department of Statistical Sciences. He is also the Chair of the SIAM Activity Group on Financial Mathematics and Engineering.

About the event
Part 1: Reinforcement Learning in Algorithmic Trading

Reinforcement learning aims to solve certain stochastic control problems without making explicit assumptions about the dynamics of the environment or about the effect an agent’s actions have on those dynamics. In this talk, I will provide an overview of two approaches to algorithmic trading: (i) double deep Q-learning, and (ii) reinforced deep Kalman filters. Deep Q-learning approximates the action-value function with a neural network and aims to solve the Bellman equation by acting in the environment and updating the network parameters from the observed rewards. Reinforced deep Kalman filters, on the other hand, take a batch reinforcement learning perspective: they maximize rewards directly by learning a latent model of the environment and updating that model as new data arrives and the agent acts. Some sample results on real data will be shown.
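As a rough illustration of the double deep Q-learning step mentioned above, the sketch below computes the double DQN loss in PyTorch. The network architecture, hyperparameters, and replay-buffer interface are assumptions made for the example, not details from the talk.

```python
# Minimal sketch of the double deep Q-learning update (illustrative
# assumptions throughout; not the speaker's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNet(nn.Module):
    """Feed-forward approximation of the action-value function Q(s, .)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def double_dqn_loss(online: QNet, target: QNet, batch, gamma: float = 0.99):
    """Double DQN: the online network selects the next action, the
    slowly updated target network evaluates it. This decoupling reduces
    the overestimation bias of plain Q-learning."""
    # s, s_next: float tensors (B, state_dim); a: int64 (B,);
    # r, done: float tensors (B,), with done = 1.0 at episode end.
    s, a, r, s_next, done = batch  # sampled from a replay buffer
    with torch.no_grad():
        next_a = online(s_next).argmax(dim=1, keepdim=True)   # selection
        next_q = target(s_next).gather(1, next_a).squeeze(1)  # evaluation
        y = r + gamma * (1.0 - done) * next_q                 # Bellman target
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return F.mse_loss(q, y)
```

In practice the target network's weights are refreshed from the online network periodically (a hard copy or a soft/Polyak update), and for a trading agent the state would typically encode inventory, remaining time, and price or order-book features.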

Part 2: Mean-Field Games with Differing Beliefs for Algorithmic Trading

Even when confronted with the same data, agents often disagree on a model of the real world. Here, we address how interacting heterogeneous agents, who disagree on the model the real world follows, optimize their trading actions. The market has latent factors that drive prices, and agents account for the permanent impact their trading has on prices. This leads to a large stochastic game in which each agent's performance criterion is computed under a different probability measure.
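To make the last sentence concrete, one generic form such a criterion takes in the algorithmic-trading literature is sketched below; the notation is an assumption for illustration, not necessarily the one used in the talk.

```latex
% Illustrative performance criterion for agent i (assumed notation):
% the expectation is taken under the agent's own subjective measure
% \mathbb{P}^i, which encodes its beliefs about the latent factors
% driving the price S_t.
\[
  H^i(\nu^i) \;=\; \mathbb{E}^{\mathbb{P}^i}\!\left[
      X^i_T \;+\; q^i_T\,\bigl(S_T - \alpha\, q^i_T\bigr)
      \;-\; \phi \int_0^T \bigl(q^i_t\bigr)^2\,\mathrm{d}t
  \right],
\]
% where \nu^i_t is agent i's trading speed, X^i_t its cash, q^i_t its
% inventory, \alpha a terminal liquidation penalty, and \phi a running
% inventory penalty. Permanent impact enters through the price: the
% population's aggregate trading rate shifts the drift of S_t, which is
% what couples the agents into a mean-field game.
```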
