Bayesian Reinforcement Learning for Problems with State Uncertainty ---- Frans Oliehoek
- https://wsc.project.cwi.nl/ml-reading-group/events/bayesian-reinforcement-learning-for-problems-with-state-uncertainty-frans-oliehoek
- 2018-11-01T11:00:00+01:00
- 2018-11-01T12:00:00+01:00
- We are happy to have Frans Oliehoek of Delft University of Technology speak about Bayesian methods for reinforcement learning.
- When Nov 01, 2018 from 11:00 AM to 12:00 PM (Europe/Amsterdam / UTC+0100)
- Where L016
Sequential decision making under uncertainty is a challenging problem, especially when the decision maker, or agent, has uncertainty about what the true 'state' of the environment is. That is, in many applications the problem is 'partially observable': there are important pieces of information that are fundamentally hidden from the agent. Moreover, the problem gets even more complex when no accurate model of the environment is available. In such cases, the agent will need to update its belief over the environment, i.e., learn, during execution.
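To make the belief update mentioned above concrete: in a discrete POMDP the agent maintains a probability distribution over hidden states and revises it by Bayes' rule after each action and observation. The sketch below is a minimal illustration only, not code from the talk; the array layouts, function name, and parameters are assumptions chosen for readability.

```python
import numpy as np

def belief_update(belief, action, observation, T, O):
    """Bayes filter for a discrete POMDP.

    belief : shape (S,)      current distribution over hidden states
    T      : shape (A, S, S) T[a, s, s2] = P(s2 | s, a)
    O      : shape (A, S, Z) O[a, s2, z] = P(z | s2, a)
    Returns the posterior belief after taking `action` and seeing `observation`.
    """
    predicted = belief @ T[action]                 # P(s2 | b, a) = sum_s b(s) T(s2 | s, a)
    unnormalized = predicted * O[action, :, observation]
    return unnormalized / unnormalized.sum()       # normalize; assumes P(o | b, a) > 0
```

When the transition model T or observation model O is itself unknown, the agent must additionally maintain a belief over those model parameters, which is exactly the learning setting the abstract refers to.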
In this talk, I will introduce a formal way of modeling decision making under partial observability, as well as a more recent extension to the learning setting. I will explain how the learning problem can be tackled using a method called 'POMCP', how this can be made more efficient via a number of novel techniques, and how we can further increase its effectiveness by exploiting structure in the environment. Time permitting, I will also discuss extensions of this methodology that explicitly deal with coordination with other agents, and anticipation of other actors (such as humans) in the environment.
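POMCP (Partially Observable Monte Carlo Planning, Silver and Veness) plans by Monte Carlo tree search over histories, representing the belief at each history node with a set of state particles and requiring only a black-box simulator of the environment. The following is a rough, self-contained sketch of that idea under simplifying assumptions (no domain-specific rollout policy, fixed depth cut-off); all names and parameters are placeholders and it is not the speaker's implementation.

```python
import math, random

class Node:
    """One history node (a sequence of actions and observations) in the search tree."""
    def __init__(self):
        self.N = 0            # visit count
        self.V = 0.0          # mean return estimate
        self.children = {}    # action -> Node (or observation -> Node, one level down)
        self.particles = []   # sampled states consistent with this history

def pomcp_plan(root_particles, simulator, actions, n_sims=1000,
               gamma=0.95, c=1.0, max_depth=20):
    """POMCP-style planning: UCB tree search over histories with a particle belief.

    simulator(state, action) must return (next_state, observation, reward).
    """
    root = Node()
    root.particles = list(root_particles)

    def rollout(state, depth):
        # Default policy: act uniformly at random until the depth limit.
        if depth >= max_depth:
            return 0.0
        a = random.choice(actions)
        s2, _, r = simulator(state, a)
        return r + gamma * rollout(s2, depth + 1)

    def simulate(state, node, depth):
        if depth >= max_depth:
            return 0.0
        if not node.children:                       # expand: one child per action
            for a in actions:
                node.children[a] = Node()
            return rollout(state, depth)
        def ucb(a):                                 # UCB1 action selection
            child = node.children[a]
            if child.N == 0:
                return float("inf")
            return child.V + c * math.sqrt(math.log(node.N + 1) / child.N)
        a = max(actions, key=ucb)
        s2, o, r = simulator(state, a)
        a_node = node.children[a]
        o_node = a_node.children.setdefault(o, Node())
        o_node.particles.append(s2)                 # grow the particle belief at this history
        total = r + gamma * simulate(s2, o_node, depth + 1)
        node.N += 1
        a_node.N += 1
        a_node.V += (total - a_node.V) / a_node.N   # incremental mean of returns
        return total

    for _ in range(n_sims):
        simulate(random.choice(root.particles), root, 0)
    return max(actions, key=lambda a: root.children[a].V)
```

After executing the chosen action and receiving the real observation, the particles stored at the corresponding child node serve as the (approximate) updated belief for the next planning step; efficiency improvements and exploitation of environment structure, as mentioned in the abstract, build on this basic loop.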
Frans Oliehoek is an Associate Professor in the Interactive Intelligence group at TU Delft.