Hierarchical and Continual RL with Doina Precup - #567

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) - A podcast by Sam Charrington - Mondays

Today we’re joined by Doina Precup, a research team lead at DeepMind Montreal and a professor at McGill University. In our conversation with Doina, we discuss her recent research interests, including her work in hierarchical reinforcement learning, where the goal is for agents to learn abstract representations, especially over time. We also explore her work on reward specification for RL agents, where she hypothesizes that a reward signal in a complex environment could lead an agent to develop attributes of intuitive intelligence. We also dig into quite a few of her papers, including "On the Expressivity of Markov Reward," which won a NeurIPS 2021 Outstanding Paper Award. Finally, we discuss the analogy between hierarchical RL and CNNs, her work in continual RL, and her thoughts on how RL has evolved in recent years and the biggest challenges facing the field going forward. The complete show notes for this episode can be found at twimlai.com/go/567.
