SEMINAR

Provable Offline Reinforcement Learning: Neural Function Approximation, Randomization, and Sample Complexity

Speaker

Thanh Nguyen-Tang

Affiliation
Johns Hopkins University
Time
Fri, Jan 13 2023 - 10:00 am (GMT + 7)
About Speaker

Thanh Nguyen-Tang is a postdoctoral research fellow in the Department of Computer Science at Johns Hopkins University. His research focuses on the algorithmic and theoretical foundations of modern machine learning, with the aim of building data-efficient, deployment-efficient, and robust AI systems. He has published in top-tier machine learning conferences including NeurIPS, ICLR, AISTATS, and AAAI. Thanh received his Ph.D. in Computer Science from the Applied AI Institute at Deakin University, Australia.

Abstract

In this talk, Thanh will share some of his recent results on offline reinforcement learning (RL), an RL paradigm for domains where exploration is prohibitively expensive or even infeasible, but a fixed dataset of previous experiences is available a priori. Specifically, he will discuss how deep neural networks (trained by (stochastic) gradient descent) and randomization lead to a computationally efficient algorithm with strong theoretical guarantees for generalization across large state spaces under mild assumptions on distributional shift, while also achieving favorable empirical performance. He will conclude with a discussion of future directions for making RL more data-efficient, deployment-efficient, and robust.
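
To give a flavor of the randomization idea mentioned in the abstract, the sketch below is a generic illustration (not the algorithm presented in the talk): an ensemble of small Q-networks is trained on a fixed offline dataset with noise-perturbed regression targets, and actions are chosen against the pessimistic (minimum) ensemble estimate. All names, hyperparameters, and the random dataset are illustrative assumptions.

```python
# Minimal, generic sketch of offline RL with neural function approximation and
# randomization-based pessimism. Illustrative only; hyperparameters are arbitrary.
import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS, ENSEMBLE, GAMMA, SIGMA = 4, 3, 5, 0.99, 0.1

def make_qnet():
    # Small MLP Q-network mapping a state to one value per action.
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, NUM_ACTIONS))

# Fixed offline dataset of transitions (s, a, r, s'); random here for illustration.
N = 512
states = torch.randn(N, STATE_DIM)
actions = torch.randint(0, NUM_ACTIONS, (N,))
rewards = torch.randn(N)
next_states = torch.randn(N, STATE_DIM)

qnets = [make_qnet() for _ in range(ENSEMBLE)]
optims = [torch.optim.Adam(q.parameters(), lr=1e-3) for q in qnets]

for it in range(100):                      # fitted-Q-iteration-style updates
    for q, opt in zip(qnets, optims):
        with torch.no_grad():
            # Bootstrapped target with Gaussian noise injected (randomization).
            target = rewards + GAMMA * q(next_states).max(dim=1).values
            target = target + SIGMA * torch.randn_like(target)
        pred = q(states).gather(1, actions.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()

def pessimistic_action(state):
    # Act on the minimum over the randomized ensemble: a crude lower-confidence
    # estimate that discourages actions poorly covered by the offline data.
    with torch.no_grad():
        q_vals = torch.stack([q(state) for q in qnets])   # (ENSEMBLE, NUM_ACTIONS)
        return q_vals.min(dim=0).values.argmax().item()

print(pessimistic_action(torch.randn(STATE_DIM)))
```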

Related seminars

Fernando De la Torre

Carnegie Mellon University (CMU)

Human Sensing for AR/VR
Wed, Apr 24 2024 - 07:00 am (GMT + 7)

Anh Nguyen

Microsoft GenAI

The Revolution of Small Language Models
Fri, Mar 8 2024 - 02:30 pm (GMT + 7)

Thang D. Bui

Australian National University (ANU)

Recent Progress on Grokking and Probabilistic Federated Learning
Fri, Jan 26 2024 - 10:00 am (GMT + 7)

Tim Baldwin

MBZUAI, The University of Melbourne

LLMs FTW
Tue, Jan 9 2024 - 10:30 am (GMT + 7)