Friday, Sep 09 2022 - 02:00 pm (GMT+7)

AI Synthesis for Metaverse Capabilities & Nextgen AI VFX

About the speaker

Hao Li is CEO and Co-Founder of Pinscreen, a startup that builds cutting-edge AI-driven virtual avatar technologies, as well as Associate Professor at the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). He was previously a Distinguished Fellow of the Computer Vision Group at UC Berkeley and Associate Professor of Computer Science at the University of Southern California, where he also directed the Vision and Graphics Lab at the USC Institute for Creative Technologies. Hao's work in computer vision and graphics focuses on 3D digitization and neural synthesis for immersive technologies and content creation. His research involves the development of novel deep learning, data-driven, and geometry processing algorithms. He is known for his seminal work in avatar creation, facial animation, hair digitization, and dynamic shape processing, as well as his recent efforts in AI media synthesis and deepfake detection. He was also a visiting professor at Weta Digital, a research lead at Industrial Light & Magic / Lucasfilm, and a postdoctoral fellow at Columbia and Princeton Universities. He was named one of MIT Technology Review's top 35 innovators under 35 in 2013 and has received the Google Faculty Award, the Okawa Foundation Research Grant, and the Andrew and Erna Viterbi Early Career Chair. He won the Office of Naval Research (ONR) Young Investigator Award in 2018 and was named to the DARPA ISAT Study Group in 2019. In 2020, he won the ACM SIGGRAPH Real-Time Live! "Best in Show" award. Hao was a speaker at the World Economic Forum in Davos in 2020 and exhibited at SXSW in 2022. His startup, Pinscreen, received an Epic MegaGrant in 2021, and in 2022 Hao was featured in the first season of Amazon's documentary re:MARS Luminaries. Hao obtained his PhD at ETH Zurich and his MSc at the University of Karlsruhe (TH).

Abstract

As the world gets ready for the metaverse, the need for 3D content is growing rapidly, AR/VR is becoming mainstream, and the next era of the web will be spatial. A digital and immersive future is unthinkable without telepresence, lifelike digital humans, and photorealistic virtual worlds. Existing computer graphics pipelines and technologies rely on production studios and a content creation process that is time-consuming and expensive. My research develops novel 3D deep learning-based techniques for generating photorealistic digital humans, objects, and scenes, and democratizes the process by making these capabilities automatic and accessible to anyone. In this talk, I will present a state-of-the-art technology, developed at Pinscreen, for digitizing an entire virtual 3D avatar from a single photo, and give a live demo. I will also showcase a high-end neural rendering technology used in next-generation virtual assistant solutions and real-time virtual production pipelines, as well as a real-time teleportation system that digitizes entire bodies with 3D deep learning using only a single webcam as input. Furthermore, I will present our work with UC Berkeley on real-time AI synthesis of entire scenes using NeRF representations and PlenOctrees. Finally, I will showcase our latest work in AI-VFX, where we developed a neural rendering pipeline for facial reenactment and visual dubbing; in particular, we were the first to complete an entire feature film that is lip-synced from German/Polish to English. My goal is to enable new capabilities and applications at the intersection of AI, vision, and graphics, and to impact the future of communication, human-machine interaction, and content creation. At the same time, we must prioritize the safety and wellbeing of everyone while architecting this future.

Upcoming Speakers

Khanh Nguyen

Princeton University

Enriching Communication between Humans and AI Agents

Friday, Oct 07 2022 - 10:00 am (GMT+7)