Block-Sparse Kernels for Deep Neural Networks with Durk Kingma - TWiML Talk #80

The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) - A podcast by Sam Charrington
The show is part of a series that I’m really excited about, in part because I’ve been working to bring it to you for quite a while now. The focus of the series is a sampling of the interesting work being done over at OpenAI, the independent AI research lab founded by Elon Musk, Sam Altman, and others. This episode features Durk Kingma, a Research Scientist at OpenAI. Although Durk is probably best known for his pioneering work on variational autoencoders, he joined me this time to talk through his latest project on block-sparse kernels, which OpenAI just published this week. Block sparsity is a property of certain neural network representations, and OpenAI’s work on developing block-sparse kernels makes it more computationally efficient to take advantage of them. In addition to covering block-sparse kernels themselves and the background required to understand them, we also discuss why they’re important and walk through some examples of how they can be used. I’m happy to present another fine Nerd Alert show to close out this OpenAI series, and I know you’ll enjoy it! To find the notes for this show, visit twimlai.com/talk/80. For more info on this series, visit twimlai.com/openai.
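To make the idea of block sparsity a bit more concrete before you listen: a minimal NumPy sketch of what a block-sparse weight matrix looks like, and why knowing the block layout saves work. This is an illustration only, not OpenAI's actual GPU kernels; the block size, layout, and variable names here are all assumptions chosen for the example.

```python
import numpy as np

# A "block-sparse" matrix is zero everywhere except in a few dense blocks.
# A kernel that knows the block layout can skip the all-zero blocks entirely,
# instead of multiplying through them as a dense kernel would.

block = 4        # block size (chosen for illustration)
n_blocks = 3     # matrix is (n_blocks*block) x (n_blocks*block)
rng = np.random.default_rng(0)

# Block-level sparsity pattern: 1 = dense block, 0 = all-zero block.
layout = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [1, 0, 1]])

# Expand the layout to an element-wise mask and build the sparse weights.
mask = np.kron(layout, np.ones((block, block)))
W = rng.standard_normal((n_blocks * block, n_blocks * block)) * mask

x = rng.standard_normal(n_blocks * block)

# Dense reference: multiply through the whole matrix, zeros included.
y_dense = W @ x

# Block-sparse style: only visit the nonzero blocks named in `layout`.
y_sparse = np.zeros_like(y_dense)
for i, j in zip(*np.nonzero(layout)):
    rows = slice(i * block, (i + 1) * block)
    cols = slice(j * block, (j + 1) * block)
    y_sparse[rows] += W[rows, cols] @ x[cols]

# Both paths agree, but the block-sparse path touched only 4 of 9 blocks.
assert np.allclose(y_dense, y_sparse)
```

The loop visits 4 of the 9 blocks here; at the sparsity levels discussed in the episode, the skipped work is where the speedup comes from.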
