Rishabh Singh
October 26, 2018

Abstract: The key to attaining general artificial intelligence is to develop architectures that are capable of learning complex algorithmic behaviors modeled as programs. The ability to learn programs allows these architectures to compose high-level abstractions, which in turn yields several benefits: i) enabling neural architectures to perform more complex tasks, ii) learning interpretable representations (programs that can be analyzed, debugged, or modified), and iii) generalizing better to new inputs (as algorithms do). In this talk, I will present some of our recent work on developing neural architectures for learning programs from examples, and briefly discuss other applications, such as program repair and fuzzing, that can benefit from such neural program representations.

Bio: Rishabh Singh is a research scientist at Google Brain working on neural program synthesis. His research interests span the areas of programming languages and deep learning. Previously, he spent four wonderful years as a Researcher at Microsoft Research. He obtained his PhD in Computer Science from MIT in 2014, where he was a Microsoft Research PhD Fellow and was awarded MIT's George M. Sprowls Award for Best PhD Dissertation in Computer Science. He obtained his BTech in Computer Science from IIT Kharagpur in 2008, where he was awarded the Institute Silver Medal and the Bigyan Sinha Memorial Award.