Adaptive, Parallel, and Asynchronous Stochastic Optimization Algorithms
Speaker: John Duchi, University of California, Berkeley
When: Wed, Nov 13, 2013, 2:00 PM - 3:00 PM
Where: EEB 248

Abstract: In this talk, I will discuss some recent insights into stochastic optimization algorithms, focusing on new adaptive schemes that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based optimization. These ideas allow us to develop learning algorithms that are (in a sense) optimal for the data they actually receive. As a particular example of these schemes, we look at problems where the *data* is sparse, which is in a sense dual to the current understanding of high-dimensional statistical learning and optimization. We also show how these ideas can be leveraged in the design of parallel and asynchronous algorithms, providing experimental evidence to complement our theoretical results on several different learning and optimization tasks.

Biography: I am currently a PhD candidate in computer science at Berkeley, where I started in the fall of 2008. I work in the Statistical Artificial Intelligence Lab (SAIL) under the joint supervision of Mike Jordan and Martin Wainwright. I obtained my master's degree (MA) in statistics in fall 2012. I was initially supported by an NDSEG fellowship, and until recently was supported by Facebook, which generously awarded me a Facebook Fellowship. Before this, I was an undergraduate and a master's student at Stanford University, working with Daphne Koller in her research group, DAGS. I also spend some time at Google Research (once upon a time I was also a software engineer there), where I had (and continue to have) the great fortune to work with Yoram Singer.

Host: Urbashi Mitra, [email protected], EEB 536, x04667
Posted on: Tue, 12 Nov 2013 18:32:56 +0000
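For readers unfamiliar with the adaptive schemes the abstract alludes to, the following is a minimal sketch in the spirit of AdaGrad-style diagonal scaling (per-coordinate step sizes built from the squared gradients observed so far), which is closely associated with the speaker's line of work. The function name `adagrad`, the toy objective, and all parameters here are illustrative assumptions, not the speaker's actual implementation.

```python
import numpy as np

def adagrad(grad_fn, x0, lr=0.1, eps=1e-8, num_steps=500):
    """AdaGrad-style diagonal adaptive gradient method (illustrative sketch)."""
    x = np.array(x0, dtype=float)
    g_sq = np.zeros_like(x)                  # running sum of squared gradients
    for _ in range(num_steps):
        g = grad_fn(x)                       # (stochastic) gradient at x
        g_sq += g * g                        # accumulate per-coordinate geometry
        x -= lr * g / (np.sqrt(g_sq) + eps)  # rarely-seen coordinates keep larger steps
    return x

# Toy usage: minimize f(x) = 0.5 * ||x - 1||^2, whose gradient is (x - 1).
x_hat = adagrad(lambda x: x - 1.0, np.zeros(5))
```

Because the effective step size for each coordinate shrinks like one over the square root of its accumulated squared gradients, coordinates that appear rarely retain large steps, which is the intuition behind the sparse-data setting the abstract highlights.

The parallel and asynchronous theme can likewise be illustrated by lock-free updates to a shared parameter vector, in the style of Hogwild!-type methods (a related but distinct line of work). This toy sketch only interleaves updates under Python's global interpreter lock rather than running truly in parallel, and every name and constant in it is assumed for illustration.

```python
import threading
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))           # toy least-squares data
b = A @ rng.standard_normal(10)
x = np.zeros(10)                             # shared parameters, updated without a lock

def worker(seed, steps=1000, lr=0.01):
    local_rng = np.random.default_rng(seed)  # per-thread RNG to avoid shared state
    for _ in range(steps):
        i = local_rng.integers(len(A))       # sample one data point
        g = (A[i] @ x - b[i]) * A[i]         # stochastic gradient of 0.5*(a_i.x - b_i)^2
        x[:] = x - lr * g                    # asynchronous in-place write

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```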
