We are delighted to have with us four distinguished plenary speakers:
Ryan Adams, Associate Professor, School of Engineering and Applied Sciences, Harvard University

Designing New Materials with Machine Learning

Abstract. Experimental exploration of new materials is extremely expensive and time consuming, and even computational simulation can take days or weeks. In this talk I will describe my group's ongoing collaboration with Alan Aspuru-Guzik in the Chemistry Department at Harvard to use machine learning techniques to accelerate the design of novel organic materials. This work uses ML at a variety of levels, from predicting quantum properties without resorting to, e.g., density functional theory computation, to using Bayesian optimization and experimental design techniques to choose new materials to explore. I will describe some of these techniques and show some recent results from the project.

Biography. Ryan Adams is a Research Scientist at Google Brain and the leader of the Harvard Intelligent Probabilistic Systems group at Harvard University. He received his Ph.D. in Physics from Cambridge as a Gates Scholar. He was a CIFAR Junior Research Fellow at the University of Toronto before joining the faculty at Harvard. He has won paper awards at ICML, AISTATS, and UAI, and his Ph.D. thesis received an Honorable Mention for the Savage Award for Theory and Methods from the International Society for Bayesian Analysis. He has also received the DARPA Young Faculty Award and a Sloan Fellowship. Ryan was a co-founder of Whetlab, a machine learning startup acquired by Twitter in 2015, and co-hosts the popular Talking Machines podcast.
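To give a flavor of the Bayesian optimization idea mentioned in the abstract, here is a minimal sketch: a Gaussian-process surrogate is fit to a few evaluated candidates, and the next candidate is chosen by maximizing expected improvement. The one-dimensional "property" function, kernel, and candidate pool are illustrative assumptions standing in for an expensive simulation, not the group's actual pipeline.

```python
import numpy as np
from math import erf, pi, sqrt

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    """EI acquisition: expected gain over the best value seen so far."""
    s = np.sqrt(var)
    z = (mu - best) / s
    Phi = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))   # normal CDF
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)          # normal PDF
    return (mu - best) * Phi + s * phi

# Toy "material property" to maximize, a stand-in for a costly simulation.
f = lambda x: -(x - 0.7) ** 2

X = np.array([0.1, 0.5, 0.9])       # candidates evaluated so far
y = f(X)
Xs = np.linspace(0.0, 1.0, 101)     # pool of unexplored candidates
mu, var = gp_posterior(X, y, Xs)
ei = expected_improvement(mu, var, y.max())
next_x = Xs[np.argmax(ei)]          # candidate to evaluate next
```

In a materials setting, each "evaluation" of f would be a simulation or a lab experiment, which is why spending compute on choosing the next point well pays off.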
David Aldous, Professor, Department of Statistics, University of California, Berkeley

Probability and the Real World

Abstract. Textbooks on Probability focus on going from a mathematical model to conclusions within the model, but do not deal with the realism of models or the real-world significance of conclusions. This talk gives musings and rants on the latter issues, both broadly (Life, the Universe and Everything) and in the context of networks.

Biography. David Aldous has been a Professor in the Statistics Department at U.C. Berkeley since 1979. He received his Ph.D. from Cambridge University in 1977. He is the author of "Probability Approximations via the Poisson Clumping Heuristic" and (with Jim Fill) of a notorious unfinished online work, "Reversible Markov Chains and Random Walks on Graphs". His research in mathematical probability has covered weak convergence, exchangeability, Markov chain mixing times, continuum random trees, stochastic coalescence, and spatial random networks. A central theme has been the study of large finite random structures, obtaining asymptotic behavior as the size tends to infinity via consideration of some suitable infinite random structure. He has recently become interested in articulating critically what mathematical probability says about the real world. He is a Fellow of the Royal Society and a foreign associate of the National Academy of Sciences.
Maryam Fazel, Associate Professor of Electrical Engineering, University of Washington

Online optimization, smoothing, and worst-case competitive ratio

Abstract. In Online Optimization, the data in an optimization problem is revealed over time, and at each step a decision variable needs to be set without knowing the future data. This setup covers online resource allocation, from classical inventory problems to the `Adwords' problem popular in online advertising. We discuss several algorithms for a class of online convex optimization problems. While connections to the regret framework (in online learning) are mentioned, the focus of the talk is on the algorithms' competitive ratio, i.e., the ratio of the objective achieved by the algorithm to that of the optimal offline sequence of decisions. We discuss bounds on this ratio for a primal-dual greedy algorithm, show how smoothing the objective can improve this bound, and, for separable functions, show how to seek the optimal smoothing by solving a convex optimization problem. This approach allows us to design effective smoothing, customized for a given cost function.

Biography. Maryam Fazel is an Associate Professor of Electrical Engineering at the University of Washington, with adjunct appointments in the departments of Computer Science and Engineering, Mathematics, and Statistics. Maryam received her MS and PhD in EE from Stanford University and her BS from Sharif University of Technology in Iran, and was a postdoctoral scholar at Caltech prior to joining UW. Her research interests are in mathematical optimization, including convex optimization theory and algorithms, low-rank matrix methods, and applications in machine learning, signal processing, and control. She is a recipient of the NSF CAREER Award, the UWEE Outstanding Teaching Award, and a UAI conference Best Student Paper Award (with her student), and she coauthored a paper on low-rank matrix recovery selected as a Fast-Breaking Paper by Science Watch (2011).
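As a concrete illustration of the greedy-with-smoothing idea in the abstract, the toy sketch below runs an Adwords-style allocation where each arriving request is assigned to the bidder maximizing its bid discounted by a smoothing of the fraction of budget already spent. The tradeoff function psi(z) = 1 - e^{z-1} is the classic choice from the Adwords literature (used here as an assumed example); the talk's point, finding the optimal smoothing via a convex program, is not implemented here.

```python
import math

def smoothed_greedy_assign(bids, budgets, spent):
    """Assign one arriving request to the advertiser maximizing
    bid * psi(fraction of budget spent). psi discounts advertisers
    whose budgets are nearly exhausted, hedging against future demand."""
    psi = lambda z: 1.0 - math.exp(z - 1.0)
    best, best_score = None, 0.0
    for i, bid in bids.items():
        if bid > 0 and spent[i] + bid <= budgets[i]:
            score = bid * psi(spent[i] / budgets[i])
            if score > best_score:
                best, best_score = i, score
    return best

# A small online stream: each dict maps advertiser -> bid for that request.
budgets = {"A": 2.0, "B": 2.0}
spent = {"A": 0.0, "B": 0.0}
stream = [{"A": 1.0, "B": 0.9}, {"A": 1.0, "B": 0.9}, {"B": 1.0}, {"B": 1.0}]
revenue = 0.0
for bids in stream:
    winner = smoothed_greedy_assign(bids, budgets, spent)
    if winner is not None:
        spent[winner] += bids[winner]
        revenue += bids[winner]
```

The competitive ratio compares the revenue this online rule earns against the best offline assignment made with full knowledge of the stream; smoothing is what lifts the worst-case ratio above that of plain greedy.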
Mor Harchol-Balter, Professor of Computer Science, Carnegie Mellon University

Reducing Latency via Redundant Requests

Abstract. Recent computer systems research has proposed using redundant requests to reduce latency. The idea is to replicate a request so that it joins the queue at multiple servers, where the request is considered complete as soon as any one copy of the request completes. Redundancy is beneficial because it allows us to overcome server-side variability -- the fact that the server we choose might be temporarily slow, due to factors like background load, network interrupts, garbage collection, and so on. When server-side variability dominates runtime, replicating requests can greatly reduce their response times. In the past few years, queueing theorists have begun to study redundancy, first via approximations, and, more recently, via exact product-form analysis. Unfortunately, for analytical tractability, all the theoretical analysis has assumed models where a job's replicas each have independent service requirements, unrelated to the job's inherent size. These unrealistic models have resulted in analysis which differs greatly from computer systems implementation results. In this talk, we introduce a much more realistic model of redundancy. Our model allows us to decouple the inherent job size (X) from the server-side slowdown (S), where we track both S and X for each job. Analysis within the S&X model is, of course, much more difficult. Nevertheless, we derive a replication policy which is both analytically tractable within the S&X model and has provably excellent performance. Joint work with Kristy Gardner and Alan Scheller-Wolf.

Biography. Mor Harchol-Balter is a Professor of Computer Science at Carnegie Mellon. She received her Ph.D. in Computer Science from U.C. Berkeley in 1996 and joined CMU in 1999. From 2008 to 2011, she served as the Associate Department Head for Computer Science. Mor is a recipient of the McCandless Junior Chair, the NSF CAREER award, and several teaching awards, including the Herbert A. Simon Award. Mor's work focuses on designing new resource allocation policies, including load balancing policies, power management policies, and scheduling policies, for distributed systems. She is known both for her new techniques in queueing-theoretic analysis and Markov chains and for her work in computer systems implementation. Mor is heavily involved in the SIGMETRICS / PERFORMANCE research community, where she has received multiple best paper awards, served as Technical Program Chair in 2007 and as General Chair in 2013, and was the Keynote Speaker in 2016. She is also the author of a popular textbook, Performance Analysis and Design of Computer Systems, published by Cambridge University Press. Mor is best known for her enthusiastic talks and her many Ph.D. students, most of whom are professors at top academic institutions.
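The S&X decoupling described in the abstract can be illustrated with a small Monte Carlo sketch: each copy's runtime is the job's inherent size X multiplied by an independent server-side slowdown S, and a replicated job finishes when its fastest copy does. The distributions below are illustrative assumptions (not those of the talk's analysis), and the sketch ignores queueing, i.e., the extra load that the copies themselves impose.

```python
import random

def response_time(x, d, rng):
    """Replicate a job of inherent size x to d idle servers. Each copy
    runs for x * S_i, where S_i >= 1 is an independent server-side
    slowdown; the job completes when its first copy finishes."""
    return min(x * (1.0 + rng.expovariate(1.0)) for _ in range(d))

rng = random.Random(0)
sizes = [rng.expovariate(1.0) for _ in range(20000)]   # inherent sizes X

mean_no_red = sum(response_time(x, 1, rng) for x in sizes) / len(sizes)
mean_red2 = sum(response_time(x, 2, rng) for x in sizes) / len(sizes)

# With d = 2 the effective slowdown is 1 plus the minimum of two
# independent exponentials, so the mean response time drops from
# roughly 2*E[X] to roughly 1.5*E[X] in this toy setup.
```

Note how replication attacks only the S factor: the min is taken over independent slowdowns, while the inherent size X is common to all copies, which is exactly the coupling the earlier independent-replica models ignored.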