NOTICE: The 2017 ICML conference may close registration ahead of the declared deadline in case it reaches the capacity limits of the convention center. There may be no onsite registration. Please register early.

Peter Donnelly

Donnelly is Director of the Wellcome Trust Centre for Human Genetics and Professor of Statistical Science at the University of Oxford. He grew up in Australia and, on graduating from the University of Queensland, he studied for a doctorate in Oxford as a Rhodes Scholar. He held professorships at the Universities of London and Chicago before returning to Oxford in 1996. Peter's early research work concerned the development of stochastic models in population genetics, including the coalescent, followed by the development of statistical methods for genetic and genomic data. His group developed several widely used statistical algorithms, including STRUCTURE and PHASE, and, in collaboration with colleagues in Oxford, IMPUTE. His current research focuses on understanding the genetic basis of human diseases, human demographic history, and the mechanisms involved in meiosis and recombination.

Peter played a major role in the HapMap project, and more recently, he chaired the Wellcome Trust Case Control Consortium (WTCCC) and its successor, WTCCC2, a large international collaboration studying the genetic basis of more than 20 common human diseases and conditions in over 60,000 people. He also led WGS500, an Oxford collaboration with Illumina to sequence 500 individuals with a range of clinical conditions to assess the short-term potential for whole genome sequencing in clinical medicine; a precursor to the NHS 100,000 Genomes Project. Peter is a Fellow of the Royal Society and of the Academy of Medical Sciences, and is an Honorary Fellow of the Institute of Actuaries. He has received numerous awards and honours for his research.

 


Latanya Sweeney

Harvard University

As Professor of Government and Technology in Residence at Harvard University, my mission is to create and use technology to assess and solve societal, political, and governance problems, and to teach others how to do the same. One focus area is the scientific study of technology's impact on humankind, and I am the Editor-in-Chief of Technology Science. Another focus area is data privacy, and I am the Director of the Data Privacy Lab at Harvard. There are other foci too.

I was formerly the Chief Technology Officer, also called the Chief Technologist, at the U.S. Federal Trade Commission (FTC). It was a fantastic experience! I thank Chairwoman Ramirez for appointing me. One of my goals was to make it easier for others to work on innovative solutions at the intersection of technology, policy and business. Often, I thought of my past students, who primarily came from computer science or governance backgrounds, and who were highly motivated to change the world. I would like to see society harness their energy and get others thinking about innovative solutions to pressing problems. During my time there, I launched the summer research fellows program and blogged on Tech@FTC to facilitate explorations and ignite brainstorming on FTC-related topics.


Towards Reinforcement Learning in the Real World

Raia Hadsell (DeepMind)

Raia Hadsell, a senior research scientist at DeepMind, has worked on deep learning and robotics problems for over 10 years. Her early research developed the notion of manifold learning using Siamese networks, which has been used extensively for invariant feature learning. After completing a PhD with Yann LeCun, which featured a self-supervised deep learning vision system for a mobile robot, her research continued at Carnegie Mellon's Robotics Institute and SRI International, and in early 2014 she joined DeepMind in London to study artificial general intelligence. Her current research focuses on the challenge of continual learning for AI agents and robotic systems. While deep RL algorithms are capable of attaining superhuman performance on single tasks, they cannot transfer that performance to additional tasks, especially if experienced sequentially. She has proposed neural approaches such as policy distillation, progressive nets, and elastic weight consolidation to solve the problem of catastrophic forgetting and improve transfer learning.

Abstract: Deep reinforcement learning has rapidly grown as a research field with far-reaching potential for artificial intelligence. A large set of Atari games has been used as the main benchmark domain for many fundamental developments. As the field matures, it is important to develop more sophisticated learning systems with the aim of solving more complex tasks. I will describe some recent research from DeepMind that allows end-to-end learning in challenging environments with real-world variability and complex task structure.

Causal Learning

Bernhard Schölkopf (Max Planck Institute for Intelligent Systems)

Bernhard Schölkopf's scientific interests are in machine learning and causal inference. He has applied his methods to a number of different application areas, ranging from biomedical problems to computational photography and astronomy. Bernhard did research at AT&T Bell Labs, at GMD FIRST in Berlin, and at Microsoft Research Cambridge, UK, before becoming a Max Planck director in 2001. He is a member of the German Academy of Sciences (Leopoldina), and has received the J.K. Aggarwal Prize of the International Association for Pattern Recognition, the Max Planck Research Award (shared with S. Thrun), the Academy Prize of the Berlin-Brandenburg Academy of Sciences and Humanities, and the Royal Society Milner Award.