DISTINGUISHED LECTURE SERIES

THE FIELDS INSTITUTE FOR RESEARCH IN MATHEMATICAL SCIENCES




April 9 (3:30 p.m.) & April 10 (11:00 a.m.), 2015
Distinguished Lecture Series in Statistical Science

Room 230, Fields Institute


Terry Speed
Walter & Eliza Hall Institute of Medical Research, Melbourne

 


Terry Speed is currently a Senior Principal Research Scientist at the Walter and Eliza Hall Institute of Medical Research. His lab has a particular focus on molecular data collected by cancer researchers, but also works with scientists studying immune and infectious diseases, and with those doing research in basic biomedical science. His research interests are broad and include the statistical and bioinformatic analysis of microarray, DNA sequence and mass spectrometry data from genetics, genomics, proteomics and metabolomics. The lab works with molecular data at several levels, from the lowest level, where the data come directly from the instruments that generate them, up to the tasks of data integration and of relating molecular to clinical data. Speed has served on the editorial boards of many publications, including the Journal of Computational Biology, JASA, Bernoulli and the Australian and New Zealand Journal of Statistics, and has been recognized with the Australian Prime Minister's Prize for Science and the Eureka Prize for Scientific Leadership, among other awards.

General lecture: Epigenetics: A New Frontier

Abstract: Scientists have now mapped the human genome; the next frontier is understanding human epigenomes, the 'instructions' that tell the DNA whether to make skin cells, blood cells or other body parts. Apart from a few exceptions, the DNA sequence of an organism is the same whichever cell is considered. So why are blood, nerve, skin and muscle cells so different, and what mechanism creates this difference? The answer lies in epigenetics. If we compare the genome sequence to text, the epigenome is the punctuation: it shows how the DNA should be read. Advances in DNA sequencing over the last 5-8 years have made it possible to compile large amounts of DNA sequence data. For every single reference human genome there will be literally hundreds of reference epigenomes, and their analysis will occupy biologists, bioinformaticians and biostatisticians for some time to come. In this talk I will introduce the topic and the data, and outline some of the challenges.

Specialized lecture: Normalization of omic data after 2007
(joint with Johann Gagnon-Bartsch and Laurent Jacob)

Abstract: For over a decade now, normalization of transcriptomic, genomic and, more recently, metabolomic and proteomic data has been something you do to "raw" data to remove biases, technical artifacts and other systematic non-biological features. These features could be due to sample preparation and storage, reagents, equipment, people and so on. It was a "one-off" fix for what I am going to call removing unwanted variation. Since around 2007, a more nuanced approach has been available, due to J. T. Leek and J. Storey (SVA) and O. Stegle et al. (PEER). These newer approaches do two things differently. First, they do not assume the sources of unwanted variation are known in advance; the sources are inferred from the data. Second, they deal with the unwanted variation in a model-based way, not "up front": they do it in a problem-specific manner, where different inference problems warrant different model-based solutions. For example, the solution for removing unwanted variation in an estimation problem is not necessarily the same as in a prediction problem. Over the last few years, I have been working with Johann Gagnon-Bartsch and Laurent Jacob on these same problems, making use of positive and negative controls, a strategy which we think has some advantages. In this talk I will review the area and highlight some of the advantages of working with controls. Illustrations will be from microarray, mass spectrometry and RNA-seq data.
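The abstract describes the controls-based strategy only in words. As a rough illustration, the sketch below removes unwanted variation in the spirit of the RUV-2 idea of Gagnon-Bartsch and Speed: estimate unwanted factors from negative-control features (features assumed a priori to be unaffected by the biology of interest), then regress them out of the whole data matrix. The function name ruv2_adjust, the SVD-based factor estimate and all parameter names are illustrative assumptions, not the method to be presented in the lecture.

import numpy as np

def ruv2_adjust(Y, ctl, k):
    """Illustrative sketch of controls-based removal of unwanted variation.

    Y   : (n_samples, n_features) log-scale data matrix
    ctl : boolean mask of length n_features marking negative-control features
    k   : number of unwanted factors to estimate

    Returns Y with the estimated unwanted variation regressed out.
    This is an assumption-laden toy version, not Speed's implementation.
    """
    # Step 1: estimate unwanted factors from the controls alone. Any
    # variation among negative controls is, by assumption, unwanted, so
    # the top-k left singular vectors of the centered control block serve
    # as an estimate of the unwanted factors W.
    Yc = Y[:, ctl] - Y[:, ctl].mean(axis=0)
    U, s, _ = np.linalg.svd(Yc, full_matrices=False)
    W = U[:, :k] * s[:k]                      # (n_samples, k)

    # Step 2: regress every feature on W and subtract the fitted values.
    alpha = np.linalg.lstsq(W, Y, rcond=None)[0]   # (k, n_features) loadings
    return Y - W @ alpha

# Toy usage: 20 samples, 1000 features, first 100 treated as negative controls.
rng = np.random.default_rng(0)
Y = rng.normal(size=(20, 1000))
ctl = np.zeros(1000, dtype=bool)
ctl[:100] = True
Y_adj = ruv2_adjust(Y, ctl, k=2)

The design point this sketch makes is the one the abstract alludes to: because the unwanted factors are estimated from the negative controls alone, the biological signal of interest is less likely to be removed along with the technical artifacts.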

 




April 23 (3:30 p.m.) & April 24 (11:00 a.m.), 2015
Distinguished Lecture Series in Statistical Science

Room 230, Fields Institute


Bin Yu
University of California, Berkeley

Bin Yu is Chancellor's Professor in the Departments of Statistics and of Electrical Engineering & Computer Science at the University of California, Berkeley. Her current research interests focus on statistics and machine learning theory, methodologies and algorithms for solving high-dimensional data problems. Her group is engaged in interdisciplinary research with scientists from genomics, neuroscience and remote sensing.

She is a Member of the U.S. National Academy of Sciences and a Fellow of the American Academy of Arts and Sciences. She was a Guggenheim Fellow in 2006, an Invited Speaker at ICIAM in 2011 and the Tukey Memorial Lecturer of the Bernoulli Society in 2012. She was President of the Institute of Mathematical Statistics (IMS) in 2013-2014.

 

