

PKU Center for Statistical Science 2018 Summer Course on Statistical Machine Learning, Taught by Jiashun Jin and Other Experts

Published 2018-06-18 12:57 | Category: Blog News

Registration for the 2018 Summer Course, from the PKU Center for Statistical Science


Whether in traditional statistical analysis or in today's booming machine learning and deep learning, countless data analysts, data engineers, and data scientists are all, in essence, pursuing models and methods that make full use of data to solve real-world problems.

The PKU Center for Statistical Science will hold a short summer course from June 29 to July 1. Experts in statistics and machine learning from renowned universities at home and abroad (Jiashun Jin, Cheng Yong Tang, Yuan Yao, and Zheng Tracy Ke) will teach short courses introducing some of the latest advances in high-dimensional statistics, machine learning, and deep learning. Students, faculty, and working professionals from outside the university are all welcome to attend!

The course charges no registration fee; participants are responsible for their own meals and lodging.


Course registration link (open in WeChat): https://mp.weixin.qq.com/s/-Oaeruyuz4QwZKVbPs1KDg


About the Lecturers



Jiashun Jin

Professor

Carnegie Mellon University

http://www.stat.cmu.edu/~jiashun/

Jiashun Jin is Professor in Statistics and Affiliated Professor in Machine Learning at Carnegie Mellon University. His expertise is in statistical inference for Rare and Weak signals in Big Data, concerning the regime where the signals of interest are so rare and weak that many conventional approaches fail, making it desirable to develop new methods and theory appropriate for such a situation. His earlier work was on large-scale multiple testing, focusing on the development of (Tukey's) Higher Criticism and practical False Discovery Rate (FDR) controlling methods. His more recent interests are in social network analysis and text mining. Jin received the NSF CAREER Award in 2007 and the IMS Tweedie Award in 2009, and he was elected an IMS Fellow in 2011. He has also delivered the highly selective IMS Medallion Lecture in 2015 and the IMS AoAS (Annals of Applied Statistics) Lecture in 2016, as well as other plenary and keynote talks. Jin has co-authored two Editor's Invited Review papers and two Editor's Invited Discussion papers. He also gained valuable experience in the financial industry by doing research for two years at Two-Sigma Investment, from 2016 to 2017.


Title: Higher Criticism for Large-Scale Inference, Especially for Rare and Weak Effects

In modern high-throughput data analysis, researchers perform a large number of statistical tests, expecting to find perhaps a small fraction of significant effects against a predominantly null background. Higher Criticism (HC) was introduced to determine whether there are any non-zero effects; more recently, it was applied to feature selection in the context of cancer classification and cancer clustering, where it provides a method for selecting useful predictive features from a large body of potentially useful features, among which only a rare few will prove truly useful.

We discuss HC in three settings: global testing, cancer classification, and cancer clustering. HC is a flexible idea that adapts easily to new situations. Although still early in its development, HC is seeing increasing interest from practitioners; we illustrate this with worked examples. HC is computationally efficient, which gives it useful leverage in the increasingly relevant 'Big Data' settings we see today.

We also review the theoretical 'ideology' underlying HC. The Rare/Weak (RW) model is a theoretical framework that simultaneously controls the size and prevalence of useful/significant items among the useless/null bulk. Within the RW model, HC has important advantages over better-known procedures such as False Discovery Rate (FDR) control and Family-Wise Error Rate (FWER) control, in particular certain optimality properties. We discuss the rare/weak phase diagram, a way to visualize clearly the class of RW settings in which the true signals are so rare or so weak that detection and feature selection are simply impossible, and a way to understand the known optimality properties of HC.
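For readers who want to experiment before the course, here is a minimal sketch of the standard HC statistic computed from a vector of p-values (an illustration, not the course's own materials; the cutoff fraction alpha0 = 0.5 and the toy simulation are assumptions):

```python
import numpy as np
from scipy.stats import norm

def higher_criticism(pvalues, alpha0=0.5):
    """HC* statistic: the maximum standardized gap between the empirical
    p-value distribution and Uniform(0, 1), taken over the smallest
    alpha0 fraction of the sorted p-values."""
    p = np.clip(np.sort(np.asarray(pvalues)), 1e-12, 1 - 1e-12)
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    k = max(1, int(alpha0 * n))      # restrict to the smallest alpha0*n p-values
    return hc[:k].max()

# Toy global test: rare (1%) and weak (mean shift 2.5) signals vs. pure noise
rng = np.random.default_rng(0)
z_null = rng.standard_normal(10_000)
z_alt = z_null.copy()
z_alt[:100] += 2.5
print(higher_criticism(2 * norm.sf(np.abs(z_null))))  # modest under the null
print(higher_criticism(2 * norm.sf(np.abs(z_alt))))   # noticeably larger
```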



Cheng Yong Tang

Associate Professor

Temple University

https://sites.temple.edu/yongtang/

Dr. Cheng Yong Tang is Associate Professor in the Department of Statistical Science at Temple University, where he is the Director of the Graduate Programs in Statistics. Dr. Tang received his PhD in Statistics from Iowa State University in 2008. His research interests include longitudinal data analysis, high-dimensional data analysis, nonparametric statistical methods, empirical likelihood, financial data analysis, survey data, and missing data analysis. Dr. Tang has published more than twenty research articles. He is an Elected Member of the International Statistical Institute and a Fellow of the Royal Statistical Society.


Title: Parsimonious Statistical Modeling Approaches for Longitudinal Studies

Longitudinal data broadly refer to data with repeated measurements from the same subject. The key objective in modeling them is to incorporate the within-subject dependence. Perspectives from longitudinal data modeling also shed light on the broader problem of covariance estimation for large and complex data sets.

The lectures will cover new topics in longitudinal data analysis and covariance modeling, focusing on more complex covariance structures and high data dimensionality. We will start with an overview of conventional approaches to longitudinal data modeling. We will then discuss new joint mean-variance-correlation regression approaches for modeling continuous and discrete repeated measurements from longitudinal studies. A new device will be introduced: applying hyperspherical coordinates to obtain an unconstrained, interpretable parametrization of the correlation matrix. Based on this device, we consider regression approaches that model the correlation matrix of the longitudinal measurements by exploiting the unconstrained parametrization. The resulting modeling framework is parsimonious, interpretable, and flexible. Further topics on discrete longitudinal data analysis and on nonparametric and semiparametric extensions will also be introduced.
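As a rough illustration of the hyperspherical-coordinates device (a sketch under simple assumptions, not the lecturer's implementation; the function name and angle ranges are illustrative): angles are mapped through products of sines and cosines to the rows of a Cholesky factor, so that any choice of angles yields a valid correlation matrix.

```python
import numpy as np

def angles_to_correlation(phi):
    """Map a strictly-lower-triangular array of angles phi[i, j] in (0, pi)
    to a correlation matrix via hyperspherical coordinates. Row i of the
    Cholesky factor L is a point on the unit sphere:
        L[i, 0] = cos(phi[i, 0])
        L[i, j] = cos(phi[i, j]) * prod_{k<j} sin(phi[i, k]),  0 < j < i
        L[i, i] = prod_{k<i} sin(phi[i, k])
    so R = L @ L.T automatically has unit diagonal and is positive definite."""
    d = phi.shape[0]
    L = np.zeros((d, d))
    L[0, 0] = 1.0
    for i in range(1, d):
        s = 1.0                      # running product of sines
        for j in range(i):
            L[i, j] = np.cos(phi[i, j]) * s
            s *= np.sin(phi[i, j])
        L[i, i] = s
    return L @ L.T

# Any angles in (0, pi) give a valid correlation matrix: the parametrization
# is unconstrained up to this open box.
rng = np.random.default_rng(1)
phi = np.tril(rng.uniform(0.1, np.pi - 0.1, size=(4, 4)), k=-1)
R = angles_to_correlation(phi)
print(np.diag(R), np.linalg.eigvalsh(R).min() > 0)   # unit diagonal, PD
```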



Yuan Yao

Associate Professor

Hong Kong University of Science and Technology

https://yao-lab.github.io

Yuan Yao received his B.S.E. and M.S.E. degrees in control engineering from Harbin Institute of Technology, China, in 1996 and 1998, respectively, an M.Phil. in mathematics from City University of Hong Kong in 2002, and a Ph.D. in mathematics from the University of California, Berkeley, in 2006. He subsequently held a position at Stanford University, and in 2009 he joined the Department of Probability and Statistics in the School of Mathematical Sciences, Peking University, Beijing, China, as a Fellow of the 100-Talent Program. He is currently an Associate Professor of Mathematics and of Chemical & Biological Engineering, and by courtesy of Computer Science & Engineering, at the Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong SAR, China. His current research interests include machine learning and high-dimensional data analysis, in particular topological and geometric methods, with applications in computational biology, computer vision, and information retrieval.

Title: Differential Inclusion Method in High-Dimensional Statistics, and Deep Learning toward Deeper Understanding

1) Title: Differential Inclusion Method in High-Dimensional Statistics

Boosting, as a gradient descent method, is arguably the 'best off-the-shelf' method in machine learning. Here a novel Boosting-type algorithm is proposed, based on restricted gradient descent whose underlying dynamics are governed by differential inclusions. In particular, we present an iterative regularization path with structural sparsity, in which the parameter is sparse under some linear transform, based on the Linearized Bregman Iteration (sparse mirror descent). Despite its simplicity, it outperforms the popular (generalized) Lasso in both theory and experiments. A theory of path consistency is presented: equipped with proper early stopping, the method can achieve model selection consistency under a family of Irrepresentable Conditions that can be weaker than the necessary and sufficient condition for the generalized Lasso. The utility and benefit of the algorithm are illustrated by applications to sparse variable selection, learning graphical models, partial order ranking, and Alzheimer's disease detection via neuroimaging.
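A minimal sketch of the plain Linearized Bregman Iteration for sparse linear regression may help fix ideas (the structural-sparsity and generalized-Lasso variants discussed in the lecture are not shown; the step-size rule and toy data below are assumptions):

```python
import numpy as np

def linearized_bregman_path(X, y, kappa=10.0, alpha=None, n_iter=500):
    """Linearized Bregman Iteration (LBI) for sparse linear regression.
    Running the iteration traces out a regularization path; early stopping
    plays the role of the Lasso's penalty parameter.

        z    <- z - alpha * gradient of 0.5 * ||y - X beta||^2 / n
        beta <- kappa * soft_threshold(z, 1)
    """
    n, p = X.shape
    if alpha is None:
        # Heuristic step size keeping kappa * alpha * ||X^T X / n||_2 = 1 (< 2)
        alpha = 1.0 / (kappa * np.linalg.norm(X, 2) ** 2 / n)
    z = np.zeros(p)
    beta = np.zeros(p)
    path = []
    for _ in range(n_iter):
        z -= alpha * X.T @ (X @ beta - y) / n
        beta = kappa * np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)
        path.append(beta.copy())
    return np.array(path)            # (n_iter, p): inspect for early stopping

# Toy example: 5 true signals among 200 variables
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 200))
beta0 = np.zeros(200)
beta0[:5] = 3.0
y = X @ beta0 + rng.standard_normal(100)
path = linearized_bregman_path(X, y)
print((np.abs(path[-1]) > 0).sum())  # sparsity of the final iterate
```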

2) Title: Deep Learning toward Deeper Understanding

Deep learning has recently achieved tremendous success in a variety of applications, such as speech recognition, computer vision, natural language processing, and games against human players. However, many puzzles remain in understanding its empirical success. Interesting questions include, but are not limited to: A. what geometric properties and transformation invariants hold for deep network architectures that avoid the curse of dimensionality; B. how deep learning can generalize well without suffering from overfitting, even in over-parameterized models; C. what the landscapes of the empirical risks or objective functions that deep learning efficiently optimizes look like; D. what effective optimization methods exist beyond stochastic gradient descent. This talk presents some state-of-the-art results from these explorations, working toward a deeper understanding of deep learning.



Zheng Tracy Ke

Assistant Professor

University of Chicago

http://www.stat.uchicago.edu/~zke/

Tracy Ke obtained her Ph.D. in Statistics from Princeton University in 2014. She is currently Assistant Professor in Statistics at the University of Chicago. Her earlier work is in high-dimensional variable selection, focusing on the most challenging regime where the signals of interest are both rare and weak, so that many conventional approaches do not work well. She has developed a class of procedures, including Covariance-Assisted Screening and Estimation (CASE) and Covariance-Assisted Ranking (CAR), to address this situation, and has co-authored an Editor's Invited Review paper on the topic. Her most recent work is on social network analysis, where she has developed a procedure called Mixed-SCORE and extended the idea to several seemingly unrelated settings, including topic modeling in text mining, genetic network analysis, and hypergraph analysis, where minimax optimality is often carefully justified.

Title: New Tools for Analyzing Complicated and High-dimensional Data

In the first two classes, we tackle two seemingly unrelated problems: membership estimation in social networks and topic modeling in text mining. PCA is a powerful tool, but in many modern applications it does not work well without careful adaptation. We propose two new PCA approaches, Mixed-SCORE for membership estimation and Topic-SCORE for topic modeling, at the heart of which are a post-PCA normalization and a surprising low-dimensional simplex structure. We explain how the simplex structure motivates Mixed-SCORE and Topic-SCORE, and we support our approaches with several real data examples as well as carefully justified minimax optimality.
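The post-PCA normalization at the heart of the SCORE family can be illustrated in a few lines (a sketch on a simulated two-block network, not the authors' implementation; function names and simulation parameters are illustrative):

```python
import numpy as np

def score_embedding(A, K):
    """Post-PCA (SCORE) normalization: divide the 2nd..K-th leading
    eigenvectors of the adjacency matrix entrywise by the first. Rows of
    the resulting matrix lie near a (K-1)-simplex whose vertices correspond
    to the K communities; mixed-membership nodes fall in the interior
    (the geometry exploited by Mixed-SCORE)."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(vals))[:K]    # K leading eigenvectors by |eigenvalue|
    xi = vecs[:, idx]
    lead = xi[:, 0]
    lead = np.where(np.abs(lead) < 1e-12, 1e-12, lead)  # guard the division
    return xi[:, 1:] / lead[:, None]        # n x (K-1) ratio matrix

# Toy 2-block network: embedded rows cluster near the two simplex vertices
rng = np.random.default_rng(3)
n, K = 200, 2
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], 0.2, 0.05)
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                 # symmetric, no self-loops
print(score_embedding(A, K)[:5])
```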

In the next two classes, we consider the problem of high-dimensional variable selection. In the most challenging regime, where the signals of interest are both rare and weak, the well-known L0- and L1-penalization methods are not optimal if we use the Hamming selection error as the measure of success. We propose a new approach called Covariance-Assisted Screening and Estimation (CASE) and show that it achieves the optimal phase diagram. We also consider an extension of CASE called Covariance-Assisted Ranking (CAR), which is shown to be very helpful for variable ranking in Rare/Weak settings.
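For concreteness, the Hamming selection error used as the measure of success simply counts the coordinates at which the selected sign pattern disagrees with the truth; a minimal sketch (names and toy vectors are illustrative):

```python
import numpy as np

def hamming_selection_error(beta_hat, beta_true):
    """Number of coordinates where the estimated sign pattern disagrees
    with the true one; counts false positives, false negatives, and
    sign flips alike."""
    return int(np.sum(np.sign(beta_hat) != np.sign(beta_true)))

beta_true = np.array([3.0, -2.0, 0.0, 0.0])
beta_hat = np.array([2.5, 0.0, 0.0, 1.0])     # one miss, one false positive
print(hamming_selection_error(beta_hat, beta_true))  # 2
```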




https://blog.sciencenet.cn/blog-752541-1119543.html
