Academic Talk by Prof. Heng Lian of City University of Hong Kong

Published: 2024-03-27

Title: On Linear Convergence of ADMM for Decentralized Quantile Regression

Speaker: Prof. Heng Lian

Time: March 31, 2024, 10:30-12:30

Venue: Room A408, Building 4, School of Mathematics and Statistics

Hosted by: School of Mathematics and Statistics, Fuzhou University; Fujian Provincial Center for Applied Mathematics (Fuzhou University)

Abstract: The alternating direction method of multipliers (ADMM) is a natural method of choice for distributed parameter learning. For smooth and strongly convex consensus optimization problems, it has been shown that ADMM and some of its variants enjoy linear convergence in the distributed setting, much like in the traditional non-distributed setting. The optimization problem associated with parameter estimation in quantile regression is neither smooth nor strongly convex (although it is convex), and thus it can only have sublinear convergence at best. Although this insinuates slow convergence, we show that, if the local sample size is sufficiently large compared to the parameter dimension and network size, distributed estimation in quantile regression actually exhibits linear convergence up to the statistical precision.
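For readers unfamiliar with the setup, decentralized quantile regression is typically posed as a consensus program over the m nodes of a network; a standard formulation (the notation here is illustrative and not necessarily the one used in the talk) is

\min_{\beta_1,\ldots,\beta_m,\,\beta}\ \sum_{i=1}^{m}\sum_{j=1}^{n_i} \rho_\tau\bigl(y_{ij} - x_{ij}^{\top}\beta_i\bigr) \quad \text{subject to } \beta_i = \beta,\ i=1,\ldots,m,

where \rho_\tau(u) = u\,(\tau - \mathbf{1}\{u<0\}) is the check loss at quantile level \tau. Because \rho_\tau is piecewise linear, the objective is convex but neither smooth nor strongly convex, which is precisely the obstacle to linear convergence mentioned in the abstract.

The sketch below shows the standard single-machine ADMM iteration for quantile regression (a least-squares beta-update, a residual update via the proximal operator of the check loss, and a dual update). It is a minimal illustration only; the decentralized algorithm analyzed in the talk couples such updates across the network through the consensus constraints. The function names and the penalty parameter eta are assumptions made for this sketch, not identifiers from the speaker's work.

import numpy as np

def prox_check(v, tau, eta):
    # Elementwise proximal operator of rho_tau / eta: a shifted soft-threshold.
    return np.where(v > tau / eta, v - tau / eta,
                    np.where(v < -(1 - tau) / eta, v + (1 - tau) / eta, 0.0))

def admm_quantile(X, y, tau=0.5, eta=1.0, n_iter=500):
    # ADMM for min_beta sum_j rho_tau(y_j - x_j' beta), using the split r = y - X beta.
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                                  # residual block
    u = np.zeros(n)                               # scaled dual variable
    ls_solver = np.linalg.solve(X.T @ X, X.T)     # cached (X'X)^{-1} X'
    for _ in range(n_iter):
        beta = ls_solver @ (y - r + u)            # beta-update: least squares
        r = prox_check(y - X @ beta + u, tau, eta)  # r-update: prox of check loss
        u = u + y - X @ beta - r                  # dual ascent on r = y - X beta
    return beta

For example, with X of shape (200, 5) and y generated from a linear model, admm_quantile(X, y, tau=0.5) returns an approximate median-regression fit of the coefficients.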

About the speaker: Heng Lian is a Professor in the Department of Mathematics at City University of Hong Kong. He received bachelor's degrees in mathematics and computer science from the University of Science and Technology of China in 2000, and a master's degree in computer science, a master's degree in economics, and a Ph.D. in applied mathematics from Brown University in 2007. He has held positions at Nanyang Technological University (Singapore), the University of New South Wales (Australia), and City University of Hong Kong. He has published more than 30 papers in leading international journals, including Annals of Statistics, Journal of the Royal Statistical Society: Series B, Journal of the American Statistical Association, Journal of Machine Learning Research, and IEEE Transactions on Pattern Analysis and Machine Intelligence. His research interests include high-dimensional data analysis, functional data analysis, and machine learning.

Interested faculty and students are welcome to attend.