Academic Talk by Associate Professor Zhongwei Liao (Beijing Normal University)

Date posted: 2024-11-25

Title: Viscosity solutions approach to finite-horizon continuous-time Markov decision processes

Speaker: Associate Professor Zhongwei Liao

Time: November 28, 2024, 15:00-18:00

Venue: Tencent Meeting 590-936-140; in person: Room 302, School of Mathematics and Statistics

Host: School of Mathematics and Statistics, Fuzhou University

Abstract: This talk concerns optimal control problems for finite-horizon continuous-time Markov decision processes with delay-dependent control policies. We develop compactification methods for decision processes and establish the existence of optimal policies. Subsequently, through the dynamic programming principle for delay-dependent control policies, we derive the differential-difference Hamilton-Jacobi-Bellman equation in the discrete-space setting. Under certain conditions, we give a comparison principle and further prove that the value function is the unique viscosity solution of this equation. Based on this, we show that, among the class of delay-dependent control policies, there is an optimal one which is Markovian.
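
For orientation only, and not as the speaker's exact delay-dependent formulation, the classical finite-horizon Hamilton-Jacobi-Bellman equation for a continuous-time Markov decision process on a countable state space can be sketched as follows; the symbols S (state space), A(x) (admissible actions), q(y|x,a) (transition rates), r(x,a) (running reward), g (terminal reward), and v (value function) are generic notation assumed for this sketch:

\[
\partial_t v(t,x) + \sup_{a \in A(x)} \Big\{ r(x,a) + \sum_{y \in S} q(y \mid x, a)\, v(t,y) \Big\} = 0,
\qquad (t,x) \in [0,T) \times S,
\]
\[
v(T,x) = g(x), \qquad x \in S.
\]

In the delay-dependent setting of the talk, the equation presumably also involves the value function evaluated at shifted time arguments, which is what gives it the differential-difference form mentioned in the abstract; the precise equation will be presented in the talk.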

Speaker biography: Zhongwei Liao graduated from Beijing Normal University and previously worked at Sun Yat-sen University and South China Normal University; he has also held visiting scholar positions at The University of Melbourne (Australia) and Toronto Metropolitan University (Canada). He is currently an associate professor in the Department of Mathematics, Faculty of Arts and Sciences, Beijing Normal University. His research interests include stability of stochastic processes, Lévy processes, Markov decision processes and optimization theory, Stein's method, mathematical finance, economic growth models, and uncertainty measures. He has led research and teaching projects funded by the National Natural Science Foundation of China, the Guangdong Basic and Applied Basic Research Foundation, and the Guangdong Undergraduate Teaching Quality and Teaching Reform Project. His work has appeared in journals including SIAM J. Control Optim., J. Optim. Theory Appl., J. Math. Econom., J. Theoret. Probab., Adv. Nonlinear Stud., Internat. J. Control, J. Appl. Probab., Acta Math. Sin., and Stoch. Anal. Appl.

Interested faculty and students are welcome to attend and join the discussion!