Convergence analysis of inverse iterative algorithms for neural networks with L1/2 penalty
HUANG Bingjia 1, WANG Jian 1,2, WEN Yanqing 1, YANG Xifeng 1, SHAO Hongmei 1, WANG Jing 2
(1. College of Science, China University of Petroleum, Qingdao 266580, China; 2. Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China)
Abstract:
Compared with commonly used penalty terms, neural networks trained with the L1/2 penalty yield sparser solutions and prune redundant neurons more effectively. However, the L1/2 penalty is non-convex, non-smooth, and non-Lipschitz continuous, which inevitably leads to numerical oscillations and complicates the theoretical convergence analysis. A better solution is to approximate the penalty with a smoothing function. For the proposed algorithm, the error function is proved to decrease monotonically during training, and both weak and strong convergence results are established. The presented algorithm is more stable and produces sparser networks than existing inverse iterative neural network algorithms, and it applies to more general cases.
Key words: neural networks; gradient descent; inverse iterative algorithm; monotonicity; regularization; convergence
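To make the smoothing idea concrete, here is a minimal sketch of gradient descent with a smoothed L1/2 penalty. It assumes the piecewise polynomial surrogate for |w| that is common in the smoothing-L1/2 literature; the threshold a, the coefficient lam, the learning rate, and the toy least-squares problem are illustrative choices, not the functions or values specified in this paper.

```python
import numpy as np

def smooth_abs(w, a=0.1):
    """Smooth surrogate for |w|: polynomial inside (-a, a), exact |w| outside.
    Matches |w| in value and slope at w = +/- a, and smooth_abs(0) = 3a/8 > 0."""
    poly = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(w) < a, poly, np.abs(w))

def smooth_abs_grad(w, a=0.1):
    """Derivative of smooth_abs; reduces to sign(w) outside (-a, a)."""
    poly_grad = -w**3 / (2 * a**3) + 3 * w / (2 * a)
    return np.where(np.abs(w) < a, poly_grad, np.sign(w))

def penalty_grad(w, lam=1e-3, a=0.1):
    """Gradient of lam * sum smooth_abs(w)**0.5. Well defined at w = 0
    because smooth_abs is bounded away from zero there, so the
    non-Lipschitz singularity of |w|**0.5 never appears."""
    f = smooth_abs(w, a)
    return lam * 0.5 * f**(-0.5) * smooth_abs_grad(w, a)

# Toy least-squares problem trained by plain gradient descent plus the
# smoothed penalty gradient (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(50, 10)), rng.normal(size=50)
w = rng.normal(size=10)
lr = 0.01
for _ in range(2000):
    grad_loss = X.T @ (X @ w - y) / len(y)
    w -= lr * (grad_loss + penalty_grad(w))
print("weights with |w| < 1e-2:", int(np.sum(np.abs(w) < 1e-2)))
```

Because smooth_abs(0) = 3a/8 > 0, the square root and its gradient stay bounded near the origin; this is exactly the non-Lipschitz behavior of the raw L1/2 penalty that the smoothing is meant to remove, and it is what makes the monotonicity and convergence analysis tractable.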