Convergence analysis of inverse iterative algorithms for neural networks with L1/2 penalty |
HUANG Bingjia1,WANG Jian1,2, WEN Yanqing1, YANG Xifeng1, SHAO Hongmei1, WANG Jing2
(1. College of Science, China University of Petroleum, Qingdao 266580, China; 2. Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China)
Abstract:
Compared with commonly used penalty terms, training neural networks with the L1/2 penalty yields sparser solutions and prunes redundant neurons more effectively. However, the L1/2 penalty is non-convex, non-smooth, and non-Lipschitz continuous, which inevitably causes numerical oscillations and complicates theoretical convergence analysis. A better approach is to approximate the penalty with a smoothing function. For the proposed algorithm, the error function is shown to decrease monotonically during training, and both weak and strong convergence are proved. The presented algorithm is more stable and produces sparser networks than existing inverse iterative neural network algorithms, and it applies to more general cases.
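The abstract's exact smoothing function is not reproduced here, so the following is only a minimal sketch of the idea, assuming the common smooth surrogate (w² + ε)^(1/4) for |w|^(1/2); the paper's actual smoothing may differ. A toy regularized least-squares problem stands in for network training and shows how gradient descent with the smoothed L1/2 penalty drives redundant weights toward zero:

```python
import numpy as np

def smoothed_l12(w, eps=1e-3):
    """Smooth surrogate for the L1/2 penalty sum_i |w_i|^(1/2).

    (w^2 + eps)^(1/4) is infinitely differentiable and approaches
    |w|^(1/2) as eps -> 0, avoiding the non-smoothness at w = 0.
    """
    return np.sum((w ** 2 + eps) ** 0.25)

def smoothed_l12_grad(w, eps=1e-3):
    # d/dw (w^2 + eps)^(1/4) = 0.5 * w * (w^2 + eps)^(-3/4)
    return 0.5 * w * (w ** 2 + eps) ** -0.75

# Toy regularized least-squares problem (a stand-in for network training):
#   min_w  (1/n) ||X w - y||^2 + lam * sum_i |w_i|^(1/2)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
w_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])  # sparse ground truth
y = X @ w_true + 0.01 * rng.normal(size=50)

w = np.zeros(5)
lam, lr = 0.5, 0.01
for _ in range(2000):
    data_grad = 2.0 * X.T @ (X @ w - y) / len(y)
    w -= lr * (data_grad + lam * smoothed_l12_grad(w))
# Weights attached to redundant inputs are driven close to zero,
# while the informative weights stay near their true values.
```

Because the surrogate is smooth everywhere, the gradient is well defined at w = 0, which is what makes the monotonicity and convergence analysis tractable; the non-smooth penalty itself would make the update undefined exactly where sparsity occurs.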
Key words: neural networks; gradient descent; inverse iterative algorithm; monotonicity; regularization; convergence