A class of gradient algorithms with variable learning rates and convergence analysis for feedforward neural networks training
SHAO Hongmei1, AN Fengxian2
(1. School of Mathematics and Computational Science, China University of Petroleum, Dongying 257061, China; 2. Department of Computer Science, Huaiyin Institute of Technology, Huaian 223003, China)
Abstract:
A general updating rule for the learning rate is presented, and the convergence of the corresponding batch gradient algorithms with variable learning rates for training feedforward neural networks is proved. The monotonicity of the error function over the training iterations is also established.
Key words: feedforward neural networks; convergence; variable learning rate; gradient algorithm
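The abstract describes batch gradient training of a feedforward network in which the learning rate varies across iterations. The sketch below is an illustration, not the paper's algorithm: it trains a one-hidden-layer network by batch gradient descent and uses the common diminishing schedule eta_k = eta0 / (1 + lam * k) as one example of a variable learning rate (the network size, schedule, and toy regression task are all assumptions made for the example).

```python
import math
import random

def train(data, hidden=4, epochs=300, eta0=0.5, lam=0.02, seed=0):
    """Batch gradient descent for a 1-input, 1-output network with a tanh
    hidden layer. The learning rate follows the diminishing schedule
    eta_k = eta0 / (1 + lam * k), an example of a variable learning rate.
    Returns the batch mean-squared-error at each epoch."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]  # input -> hidden
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]  # hidden -> output
    b2 = 0.0
    n = len(data)
    errors = []
    for k in range(epochs):
        gw1 = [0.0] * hidden; gb1 = [0.0] * hidden
        gw2 = [0.0] * hidden; gb2 = 0.0
        err = 0.0
        for x, y in data:
            # forward pass: tanh hidden layer, linear output
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            out = sum(w2[j] * h[j] for j in range(hidden)) + b2
            r = out - y
            err += 0.5 * r * r / n
            # accumulate the batch gradient (backpropagation)
            for j in range(hidden):
                gw2[j] += r * h[j] / n
                dh = r * w2[j] * (1.0 - h[j] * h[j]) / n
                gw1[j] += dh * x
                gb1[j] += dh
            gb2 += r / n
        errors.append(err)
        # variable learning rate for iteration k
        eta = eta0 / (1.0 + lam * k)
        for j in range(hidden):
            w1[j] -= eta * gw1[j]; b1[j] -= eta * gb1[j]
            w2[j] -= eta * gw2[j]
        b2 -= eta * gb2
    return errors

# Toy batch task: fit sin(x) on [-1, 1].
data = [(x / 10.0, math.sin(x / 10.0)) for x in range(-10, 11)]
errs = train(data)
```

With a schedule of this form the step sizes shrink slowly, which is the kind of condition under which convergence results for batch gradient methods are typically stated; on this toy task the batch error decreases from its initial value over training.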