This thesis compares the learning efficiency of single-layer and multilayer feedforward neural networks (FNNs) and recurrent neural networks (RNNs). In the RNN architecture, infinite impulse response (IIR) digital filters serve as the signal-recursion elements, and piecewise linear activation functions are adopted. In the RNN implementation, the pole and L2-norm sensitivities are minimized jointly, and the weights and biases of every neuron are updated with the back-propagation learning algorithm. The learning performance of the networks is then demonstrated on a simple sinusoidal (transcendental) function and on more complicated composite functions. The simulation results show that, for low-complexity functions, the single-layer RNN performs slightly better than the single-layer FNN, while for higher-complexity functions the single-layer RNN requires markedly fewer learning iterations, indicating that with a small number of neurons the RNN effectively reduces the number of iterations needed. When the number of layers is increased, however, the FNN requires fewer learning iterations than the RNN.
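To make the building blocks named above concrete, here is a minimal Python sketch of a single recurrent neuron whose feedback path is a first-order IIR filter and whose output passes through a saturating piecewise linear activation, trained by a gradient update to predict the next sample of a sinusoid. This is only an illustration under assumed names and a truncated (one-step) gradient; it is not the thesis's exact pole/L2-sensitivity-optimized implementation or its full back-propagation procedure.

```python
import math

def pwl(x, lo=-1.0, hi=1.0):
    """Piecewise linear (saturating) activation: identity on [lo, hi]."""
    return max(lo, min(hi, x))

def pwl_grad(x, lo=-1.0, hi=1.0):
    """Derivative of pwl: 1 inside the linear region, 0 when saturated."""
    return 1.0 if lo < x < hi else 0.0

# One recurrent neuron whose feedback is a first-order IIR filter:
#   s[n] = a*y[n-1] + w*x[n] + b,   y[n] = pwl(s[n])
# Parameter names (w, b, a) and the truncated-gradient update are
# illustrative assumptions, not the thesis's exact procedure.
xs = [math.sin(2 * math.pi * n / 25) for n in range(200)]  # simple sinusoid

def mse(w, b, a):
    """One-step-ahead prediction error over the whole sequence."""
    y_prev, total = 0.0, 0.0
    for n in range(len(xs) - 1):
        y_prev = pwl(a * y_prev + w * xs[n] + b)
        total += (y_prev - xs[n + 1]) ** 2
    return total / (len(xs) - 1)

def train(w=0.5, b=0.0, a=0.3, lr=0.05, epochs=100):
    """Gradient-descent updates on w, b, a (gradient truncated at one step)."""
    for _ in range(epochs):
        y_prev = 0.0
        for n in range(len(xs) - 1):
            s = a * y_prev + w * xs[n] + b
            y = pwl(s)
            g = (y - xs[n + 1]) * pwl_grad(s)  # error times activation slope
            w, b, a = w - lr * g * xs[n], b - lr * g, a - lr * g * y_prev
            y_prev = y
    return w, b, a

before = mse(0.5, 0.0, 0.3)   # error with the initial parameters
after = mse(*train())          # error after training
```

After training, `after` should be well below `before`, mirroring the abstract's point that the recurrent feedback path lets a very small network fit such signals efficiently.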