At 05.58 17/02/03 -0500, you wrote:
>I think it depends on how weights are adjusted during training.
You're right, and it also depends on the network topology. For example, time-lagged
recurrent networks use a non-linear, recurrent adaptation of the weights, so
training can get caught in local minima.
In fact, I always train this kind of network several times, because getting stuck
in a local minimum is not unusual: the first training run might take N1 epochs and
finish with error E1, while a second run might take N2 > N1 epochs yet reach a
better (lower) error E2 < E1.
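To show the multiple-restart idea in general terms, here is a minimal Python/NumPy
sketch (not AmiBroker AFL, and not tied to any particular neural-net package): it
trains a tiny one-hidden-layer network several times from different random initial
weights and keeps the run with the lowest final error. The toy sine series, the
train_once helper, and all parameter values are my own illustrative assumptions,
not something from the message above.

# Multi-start training of a tiny one-hidden-layer network with plain
# gradient descent on a toy series. Each restart begins from different
# random weights, so runs that fall into a poor local minimum can be
# discarded in favour of the best one.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: predict the next value of a noisy sine from the previous 5 values.
t = np.arange(300)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)
X = np.stack([series[i:i + 5] for i in range(len(series) - 5)])
y = series[5:]

def train_once(X, y, hidden=8, epochs=2000, lr=0.05, seed=0):
    """One training run from a random initialisation; returns (weights, mse)."""
    r = np.random.default_rng(seed)
    W1 = r.standard_normal((X.shape[1], hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = r.standard_normal(hidden) * 0.5
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)          # hidden-layer activations
        err = (h @ W2 + b2) - y           # prediction error
        # Gradients of the (scaled) mean-squared error.
        gW2 = h.T @ err / len(y)
        gb2 = err.mean()
        dh = np.outer(err, W2) * (1 - h ** 2)
        gW1 = X.T @ dh / len(y)
        gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    h = np.tanh(X @ W1 + b1)
    mse = float(np.mean((h @ W2 + b2 - y) ** 2))
    return (W1, b1, W2, b2), mse

# Train several times and keep the run with the lowest final error,
# in the spirit of repeating training to escape local minima.
best_weights, best_mse = None, np.inf
for restart in range(5):
    weights, mse = train_once(X, y, seed=restart)
    print(f"restart {restart}: final MSE = {mse:.5f}")
    if mse < best_mse:
        best_weights, best_mse = weights, mse
print(f"best run: MSE = {best_mse:.5f}")

Each restart typically ends at a different final error, which is exactly the
N1/E1 versus N2/E2 behaviour described above; keeping only the best run is a
simple way to work around the local-minimum problem.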