Training and Validating Correctly With Encog
I think I'm doing something wrong with Encog. In all of the examples I've seen, they train until a target training error is reached and then print the results. When is the gradient calculated and when are the weights of the hidden layers updated? Is it all contained within the training.iteration() function? This confuses me because, even though the training error keeps decreasing in my program (which seems to imply the weights are changing), I have not yet run the validation set through the network (which I split off from the training set when building the data at the beginning) in order to determine whether the validation error is still decreasing along with the training error.
I have loaded the validation set into a trainer and run it through the network with compute() to get a validation error in the same way as the training error, but it's hard to tell whether it is computed the same way as the training error. Meanwhile, my hit rate on the test set is less than 50% (which is what you'd expect if the network were not learning).
I know there are a lot of different types of backpropagation techniques, in particular the common one using gradient descent as well as resilient backpropagation. Is there any part of the network we are expected to update manually ourselves?
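Roughly, here is how I compute the validation error by hand with compute() (a simplified sketch; names like validationSet are placeholders for my own code):

```java
import org.encog.ml.data.MLData;
import org.encog.ml.data.MLDataPair;
import org.encog.ml.data.MLDataSet;
import org.encog.neural.networks.BasicNetwork;

// Simplified sketch: mean squared error over a held-out validation set,
// computed by hand with network.compute().
static double validationMse(BasicNetwork network, MLDataSet validationSet) {
    double sumSquared = 0;
    int count = 0;
    for (MLDataPair pair : validationSet) {
        MLData output = network.compute(pair.getInput());
        for (int i = 0; i < output.size(); i++) {
            double diff = pair.getIdeal().getData(i) - output.getData(i);
            sumSquared += diff * diff;
            count++;
        }
    }
    return sumSquared / count;
}
```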
In Encog, the weights are updated during the train.iteration method call. That includes all weights. If you are using a gradient descent type trainer (i.e. backprop, rprop, quickprop) the neural network is updated at the end of each iteration call. If you are using a population based trainer (i.e. a genetic algorithm, etc.) you must call finishTraining so that the best population member can be copied to the actual neural network you passed to the trainer's constructor. In general, it is a good idea to call finishTraining after your iterations; some trainers need it, others do not.
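For reference, here is a minimal sketch of the usual training loop (the standard XOR setup; the network layout and error threshold are just placeholders):

```java
import org.encog.Encog;
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

public class TrainSketch {
    public static void main(String[] args) {
        double[][] input = { {0, 0}, {1, 0}, {0, 1}, {1, 1} };
        double[][] ideal = { {0}, {1}, {1}, {0} };
        MLDataSet trainingSet = new BasicMLDataSet(input, ideal);

        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 2));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
        network.getStructure().finalizeStructure();
        network.reset();

        ResilientPropagation train = new ResilientPropagation(network, trainingSet);
        int epoch = 1;
        do {
            train.iteration(); // gradients computed and all weights updated inside this call
            System.out.println("Epoch #" + epoch + " Error: " + train.getError());
            epoch++;
        } while (train.getError() > 0.01);
        train.finishTraining(); // harmless for RPROP; required for population-based trainers
        Encog.getInstance().shutdown();
    }
}
```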
Another thing to keep in mind is that some trainers report the current error at the beginning of the call to iteration, while others report it at the end of the iteration (the improved error). This is done for efficiency, to keep some of the trainers from having to iterate over the data twice.
Keeping a validation set to test your training is a good idea. A few methods that might be helpful to you (a sketch using them follows below):

BasicNetwork.dumpWeights - returns the weights of the neural network as a string, so you can see whether they have changed.
BasicNetwork.calculateError - pass it a data set and it gives you the error.
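For example, a minimal early-stopping sketch (assuming train, network, trainingSet, and validationSet are already set up as in the loop above; the stopping rule here is just an illustration):

```java
double bestValidationError = Double.MAX_VALUE;
int epoch = 1;
do {
    train.iteration();
    // calculateError runs the data set through the network and returns the error
    double validationError = network.calculateError(validationSet);
    System.out.println("Epoch #" + epoch
            + " Train: " + train.getError()
            + " Validation: " + validationError);
    if (validationError < bestValidationError) {
        bestValidationError = validationError;
    } else {
        break; // validation error stopped improving; stop to avoid overfitting
    }
    epoch++;
} while (train.getError() > 0.01);
train.finishTraining();

// dumpWeights returns the flat weight array as a string,
// useful to confirm the weights actually changed
System.out.println(network.dumpWeights());
```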