
Perhaps too general a question, but can anyone explain what would cause a Convolutional Neural Network to diverge?

Specifics:

I am using TensorFlow's iris_training model with some of my own data and keep getting

ERROR:tensorflow:Model diverged with loss = NaN.

Traceback...

tensorflow.contrib.learn.python.learn.monitors.NanLossDuringTrainingError: NaN loss during training.

Traceback originated with the line:

tf.contrib.learn.DNNClassifier(
    feature_columns=feature_columns, hidden_units=[300, 300, 300],
    # optimizer=tf.train.ProximalAdagradOptimizer(learning_rate=0.001, l1_regularization_strength=0.00001),
    n_classes=11, model_dir="/tmp/iris_model")

I've tried adjusting the optimizer, using a learning rate of zero, and using no optimizer at all. Any insights into network layers, data size, etc. are appreciated.

1 Answer


There are lots of things that can make a model diverge.

For example:

  1. A learning rate that is too high. You can usually spot this when the loss starts to increase and then diverges to infinity. The simplest experiment is to pass a much smaller learning rate (see the first sketch after this list).

  2. The DNNClassifier uses the categorical cross-entropy cost function. This function takes the log of the prediction, which diverges as the prediction approaches zero. Adding a small epsilon value to the prediction prevents this divergence; DNNClassifier most likely already does this internally, or relies on TensorFlow's numerically stable ops. Other numerical-stability issues produce the same symptom, for example division by zero, or the square root, whose derivative blows up near zero, when working with finite-precision numbers. The second sketch after this list illustrates the epsilon trick.

  3. You may have an issue with the input data. Try calling assert not np.any(np.isnan(x)) on the input to make sure you are not feeding in NaNs. Also make sure all of the target values are valid, and normalize the data; for image inputs you usually want pixels in the range [-1, 1] rather than [0, 255]. The last sketch after this list shows these checks.

  4. The labels must be in the domain of the loss function: when using a logarithm-based loss function, all labels must be non-negative (the last sketch after this list also validates this).
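
For point 1, a minimal sketch of lowering the learning rate, reusing the question's own (commented-out) ProximalAdagradOptimizer; it assumes feature_columns is already built for your data:

import tensorflow as tf

# Same estimator as in the question, but with an explicit optimizer and a
# learning rate an order of magnitude smaller, to rule out divergence caused
# by too-large updates. `feature_columns` is assumed to be defined already.
classifier = tf.contrib.learn.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[300, 300, 300],
    optimizer=tf.train.ProximalAdagradOptimizer(
        learning_rate=0.0001,
        l1_regularization_strength=0.00001),
    n_classes=11,
    model_dir="/tmp/iris_model")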
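
To see why the epsilon in point 2 matters, here is a small NumPy illustration of the clipping idea. This is not DNNClassifier's actual implementation, just a sketch of the technique:

import numpy as np

def stable_cross_entropy(predictions, targets, epsilon=1e-7):
    # Clip predictions away from exact 0 (and 1) so np.log never produces -inf.
    predictions = np.clip(predictions, epsilon, 1.0 - epsilon)
    return -np.sum(targets * np.log(predictions), axis=-1)

targets = np.array([[0.0, 1.0, 0.0]])
preds = np.array([[0.5, 0.0, 0.5]])   # predicted probability 0 for the true class

print(-np.sum(targets * np.log(preds), axis=-1))   # [inf] -- log(0) diverges (NumPy also warns)
print(stable_cross_entropy(preds, targets))        # large but finite value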
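
For points 3 and 4, a few quick sanity checks on the data usually catch the problem before training starts. A sketch assuming x holds your features, y holds integer class labels, and the file names are just placeholders for however you actually load your data:

import numpy as np

x = np.load("features.npy")   # placeholder: load your features however you normally do
y = np.load("labels.npy")     # placeholder: integer class labels

# No NaNs or infinities sneaking into the inputs.
assert not np.any(np.isnan(x)), "features contain NaN"
assert np.all(np.isfinite(x)), "features contain inf"

# Normalize: for pixel data, map [0, 255] into [-1, 1].
x = (x.astype(np.float32) / 127.5) - 1.0

# Labels must be valid for the loss: non-negative integers below n_classes.
n_classes = 11
assert np.issubdtype(y.dtype, np.integer), "labels should be integers"
assert y.min() >= 0 and y.max() < n_classes, "labels out of range"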

Hope this answer helps.

If you wish to know more about DNNClassifier, see the Artificial Neural Network Tutorial.
