Asked in Data Science by (17.6k points)

I am using the "tensorflow" Keras, i.e. I did:

from tensorflow import keras

from tensorflow.keras import layers

I am not sure if this is different from standalone Keras with TensorFlow as the backend. I am on TF 1.14.0, running on Google Colab.

The problem is that each time I re-create a model (or recompile it), a _N suffix is appended to the metric names. You can see this in the printout during training, and also in the keys of history.history.

Epoch 1/100

32206/32206 [==============================] - 4s 138us/sample - loss: 0.8918 - precision_4: 0.6396 - recall_4: 0.4613 - val_loss: 5.5533 - val_precision_4: 0.0323 - val_recall_4: 0.0492

Epoch 2/100

I am not sure whether these names matter for Keras to work properly, but they are an inconvenience when I try to access them in the history. I could write more code to parse them, but I would like to know if I can just enforce the names in the first place. Usually, when I re-instantiate the model (or recreate it from the functional API), I don't intend to keep the old version around (I just overwrite the variable "model"). So I am not sure whether the "_N" suffixes have any importance beyond being names. Does Keras somehow make use of them internally, such that I may be better off living with those names and just parsing them out properly when I need to access them later?
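For reference, here is a minimal sketch of my setup (the layer sizes, data, and variable names are placeholders, not my actual code):

from tensorflow import keras
from tensorflow.keras import layers

def build_model():
    # Re-running this cell creates new metric objects; without an explicit
    # name=, tf.keras auto-generates unique names like precision_4, recall_4.
    model = keras.Sequential([
        layers.Dense(16, activation='relu', input_shape=(20,)),
        layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=[keras.metrics.Precision(), keras.metrics.Recall()])
    return model

model = build_model()  # the old model is simply overwritten each time
# history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=100)
# print(history.history.keys())  # e.g. dict_keys([..., 'precision_4', 'recall_4', ...])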

1 Answer

Answered by (41.4k points)

When specifying your metrics, give each one an explicit name:

keras.metrics.Precision(name='precision')

keras.metrics.Recall(name='recall')

Keras will then stick to the names you give, both in the training printout and in history.history.
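For example (the model and data below are just a sketch; the metrics argument is the relevant part):

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(16, activation='relu', input_shape=(20,)),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=[keras.metrics.Precision(name='precision'),
                       keras.metrics.Recall(name='recall')])

# However many times you rebuild and recompile, the keys stay fixed:
# history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=100)
# history.history.keys()
# -> dict_keys(['loss', 'precision', 'recall', 'val_loss', 'val_precision', 'val_recall'])

The _N suffix only comes from Keras's automatic unique-name counter for metric objects created in the same session; it has no meaning beyond keeping the names distinct, so overriding it with name= is safe.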

