0 votes
3 views
in Machine Learning by (19k points)

I would like to calculate the NN model's certainty/confidence (see "What my deep model doesn't know") - when the NN tells me an image represents an "8", I would like to know how certain it is. Is my model 99% certain it is an "8", or is it 51% certain it is an "8" but it could also be a "6"? Some digits are quite ambiguous and I would like to know for which images the model is just "flipping a coin".

I have found some theoretical writings about this, but I have trouble putting it into code. If I understand correctly, I should evaluate a test image multiple times while "killing off" different neurons (using dropout) and then...?

Working on the MNIST dataset, I am running the following model:

from keras.models import Sequential
from keras.layers import Dense, Activation, Conv2D, Flatten, Dropout

model = Sequential()
model.add(Conv2D(128, kernel_size=(7, 7),
                 activation='relu',
                 input_shape=(28, 28, 1)))
model.add(Dropout(0.20))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Dropout(0.20))
model.add(Flatten())
model.add(Dense(units=64, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(units=10, activation='softmax'))

model.summary()

model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])

model.fit(train_data, train_labels, batch_size=100, epochs=30,
          validation_data=(test_data, test_labels))

Question: how should I predict with this model so that I get its certainty about predictions too? I would appreciate some practical examples (preferably in Keras, but any will do).

I am looking for an example of how to get certainty using the method outlined by Yarin Gal (or an explanation of why some other method yields better results).

1 Answer

0 votes
by (33.1k points)

You can implement a dropout-based approach (Monte Carlo dropout) to measure uncertainty. The idea is to keep dropout active at test time as well and run several stochastic forward passes:

For example: 

import keras.backend as K

# f maps [input batch, learning phase] to the model's softmax output
f = K.function([model.layers[0].input, K.learning_phase()],
               [model.layers[-1].output])

You can then use this function to build an uncertainty predictor:

import numpy as np

def predict_with_uncertainty(f, x, n_classes=10, n_iter=10):
    # One row per stochastic forward pass; the model outputs n_classes probabilities per sample
    result = np.zeros((n_iter, x.shape[0], n_classes))
    for i in range(n_iter):
        # Passing 1 as the learning phase keeps dropout active at prediction time
        result[i] = f([x, 1])[0]
    prediction = result.mean(axis=0)   # mean softmax probabilities across passes
    uncertainty = result.var(axis=0)   # variance across passes
    return prediction, uncertainty
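
As a rough usage sketch for the MNIST model above (assuming the model has already been fit and that test_data holds images of shape (N, 28, 28, 1)):

probs, var = predict_with_uncertainty(f, test_data[:100], n_classes=10, n_iter=50)

predicted_digit = np.argmax(probs, axis=1)                   # most likely class per image
confidence = probs[np.arange(len(probs)), predicted_digit]   # mean softmax probability of that class
spread = var[np.arange(len(var)), predicted_digit]           # variance across passes; large values mean the model is "flipping a coin"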

Hope this answer helps.

by (120 points)
Hi. Could you please clarify what the parameters n_iter, x and f are?
by (33.1k points)
Hi Vaishnavi

Here f is the Keras backend function built above: it takes a batch of inputs plus the learning-phase flag and returns the model's softmax output, with dropout still active.

x is the input data you want predictions for, e.g. a batch of test images of shape (N, 28, 28, 1) for the MNIST model above.

n_iter is the number of stochastic forward passes: the same batch is pushed through the network n_iter times (10 by default), each time with a different random dropout mask, and the results are averaged.
by (120 points)
Thank you for the clarification.
by (120 points)
What is the meaning of 1 in f([x, 1])? Can we replace 1 with 0 or -1?
by (33.1k points)
Hi Shani

The 1 is the value fed into K.learning_phase(): 1 means the network runs in training mode, so the dropout layers stay active and each call to f returns a slightly different prediction, which is exactly what we need to estimate uncertainty. Passing 0 runs the network in test mode, dropout is disabled, and every call returns the same deterministic output, so there is nothing to average. -1 is not a valid learning-phase value; only 0 and 1 are accepted.
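
To make the difference concrete, here is a small sketch (assuming x is a batch of test images; the exact values will of course depend on your trained model):

deterministic = f([x, 0])[0]   # learning phase 0: dropout disabled, identical output on every call
stochastic_a = f([x, 1])[0]    # learning phase 1: dropout active, output varies between calls
stochastic_b = f([x, 1])[0]    # a second pass gives slightly different probabilities

The uncertainty estimate comes from the variation between the learning-phase-1 passes, so 1 is the value you want here.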
