
I think it would be immensely helpful to the TensorFlow community if there were a well-documented solution to the crucial task of testing a single new image against the model created by the convnet in the CIFAR-10 tutorial.

 

I may be wrong, but this critical step that makes the trained model usable in practice seems to be lacking. There is a "missing link" in that tutorial: a script that would directly load a single image (as an array or binary), run it through the trained model, and return a classification.

 

Prior answers give partial solutions that explain the overall approach, but I haven't been able to implement any of them successfully. Other bits and pieces can be found here and there, but unfortunately they haven't added up to a working solution.

The script below is not yet functional, and I'd be happy to hear suggestions on how it can be improved to provide a solution for single-image classification using the model trained by the CIFAR-10 TF tutorial.

Assume all variables, file names, etc. are untouched from the original tutorial.

New file: cifar10_eval_single.py

import cv2
import tensorflow as tf

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_string('eval_dir', './input/eval',
                           """Directory where to write event logs.""")
tf.app.flags.DEFINE_string('checkpoint_dir', './input/train',
                           """Directory where to read model checkpoints.""")

def get_single_img():
    file_path = './input/data/single/test_image.tif'
    pixels = cv2.imread(file_path, 0)  # read as grayscale
    return pixels

def eval_single_img():
    # Below code adapted from @RyanSepassi; however, it is not functional.
    # Among other errors, the saver throws an error that there are no
    # variables to save.
    with tf.Graph().as_default():
        # Get the image.
        image = get_single_img()

        # Build a Graph.
        # TODO

        # Create dummy variables.
        x = tf.placeholder(tf.float32)
        w = tf.Variable(tf.zeros([1, 1], dtype=tf.float32))
        b = tf.Variable(tf.ones([1, 1], dtype=tf.float32))
        y_hat = tf.add(b, tf.matmul(x, w))

        saver = tf.train.Saver()

        with tf.Session() as sess:
            sess.run(tf.initialize_all_variables())
            ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
            if ckpt and ckpt.model_checkpoint_path:
                saver.restore(sess, ckpt.model_checkpoint_path)
                print('Checkpoint found')
            else:
                print('No checkpoint found')

            # Run the model to get predictions.
            predictions = sess.run(y_hat, feed_dict={x: image})
            print(predictions)

def main(argv=None):
    if tf.gfile.Exists(FLAGS.eval_dir):
        tf.gfile.DeleteRecursively(FLAGS.eval_dir)
    tf.gfile.MakeDirs(FLAGS.eval_dir)
    eval_single_img()

if __name__ == '__main__':
    tf.app.run()
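One likely reason a raw image won't work here: the tutorial's trained model expects a 24x24x3 float input produced by cropping and per-image standardization at eval time, not a raw grayscale array straight from cv2.imread. Below is a rough NumPy sketch of equivalent preprocessing; the function name `prepare_cifar10_input` and the grayscale-to-three-channel replication are my own assumptions, not part of the tutorial.

```python
import numpy as np

def prepare_cifar10_input(pixels, crop=24):
    """Center-crop to crop x crop and apply per-image standardization,
    mirroring the tutorial's eval-time preprocessing (a NumPy sketch)."""
    img = np.asarray(pixels, dtype=np.float32)
    if img.ndim == 2:
        # Grayscale input: replicate to 3 channels (an assumption; the
        # tutorial's model was trained on RGB images).
        img = np.stack([img] * 3, axis=-1)
    h, w = img.shape[:2]
    top, left = (h - crop) // 2, (w - crop) // 2
    img = img[top:top + crop, left:left + crop, :]
    # Per-image standardization: zero mean, unit (adjusted) stddev.
    mean = img.mean()
    adjusted_std = max(img.std(), 1.0 / np.sqrt(img.size))
    return (img - mean) / adjusted_std

# Example with a fake 32x32 grayscale image.
fake = np.random.randint(0, 256, size=(32, 32)).astype(np.uint8)
prepared = prepare_cifar10_input(fake)
print(prepared.shape)  # (24, 24, 3)
```

The result would still need to be expanded to a batch of one (shape [1, 24, 24, 3]) before being fed to the inference graph.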

1 Answer


You can build the tutorial's inference graph directly on your (properly preprocessed) image tensor, restore the trained weights from the checkpoint, and evaluate the softmax output:

softmax = gn.inference(image)
saver = tf.train.Saver()
ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)

with tf.Session() as sess:
  saver.restore(sess, ckpt.model_checkpoint_path)
  softmaxval = sess.run(softmax)
  print(softmaxval)

Output:

[[  6.73550041e-03   4.44930716e-04   9.92570221e-01   1.00681427e-06
    3.05406687e-08   2.38927707e-04   1.89839399e-12   9.36238484e-06
    1.51646684e-09   3.38977535e-09]]
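The printed vector holds one score per class. To turn it into an actual classification, take the argmax and look it up in the standard CIFAR-10 class order. The `classify` helper below is a plain-Python sketch of that last step, not part of the tutorial; the label list is the dataset's documented class order.

```python
# Standard CIFAR-10 class order (indices 0-9).
CIFAR10_LABELS = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                  'dog', 'frog', 'horse', 'ship', 'truck']

def classify(softmax_row):
    """Return (label, score) for the highest-scoring class."""
    idx = max(range(len(softmax_row)), key=lambda i: softmax_row[i])
    return CIFAR10_LABELS[idx], softmax_row[idx]

# The row printed in the output above:
row = [6.73550041e-03, 4.44930716e-04, 9.92570221e-01, 1.00681427e-06,
       3.05406687e-08, 2.38927707e-04, 1.89839399e-12, 9.36238484e-06,
       1.51646684e-09, 3.38977535e-09]
print(classify(row))  # → ('bird', 0.992570221)
```

So for this particular output, the model is about 99% confident the image is class 2.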
