
In the MNIST beginner tutorial, there is the statement

accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

tf.cast basically changes the type of tensor the object is, but what is the difference between tf.reduce_mean and np.mean?
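For reference, the cast-then-mean computation from that line can be sketched in plain NumPy (the boolean array below is a made-up stand-in for correct_prediction):

```python
import numpy as np

# Hypothetical stand-in for correct_prediction: one boolean per example,
# True where the predicted label matched the true label.
correct_prediction = np.array([True, False, True, True])

# Cast booleans to floats (True -> 1.0, False -> 0.0), then average them;
# the fraction of Trues is the accuracy.
accuracy = np.mean(correct_prediction.astype("float32"))
print(accuracy)  # 0.75
```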

Here is the doc on tf.reduce_mean:

reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)

input_tensor: The tensor to reduce. Should have numeric type.

reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions.

# 'x' is [[1., 1.],
#         [2., 2.]]

tf.reduce_mean(x) ==> 1.5

tf.reduce_mean(x, 0) ==> [1.5, 1.5]

tf.reduce_mean(x, 1) ==> [1.,  2.]

For a 1D vector, it looks like np.mean == tf.reduce_mean, but I don't understand what's happening in tf.reduce_mean(x, 1) ==> [1., 2.]. tf.reduce_mean(x, 0) ==> [1.5, 1.5] kind of makes sense, since the means of [1, 2] and [1, 2] are [1.5, 1.5], but what's going on with tf.reduce_mean(x, 1)?


numpy.mean and tensorflow.reduce_mean do essentially the same thing: they compute the mean of a tensor, either over all elements or along a given axis.

NumPy example:

import numpy as np

c = np.array([[3., 4], [5., 6], [6., 7]])
print(np.mean(c, 1))

Output:

[ 3.5  5.5  6.5]

TensorFlow example:

import tensorflow as tf

Mean = tf.reduce_mean(c, 1)
with tf.Session() as sess:
    result = sess.run(Mean)
    print(result)

Output:

[ 3.5  5.5  6.5]

In the above example, axis (NumPy) or reduction_indices (TensorFlow) is 1, so the mean is computed across (3, 4), (5, 6) and (6, 7); the value defines the axis along which the mean is taken. When it is 0, the mean is computed across (3, 5, 6) and (4, 6, 7), and so on.
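Applying this back to the 2×2 array from the question, a quick NumPy check (my own sketch) shows the same per-axis behavior:

```python
import numpy as np

x = np.array([[1., 1.],
              [2., 2.]])

print(np.mean(x))          # 1.5       -> mean over all elements
print(np.mean(x, axis=0))  # [1.5 1.5] -> mean down each column
print(np.mean(x, axis=1))  # [1. 2.]   -> mean across each row
```

So tf.reduce_mean(x, 1) averages within each row: the first row [1., 1.] gives 1., and the second row [2., 2.] gives 2.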

A NumPy operation can be computed anywhere in Python code, but a TensorFlow operation must be run inside a TensorFlow Session. So whenever you need to perform a computation on your TensorFlow graph, it has to be done inside a Session.

For example:

NumPy:

npMean = np.mean(c)
print(npMean + 1)

TensorFlow:

tfMean = tf.reduce_mean(c)
with tf.Session() as sess:
    result = sess.run(tfMean)
    print(result + 1)
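One more difference worth knowing: np.mean promotes integer input to floating point, whereas tf.reduce_mean keeps the input dtype, so averaging an integer tensor yields a truncated integer result. That is part of why the MNIST tutorial casts correct_prediction to "float" before averaging. A NumPy-only sketch of the promotion side:

```python
import numpy as np

ints = np.array([1, 2, 3, 4])

# np.mean promotes integers to float64 before averaging:
print(np.mean(ints))        # 2.5
print(np.mean(ints).dtype)  # float64

# tf.reduce_mean, by contrast, keeps the input dtype, so the same
# values in an int32 tensor would average to 2 (truncated).
```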