in AI and Deep Learning by (50.2k points)

I need to record the loss history over time so I can plot it in a graph. Here is the skeleton of my code:

optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    loss, method='L-BFGS-B',
    options={'maxiter': args.max_iterations, 'disp': print_iterations})

optimizer.minimize(sess, loss_callback=append_loss_history)

where append_loss_history is defined as:

def append_loss_history(**kwargs):
    global step
    if step % 50 == 0:
        loss_history.append(loss.eval())
    step += 1

In the verbose output of ScipyOptimizerInterface, the loss decreases over time, but when I print loss_history, the values are nearly identical over time.

Referring to the docs: "Variables subject to optimization are updated in-place AT THE END OF OPTIMIZATION." Is that why the loss appears unchanged?

1 Answer

by (108k points)

The variables are not modified until the end of the optimization (their updated values are fed into the session calls rather than assigned), so evaluating the loss through a "back channel" such as loss.eval() inside your callback reads the un-modified variables. Instead, use the fetches argument of optimizer.minimize to piggyback on the session calls that already have the updated values fed in:

import tensorflow as tf

def print_loss(loss_evaled, vector_evaled):
  print(loss_evaled, vector_evaled)

vector = tf.Variable([7., 7.], 'vector')
loss = tf.reduce_sum(tf.square(vector))

optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    loss, method='L-BFGS-B',
    options={'maxiter': 100})

with tf.Session() as session:
  tf.global_variables_initializer().run()
  optimizer.minimize(session, loss_callback=print_loss,
                     fetches=[loss, vector])
  print(vector.eval())
This prints Tensors with the updated values:

98.0 [ 7.  7.]

79.201 [ 6.29289341  6.29289341]

7.14396e-12 [ -1.88996808e-06  -1.88996808e-06]

[ -1.88996808e-06  -1.88996808e-06]
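Since tf.contrib (and with it ScipyOptimizerInterface) was removed in TensorFlow 2.x, here is a minimal sketch of the same idea, recording a loss history per iteration, using scipy.optimize.minimize directly on the quadratic objective from the answer above. The loss_and_grad and record_loss names are illustrative, not part of any API:

```python
import numpy as np
from scipy.optimize import minimize

loss_history = []

def loss_and_grad(x):
    # Quadratic objective loss = sum(x**2) with its analytic gradient.
    return np.sum(x ** 2), 2.0 * x

def record_loss(x):
    # SciPy calls this after each iteration with the current point,
    # so the recorded losses reflect the updated variables.
    loss_history.append(np.sum(x ** 2))

result = minimize(loss_and_grad, x0=np.array([7.0, 7.0]), jac=True,
                  method='L-BFGS-B', callback=record_loss,
                  options={'maxiter': 100})
print(loss_history)
```

Because the callback sees the post-update iterate, loss_history decreases just like the verbose output, which is exactly the behavior the fetches argument provided in the contrib interface.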
