in AI and Deep Learning by (20.3k points)

I primarily develop my models in R and I am currently learning TensorFlow. I'm going through a tutorial with the following code:

import tensorflow as tf

raw_data = [1., 2., 8., -1., 0., 5.5, 6., 13.]

sess = tf.InteractiveSession()  # needed so that .run() and .eval() have a default session
spike = tf.Variable(False)
spike.initializer.run()

for i in range(1, len(raw_data)):
    if raw_data[i] - raw_data[i-1] > 5:
        updater = tf.assign(spike, True)
        updater.eval()
    else:
        tf.assign(spike, False).eval()
    print("Spike", spike.eval())

sess.close()

From a layman's perspective, why do I need to declare variables and then initialize them in TensorFlow? I know this may be a basic question, but it's something you don't deal with in R.

1 Answer

by (44.7k points)

First of all, although code like this works, it's not typical of real applications. More commonly there is a separation of responsibilities: the model is defined in one set of source files and executed in another. Initialization belongs to the execution side, because memory for a variable is allocated only when the session starts.

Secondly, a constant is not the only way to initialize a variable. The Xavier initializer, for instance, needs the whole graph structure to compute the number of incoming and outgoing connections and deduce the standard deviation from them. It simply couldn't work if we tried to fill in the variable's values at definition time.

Alternatively, you can enable eager execution at the start of your program:

import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()

and it will save you from all of this boilerplate.

...