I'm trying to follow this notebook: https://github.com/bhaktipriya/Blues/blob/master/Music.ipynb

The problem is that it's written for Python 2.7 and I'm trying to port it to Python 3.6. Luckily, someone has already ported the midi library to Python 3 (https://github.com/louisabraham/python3-midi), and I was able to use it to parse the MIDI files into a numpy array. Now my problem is that I'm getting these errors:

TypeError                                 Traceback (most recent call last)

<ipython-input-62-f35c20bfe55b> in <module>()

      1 #backward pass, x samples drawn from prob distribution defn by (hk,w,bv)

----> 2 x_sample=gibbs_sample(2)

      3 print(x_sample)

      4 #h sampled from prob distrib defn by (x,w,bh)

      5 h=sample(tf.sigmoid(tf.matmul(x, W) + bh))

<ipython-input-57-943cbc813622> in gibbs_sample(k)

     13     #Gibbs sample(done for k iterations) is used to approximate the distribution of the RBM(defined by W, bh, bv)

     14     ct=tf.constant(0)

---> 15     [_, _, x_sample]=control_flow_ops.while_loop(lambda count, num_iter, *args: count < num_iter,gibbs_step, [ct, tf.constant(k), x], 1, False)

     16     #to stop tensorflow from propagating gradients back through the gibbs step

     17     x_sample=tf.stop_gradient(x_sample)

c:\users\ali\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py in while_loop(cond, body, loop_vars, shape_invariants, parallel_iterations, back_prop, swap_memory, name, maximum_iterations)

   3051       raise TypeError("body must be callable.")

   3052     if parallel_iterations < 1:

-> 3053       raise TypeError("parallel_iterations must be a positive integer.")

   3054 

   3055     if maximum_iterations is not None:

TypeError: parallel_iterations must be a positive integer.

I'm also getting strange errors related to the shape of the numpy array in the training step:

size_tr=tf.cast(tf.shape(x)[0], tf.float32)
eta=lr/size_tr
W_upd=tf.multiply(eta, tf.subtract(tf.matmul(tf.transpose(x), h), tf.matmul(tf.transpose(x_sample), h_sample)))
bv_upd=tf.multiply(eta, tf.reduce_sum(tf.subtract(x, x_sample), 0, True))
bh_upd=tf.multiply(eta, tf.reduce_sum(tf.subtract(h, h_sample), 0, True))
updt=[W.assign_add(W_upd), bv.assign_add(bv_upd), bh.assign_add(bh_upd)]

sess=tf.Session()
init=tf.initialize_all_variables()
sess.run(init)

for epoch in tqdm(range(epochs)):
    for song in songs:
        song=np.array(song)
        #reshaping song into chunks of timestep size
        chunks=song.shape[0]/timesteps
        chunks = int(np.floor(chunks))
        dur=chunks*timesteps
        dur = int(np.floor(dur))
        song=song[:dur]
        song=np.reshape(song, [chunks, song.shape[1]*timesteps])
        #Train the RBM on batch_size examples at a time
        for i in range(1, len(song), batch_size):
            tr_x=song[i:i+batch_size]
            sess.run(updt, feed_dict={x: tr_x})

The error is:

    InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'x_7' with dtype float and shape [?,2340]

         [[Node: x_7 = Placeholder[dtype=DT_FLOAT, shape=[?,2340], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

1 Answer


The error you are getting means that the sess.run() call depends on a placeholder that has not been fed. There is only one placeholder, x, in your code, and the name "x_7" in the error suggests that the placeholder x has been created multiple times, most likely because the cell that defines it was re-executed.
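To see why the name ends up as "x_7": in TF 1.x, creating an op with an already-used name in the default graph just appends a numeric suffix. A minimal illustration, assuming the placeholder is created with name='x' and shape [None, 2340] as the error message suggests:

import tensorflow as tf

# Running this cell repeatedly (without resetting the graph) keeps adding
# new placeholder ops to the same default graph.
x = tf.placeholder(tf.float32, shape=[None, 2340], name='x')
print(x.name)  # 'x:0' the first time, 'x_1:0' the second time, and so on

# Ops built in later cells keep a reference to whichever placeholder existed
# when they were built, so sess.run(updt, feed_dict={x: tr_x}) can end up
# feeding the newest placeholder while the graph still reads an older one.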

So, use tf.reset_default_graph() and then re-execute each of the cells in your notebook in order from top to bottom. This ensures the graph, including the placeholder x, is defined only once, and the error will be fixed.
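A minimal sketch of that workflow, assuming the placeholder name and shape from your error message (the rest of the graph-building code is only indicated by comments):

import tensorflow as tf

tf.reset_default_graph()  # drop x, x_1, ..., x_7 and start from a clean graph

# Re-run the graph-building cells exactly once after the reset, e.g.:
x = tf.placeholder(tf.float32, shape=[None, 2340], name='x')
# ... recreate W, bh, bv, gibbs_sample, updt, etc. here ...

sess = tf.Session()
sess.run(tf.global_variables_initializer())  # or tf.initialize_all_variables() as in your code
# Now feed_dict={x: tr_x} refers to the same placeholder the graph actually uses.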

