
I am trying to use TensorFlow to implement a DCGAN and have run into this error:

ValueError: Shapes must be equal rank, but are 2 and 1

From merging shape 1 with other shapes. for 'generator/Reshape/packed' (op: 'Pack') with input shapes: [?,2048], [100,2048], [2048].

As far as I can gather, this indicates that my tensor shapes do not match, but I cannot see what I need to change to fix the error. I believe the mistake lies somewhere between these methods:

First, I create a placeholder in a method using:

self.z = tf.placeholder(tf.float32, [None, self.z_dimension], name='z')
self.z_sum = tf.histogram_summary("z", self.z)
self.G = self.generator(self.z)

The last statement calls the generator method, which uses reshape to change the tensor:

self.z_ = linear(z, self.gen_dimension * 8 * sample_H16 * sample_W16, 'gen_h0_lin', with_w=True)
self.h0 = tf.reshape(self.z_, [-1, sample_H16, sample_W16, self.gen_dimension * 8])
h0 = tf.nn.relu(self.gen_batchnorm1(self.h0))

If it helps, here is my linear method:

def linear(input_, output_size, scope=None, stddev=0.02, bias_start=0.0, with_w=False):
    shape = input_.get_shape().as_list()

    with tf.variable_scope(scope or "Linear"):
        matrix = tf.get_variable("Matrix", [shape[1], output_size], tf.float32,
                                 tf.random_normal_initializer(stddev=stddev))
        bias = tf.get_variable("bias", [output_size],
                               initializer=tf.constant_initializer(bias_start))
        if with_w:
            return tf.matmul(input_, matrix) + bias, matrix, bias
        else:
            return tf.matmul(input_, matrix) + bias

EDIT:

I also use these placeholders:

self.inputs = tf.placeholder(tf.float32, shape=[self.batch_size] + image_dimension, name='real_images')
self.gen_inputs = tf.placeholder(tf.float32, shape=[self.sample_size] + image_dimension, name='sample_inputs')
inputs = self.inputs
sample_inputs = self.gen_inputs

1 Answer


You just have to change this call:

linear(z, self.gen_dimension * 8 * sample_H16 * sample_W16, 'gen_h0_lin', with_w=True) 

to this:

linear(z, self.gen_dimension * 8 * sample_H16 * sample_W16, 'gen_h0_lin', with_w=False)

The reason is that

linear(z, self.gen_dimension * 8 * sample_H16 * sample_W16, 'gen_h0_lin', with_w=True)

returns the tuple (tf.matmul(input_, matrix) + bias, matrix, bias), so self.z_ is assigned the whole tuple rather than a single tf tensor. The subsequent tf.reshape then tries to pack the three elements, which have ranks 2, 2, and 1, into one tensor, and that is exactly the "Shapes must be equal rank" error you see.
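If you do want to keep the weight matrix and bias around, the alternative is to leave with_w=True and unpack the tuple instead. Here is a minimal sketch of both options, where self.h0_w and self.h0_b are illustrative names not taken from your code:

# Option 1: only the output tensor is needed
self.z_ = linear(z, self.gen_dimension * 8 * sample_H16 * sample_W16,
                 'gen_h0_lin', with_w=False)

# Option 2: keep with_w=True and unpack, so self.z_ is a single tensor
# (self.h0_w and self.h0_b are hypothetical names for the extra return values)
self.z_, self.h0_w, self.h0_b = linear(
    z, self.gen_dimension * 8 * sample_H16 * sample_W16,
    'gen_h0_lin', with_w=True)

# Either way, the reshape now receives a rank-2 tensor as expected
self.h0 = tf.reshape(self.z_, [-1, sample_H16, sample_W16, self.gen_dimension * 8])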

If you wish to learn more about TensorFlow, visit this TensorFlow Tutorial.
