
I am trying to save neural network weights to a file and then restore them when initializing the network, instead of using random initialization. My code works fine with random initialization, but when I initialize the weights from the file it shows the error TypeError: Input 'b' of 'MatMul' Op has type float64 that does not match type float32 of argument 'a'. I don't know how to solve this issue. Here is my code:

Model Initialization

# Parameters
training_epochs = 5
batch_size = 64
display_step = 5
batch = tf.Variable(0, trainable=False)
regularization = 0.008

# Network Parameters
n_hidden_1 = 300  # 1st layer num features
n_hidden_2 = 250  # 2nd layer num features
n_input = model.layer1_size  # Vector input (sentence shape: 30*10)
n_classes = 12  # Sentence category detection, 12 classes (0-11)

# History-storing variables for plots
loss_history = []
train_acc_history = []
val_acc_history = []

# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])

Model parameters

# Loading weights
def weight_variable(fan_in, fan_out, filename):
    stddev = np.sqrt(2.0 / fan_in)
    if filename == "":
        initial = tf.random_normal([fan_in, fan_out], stddev=stddev)
    else:
        initial = np.loadtxt(filename)
    print(initial.shape)
    return tf.Variable(initial)

# Loading biases
def bias_variable(shape, filename):
    if filename == "":
        initial = tf.constant(0.1, shape=shape)
    else:
        initial = np.loadtxt(filename)
    print(initial.shape)
    return tf.Variable(initial)

# Create model
def multilayer_perceptron(_X, _weights, _biases):
    layer_1 = tf.nn.relu(tf.add(tf.matmul(_X, _weights['h1']), _biases['b1']))
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, _weights['h2']), _biases['b2']))
    return tf.matmul(layer_2, _weights['out']) + _biases['out']

# Store layers' weights & biases
weights = {
    'h1':  w2v_utils.weight_variable(n_input,    n_hidden_1, filename="weights_h1.txt"),
    'h2':  w2v_utils.weight_variable(n_hidden_1, n_hidden_2, filename="weights_h2.txt"),
    'out': w2v_utils.weight_variable(n_hidden_2, n_classes,  filename="weights_out.txt")
}

biases = {
    'b1':  w2v_utils.bias_variable([n_hidden_1], filename="biases_b1.txt"),
    'b2':  w2v_utils.bias_variable([n_hidden_2], filename="biases_b2.txt"),
    'out': w2v_utils.bias_variable([n_classes],  filename="biases_out.txt")
}

# Define loss and optimizer
# Learning rate: set up a variable that is incremented once per batch
# and controls the learning rate decay.
learning_rate = tf.train.exponential_decay(
    0.02 * 0.01,         # Base learning rate.
    batch * batch_size,  # Current index into the dataset.
    X_train.shape[0],    # Decay step.
    0.96,                # Decay rate.
    staircase=True)


 

# Construct model
pred = tf.nn.relu(multilayer_perceptron(x, weights, biases))

# L2 regularization
l2_loss = tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()])

# Softmax loss
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))

# Total cost
cost = cost + (regularization * 0.5 * l2_loss)

# Adam Optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost, global_step=batch)


 

# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Initializing the variables
init = tf.initialize_all_variables()

print("Network Initialized!")

ERROR DETAILS: [screenshot of the TypeError traceback quoted above]

1 Answer


The tf.matmul() op does not perform automatic type conversions, so both of its inputs must have the same element type. The error message indicates that you have a call to tf.matmul() where the first argument has type tf.float32 and the second argument has type tf.float64. You need to convert one of the inputs to match the other.

For example:

tf.cast(x, tf.float32)
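
As a minimal sketch (assuming TensorFlow 1.x, as in the question), this reproduces and then fixes the mismatch:

import tensorflow as tf

a = tf.constant([[1.0, 2.0]], dtype=tf.float32)
b = tf.constant([[3.0], [4.0]], dtype=tf.float64)

# tf.matmul(a, b) would raise the TypeError from the question,
# because the operand dtypes differ. Casting b first type-checks:
c = tf.matmul(a, tf.cast(b, tf.float32))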

In your code, no tf.float64 tensor is created explicitly; it comes from the np.loadtxt(filename) calls, which load np.float64 arrays by default. You can explicitly load np.float32 arrays instead, as follows:

initial = np.loadtxt(filename).astype(np.float32)
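
Applied to the weight loader from the question, the fix looks like this (a sketch of the question's own function; note that np.loadtxt also accepts a dtype argument, so np.loadtxt(filename, dtype=np.float32) would work equally well):

import numpy as np
import tensorflow as tf

def weight_variable(fan_in, fan_out, filename):
    stddev = np.sqrt(2.0 / fan_in)
    if filename == "":
        # tf.random_normal returns tf.float32 by default
        initial = tf.random_normal([fan_in, fan_out], stddev=stddev)
    else:
        # np.loadtxt returns float64 by default; cast so the variable
        # matches the float32 placeholders x and y
        initial = np.loadtxt(filename).astype(np.float32)
    return tf.Variable(initial)

The same .astype(np.float32) cast applies in bias_variable().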

Hope this answer helps.
