in Machine Learning by (19k points)

I have the following linear model, and I would like to get the gradient vector of its cost with respect to W and b.

import tensorflow as tf
import numpy as np

rng = np.random
# n_samples is the number of training examples (set from the training data
# elsewhere in the original script)

# tf Graph Input
X = tf.placeholder("float")
Y = tf.placeholder("float")

# Set model weights
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")

# Construct a linear model
pred = tf.add(tf.multiply(X, W), b)  # tf.mul was renamed tf.multiply in TF 1.0

# Mean squared error
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)

However, cost is a function cost(X, Y, W, b), and I only want the gradients with respect to W and b. If I try something like this:

grads = tf.gradients(cost, tf.all_variables())

then my placeholders (X and Y) will also be included. And even if I do get a gradient over [x, y, w, b], how do I know which element in the gradient belongs to which parameter? It is just a list, with no names saying which parameter each derivative was taken with respect to.
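For instance, I could pair the returned list with the variable list myself, but only if the ordering is guaranteed (a sketch of what I mean):

var_list = tf.all_variables()
grads = tf.gradients(cost, var_list)
# Only valid if tf.gradients() returns one gradient per entry of var_list,
# in the same order as var_list
grad_by_name = {v.name: g for v, g in zip(var_list, grads)}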

In this question I'm using parts of this code, and I'm building on this question.

1 Answer

by (33.1k points)

For tf.gradients:

This method constructs symbolic partial derivatives of the sum of ys with respect to each x in xs.

For example:

dc_dw, dc_db = tf.gradients(cost, [W, b])

Here, tf.gradients() returns the gradient of cost with respect to each tensor in the second argument, as a list in the same order.
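A minimal, self-contained sketch of this (assuming TF 1.x graph mode; train_X, train_Y, and n_samples here are hypothetical toy data, not from the original script) that builds the model from the question and evaluates both gradients:

import numpy as np
import tensorflow as tf

rng = np.random

# Hypothetical toy data standing in for the asker's dataset
train_X = np.asarray([1.0, 2.0, 3.0, 4.0])
train_Y = np.asarray([2.0, 4.1, 5.9, 8.0])
n_samples = train_X.shape[0]

X = tf.placeholder("float")
Y = tf.placeholder("float")
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")

pred = tf.add(tf.multiply(X, W), b)
cost = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples)

# One gradient tensor per variable, in the order [W, b]
dc_dw, dc_db = tf.gradients(cost, [W, b])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    gw, gb = sess.run([dc_dw, dc_db],
                      feed_dict={X: train_X, Y: train_Y})
    print("dc/dW =", gw, "dc/db =", gb)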

Hope this answer helps you!

