in AI and Deep Learning by (50.2k points)

While running some experiments with TensorFlow, I wanted to look at the implementation of some functions to see exactly how things are done, starting with the simple case of tf.train.GradientDescentOptimizer. I downloaded the zip of the full source code from GitHub, ran some searches over the source tree, and got to:

C:\tensorflow-master\tensorflow\python\training\gradient_descent.py

class GradientDescentOptimizer(optimizer.Optimizer):
  def _apply_dense(self, grad, var):
    return training_ops.apply_gradient_descent(

Okay, so presumably the actual code is in apply_gradient_descent. I searched for that... it's not there. There are only three occurrences in the entire source tree, all of which are uses, not definitions.

What about training_ops? There does exist a source file with a suggestive name:

C:\tensorflow-master\tensorflow\python\training\training_ops.py

from tensorflow.python.training import gen_training_ops
# go/tf-wildcard-import
# pylint: disable=wildcard-import
from tensorflow.python.training.gen_training_ops import *
# pylint: enable=wildcard-import

... the above is the entire content of that file. Hmm.
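One way to confirm that a module like gen_training_ops only exists in a build or installation tree (and not in the GitHub checkout) is to import it and inspect where it was loaded from. The helper below is a hypothetical sketch, demonstrated with a standard-library module; on a machine with TensorFlow installed you would pass "tensorflow.python.training.gen_training_ops" instead.

```python
import importlib

def locate_module(name):
    """Return the filesystem path a module was loaded from, or None if
    it cannot be imported (e.g. a generated file absent from a source tree)."""
    try:
        mod = importlib.import_module(name)
    except ImportError:
        return None
    return getattr(mod, "__file__", None)

# Demonstrated with a stdlib module; for the generated TensorFlow wrapper
# you would use "tensorflow.python.training.gen_training_ops".
path = locate_module("json")
```

If the module resolves, `path` points into the installed package directory, which for a generated wrapper will not correspond to any file in the source checkout.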

I did find this file:

C:\tensorflow-master\tensorflow\python\BUILD

tf_gen_op_wrapper_private_py(
    name = "training_ops_gen",
    out = "training/gen_training_ops.py",
)

which seems to confirm that files like gen_training_ops.py are effectively object code, generated during the build process - but where is the source code they are generated from?

So this is the point at which I give up and ask for help. Can anyone familiar with the TensorFlow codebase point me to where the relevant source code is?

1 Answer

by (107k points)

The Python wrapper you found is generated code: the tf_gen_op_wrapper_private_py BUILD rule produces gen_training_ops.py at build time from op definitions registered in C++, which is why no hand-written definition of apply_gradient_descent exists anywhere in the Python source tree. The ApplyGradientDescent op itself is registered in tensorflow/core/ops/training_ops.cc.

The implementation lives in native C++ code. Here's the ApplyGradientDescent GPU implementation (core/kernels/training_ops_gpu.cu.cc):

template <typename T>
struct ApplyGradientDescent<GPUDevice, T> {
  void operator()(const GPUDevice& d, typename TTypes<T>::Flat var,
                  typename TTypes<T>::ConstScalar lr,
                  typename TTypes<T>::ConstFlat grad) {
    Eigen::array<typename TTypes<T>::Tensor::Index, 1> bcast;
    bcast[0] = grad.dimension(0);
    Eigen::Sizes<1> single;
    var.device(d) -= lr.reshape(single).broadcast(bcast) * grad;
  }
};

The CPU implementation is here (core/kernels/training_ops.cc):

template <typename T>
struct ApplyGradientDescent<CPUDevice, T> {
  void operator()(const CPUDevice& d, typename TTypes<T>::Flat var,
                  typename TTypes<T>::ConstScalar lr,
                  typename TTypes<T>::ConstFlat grad) {
    var.device(d) -= grad * lr();
  }
};
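Both kernels reduce to the same elementwise update: the scalar learning rate is broadcast across the flat variable and the scaled gradient is subtracted in place. As a sketch of that semantics only (not TensorFlow code), here is the equivalent step in NumPy:

```python
import numpy as np

def apply_gradient_descent(var, lr, grad):
    """In-place SGD step mirroring the C++ kernels above: var -= lr * grad,
    with the scalar learning rate broadcast across the flat variable."""
    var -= lr * grad
    return var

var = np.array([1.0, 2.0, 3.0])
grad = np.array([0.5, 0.5, 0.5])
apply_gradient_descent(var, 0.1, grad)  # var becomes [0.95, 1.95, 2.95]
```

The GPU kernel's explicit reshape/broadcast of lr does by hand what NumPy's scalar broadcasting does implicitly here.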
