+2 votes
2 views
in Machine Learning by (4.2k points)

By setting the bottom and the top blob to be the same, we can tell Caffe to do "in-place" computation to reduce memory consumption.
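For concreteness, here is a minimal net-definition sketch of an in-place layer; the layer names ("conv1", "relu1") and the convolution settings are made up for illustration, and the only point is that the "ReLU" layer's bottom and top name the same blob:

```
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"      # produces the "conv1" blob
  convolution_param { num_output: 16 kernel_size: 3 }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"   # reads the "conv1" blob ...
  top: "conv1"      # ... and overwrites it with the result (in-place)
}
```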

Currently, I know I can safely use "BatchNorm", "Scale", and "ReLU" layers in-place (please let me know if I'm wrong), while it seems to cause issues for some other layers (this issue seems to be an example).

When to use in-place layers in Caffe?
How does it work with back-propagation?

1 Answer

+2 votes
by (6.8k points)

As you noted, in-place layers do not typically work "out of the box".

For some layers, it is quite trivial ("ReLU" and other neuron activation layers), since everything the backward pass needs can be recovered from the output; for ReLU, for instance, the output is positive exactly where the input was positive, so overwriting the input loses nothing the gradient computation needs.

However, for others it requires special handling in the code. For example, the implementation of the "PReLU" layer has a dedicated cache member variable, bottom_memory_, that stores the information needed for back-propagation.

You can see similar code in other layers that specifically test for if (top[0] == bottom[0]) to check whether the layer is being used in an "in-place" fashion.
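To make that pattern concrete, here is a self-contained toy sketch (plain C++, not Caffe source; ToyPReLU, its members, and the simplified Forward/Backward signatures are all made up for illustration) of a layer that caches its input when top and bottom alias each other, so that back-propagation can still see the original input:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy illustration of the "top == bottom" check described above:
// a PReLU-like layer that supports in-place computation by caching
// its input before overwriting it.
struct ToyPReLU {
  float slope = 0.25f;
  std::vector<float> bottom_memory_;  // cache of the original input

  void Forward(const std::vector<float>* bottom, std::vector<float>* top) {
    // In-place case: the output will overwrite the input, so keep a
    // copy of the input for the backward pass.
    if (top == bottom) {
      bottom_memory_ = *bottom;
    }
    top->resize(bottom->size());
    for (std::size_t i = 0; i < bottom->size(); ++i) {
      const float x = (*bottom)[i];
      (*top)[i] = x > 0 ? x : slope * x;
    }
  }

  void Backward(const std::vector<float>& top_diff,
                const std::vector<float>* bottom,
                const std::vector<float>* top,
                std::vector<float>* bottom_diff) {
    // In-place case: *bottom now holds the layer's output, so read the
    // original input from the cached copy instead.
    const std::vector<float>& bottom_data =
        (top == bottom) ? bottom_memory_ : *bottom;
    bottom_diff->resize(bottom_data.size());
    for (std::size_t i = 0; i < bottom_data.size(); ++i) {
      (*bottom_diff)[i] = top_diff[i] * (bottom_data[i] > 0 ? 1.0f : slope);
    }
  }
};

int main() {
  ToyPReLU layer;
  std::vector<float> blob = {-2.0f, 3.0f};      // used as both bottom and top
  layer.Forward(&blob, &blob);                  // in-place forward pass
  std::vector<float> diff;
  layer.Backward({1.0f, 1.0f}, &blob, &blob, &diff);
  assert(diff[0] == 0.25f && diff[1] == 1.0f);  // gradients w.r.t. the input
  return 0;
}
```

Without the cached copy, the backward pass would see the layer's output where it expects its input, which is exactly the kind of silent corruption that makes in-place use unsafe for layers that were not written with this check.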

Moreover, it makes very little sense to have an in-place layer whose input and output are of different shapes, so layers like "Convolution", "InnerProduct", and "Pooling" are not considered candidates for "in-place" computation.

