
When facing difficulties during training (NaNs, loss does not converge, etc.), it is sometimes useful to look at a more verbose training log by setting debug_info: true in the 'solver.prototxt' file.
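For reference, enabling this in the solver configuration looks like the sketch below (the other fields are placeholder values, not taken from the question):

```
# solver.prototxt (minimal sketch)
net: "train_val.prototxt"   # hypothetical path to the network definition
base_lr: 0.01
debug_info: true            # print per-blob statistics at every iteration
```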

The training log then looks something like:

I1109 ...]     [Forward] Layer data, top blob data data: 0.343971    

I1109 ...]     [Forward] Layer conv1, top blob conv1 data: 0.0645037

I1109 ...]     [Forward] Layer conv1, param blob 0 data: 0.00899114

I1109 ...]     [Forward] Layer conv1, param blob 1 data: 0

I1109 ...]     [Forward] Layer relu1, top blob conv1 data: 0.0337982

What does it mean?

1 Answer

Caffe Blob data structure

Caffe uses the Blob data structure to store data, weights, parameters, etc. It is important to note that a Blob has two "parts": data and diff. The values of the blob are stored in the data part, while the diff part stores the element-wise gradients computed during backpropagation.
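A minimal NumPy sketch of this idea (an illustration only, not Caffe's actual C++ class): a blob holds two same-shaped arrays, one for values and one for their gradients.

```python
import numpy as np

class Blob:
    """Toy stand-in for Caffe's Blob: same-shaped data and diff arrays."""
    def __init__(self, shape):
        self.data = np.zeros(shape, dtype=np.float32)  # values (activations / weights)
        self.diff = np.zeros(shape, dtype=np.float32)  # element-wise gradients

blob = Blob((2, 3))
blob.data[:] = 1.5    # forward pass fills data
blob.diff[:] = -0.25  # backward pass fills diff
print(blob.data.shape == blob.diff.shape)  # True: the two parts always match in shape
```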

Forward pass

In this part of the log, all the layers are listed bottom to top. The number reported for each blob is the mean absolute value of its data part (the L1 norm divided by the element count). For each layer you'll see lines like:

I1109 ...]     [Forward] Layer conv1, top blob conv1 data: 0.0645037

I1109 ...]     [Forward] Layer conv1, param blob 0 data: 0.00899114

I1109 ...]     [Forward] Layer conv1, param blob 1 data: 0
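The statistic printed after data: can be sketched as follows (an illustrative reimplementation with made-up values, not Caffe code):

```python
import numpy as np

def debug_stat(arr):
    """Mean absolute value: the number Caffe prints after 'data:' / 'diff:'."""
    return np.abs(arr).sum() / arr.size

# Hypothetical activations for a top blob
conv1_top = np.array([0.25, -0.5, 0.0, 0.25], dtype=np.float32)
print(debug_stat(conv1_top))  # 0.25
```

A value of exactly 0, as for conv1's param blob 1 above (typically the bias), simply means every element of that blob is still zero.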

Backward pass

In this part of the log the layers are listed top to bottom. You can see that the magnitudes reported now are of the diff part of the blobs, i.e. the gradients rather than the values themselves.
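Since this log is usually consulted when chasing NaNs, here is a quick sketch of the same kind of check applied to your own arrays (illustrative, not part of Caffe):

```python
import numpy as np

def check_blob(name, data, diff):
    """Flag NaN/Inf in either part: the usual suspects behind a diverging loss."""
    for part, arr in (("data", data), ("diff", diff)):
        if not np.all(np.isfinite(arr)):
            print(f"{name} {part} contains NaN/Inf")
            return False
    return True

# Hypothetical blob contents: healthy data, a NaN in the gradients
print(check_blob("conv1", np.ones(4), np.array([0.1, np.nan, 0.0, 1.0])))  # False
```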

Finally

The last log line of this iteration:

[Backward] All net params (data, diff): L1 norm = (2711.42, 7086.66); L2 norm = (6.11659, 4085.07)

reports the total L1 and L2 magnitudes of all the network's parameters, for both the data (values) and diff (gradients) parts.
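Those two summary norms are straightforward to compute; a sketch over a list of parameter arrays (hypothetical values, flattened together as in the log line):

```python
import numpy as np

def net_norms(param_arrays):
    """L1 and L2 norms over all parameter arrays, flattened into one vector."""
    flat = np.concatenate([p.ravel() for p in param_arrays])
    return np.abs(flat).sum(), np.sqrt((flat ** 2).sum())

params = [np.array([3.0, -4.0]), np.array([0.0])]  # made-up parameter blobs
l1, l2 = net_norms(params)
print(l1, l2)  # 7.0 5.0
```

The same computation over the diff parts of the blobs yields the second pair of numbers in the log line.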

Hope this answer helps you!
