in Machine Learning by (19k points)
I'm using LinearRegression from scikit-learn, and it doesn't provide gradient descent. I also haven't seen many questions on Stack Overflow about implementing linear regression with gradient descent.

How do we use linear regression from scikit-learn or pandas in the real world? Or, why don't scikit-learn and pandas provide gradient descent info in their linear regression output?

1 Answer

by (33.1k points)

scikit-learn has two approaches to linear regression:

1) The LinearRegression object uses an Ordinary Least Squares solver from scipy, as linear regression is one of the few models with a closed-form solution. You can actually learn this model just by inverting and multiplying a few matrices (see the first sketch after this list).

2) SGDRegressor (SGDClassifier for classification) is an implementation of stochastic gradient descent, a quite generic one where you can choose your loss and penalty terms. To obtain plain linear regression you choose the squared-error loss and set the penalty to none, or to L2 to get ridge regression (see the second sketch after this list).
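
Here is a minimal sketch of the closed-form route on hypothetical toy data: the least-squares problem behind the normal equations is solved with numpy (via lstsq rather than an explicit matrix inverse) and the result is compared against scikit-learn's LinearRegression.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical toy data: 100 samples, 3 features, known weights plus noise
rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.randn(100)

# Closed-form OLS: w = (X^T X)^{-1} X^T y on a bias-augmented design matrix.
# lstsq solves the same least-squares problem without forming the inverse.
Xb = np.hstack([np.ones((X.shape[0], 1)), X])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# scikit-learn's LinearRegression solves the same problem internally.
lr = LinearRegression().fit(X, y)

print("closed-form:", w)                       # [intercept, coef_1, coef_2, coef_3]
print("sklearn:    ", lr.intercept_, lr.coef_)
```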
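
And a sketch of the SGD route on the same hypothetical data. The parameter names loss="squared_error" and penalty=None match recent scikit-learn versions; older releases use "squared_loss" and the string "none" instead.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Same hypothetical toy data as above
rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.randn(100)

# SGD is sensitive to feature scale, so standardize first.
# loss="squared_error" + penalty=None -> plain linear regression via SGD
# loss="squared_error" + penalty="l2" -> ridge regression
sgd = make_pipeline(
    StandardScaler(),
    SGDRegressor(loss="squared_error", penalty=None,
                 max_iter=1000, tol=1e-3, random_state=0),
)
sgd.fit(X, y)

print(sgd[-1].coef_, sgd[-1].intercept_)  # weights live in the standardized feature space
```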

There is no "typical gradient descent" because it is rarely used in practice. If you can decompose your loss function into additive terms, then the stochastic approach is known to behave better and if you can spare enough memory - the OLS method is faster and easier.

Hope this answer helps you!
