
I am doing linear regression with multiple variables/features. I try to obtain the thetas (coefficients) in three ways: the normal-equation method (which uses the matrix inverse), NumPy's least-squares routine numpy.linalg.lstsq, and np.linalg.solve. In my data, I have n = 143 features and m = 13000 training examples.

For the normal equation method with regularization I use this formula:

theta = (X^T X + lambda * I)^(-1) X^T y

Regularization is used to solve the potential problem of matrix non-invertibility (the X^T X matrix may be singular, i.e. non-invertible).
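As a minimal sketch of why the regularization term helps (using synthetic data, not the question's DB2.csv): a duplicated column makes X^T X singular, and adding lambda * I restores full rank, so the inverse exists.

```python
import numpy as np

# Synthetic design matrix with a duplicated column, so X^T X is singular
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
X = np.hstack([X, X[:, :1]])  # column 4 duplicates column 1

XtX = X.T @ X
lamb = 1.0
XtX_reg = XtX + lamb * np.identity(X.shape[1])

print(np.linalg.matrix_rank(XtX))      # 3 -- rank-deficient, no inverse
print(np.linalg.matrix_rank(XtX_reg))  # 4 -- full rank, invertible
```

Since X^T X is positive semi-definite, X^T X + lambda * I is positive definite for any lambda > 0, which guarantees invertibility.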

Data preparation code:

import pandas as pd
import numpy as np

path = 'DB2.csv'
data = pd.read_csv(path)

data.insert(0, 'Ones', 1)  # bias column

cols = data.shape[1]
X = data.iloc[:, 0:cols-1]     # features (including the bias column)
y = data.iloc[:, cols-1:cols]  # target

IdentitySize = X.shape[1]
IdentityMatrix = np.zeros((IdentitySize, IdentitySize))
np.fill_diagonal(IdentityMatrix, 1)

For the least-squares method I use NumPy's numpy.linalg.lstsq. Here is the Python code:

lamb = 1
th = np.linalg.lstsq(X.T.dot(X) + lamb * IdentityMatrix, X.T.dot(y), rcond=None)[0]

I also used NumPy's np.linalg.solve:

lamb = 1
XtX_lamb = X.T.dot(X) + lamb * IdentityMatrix
XtY = X.T.dot(y)
x = np.linalg.solve(XtX_lamb, XtY)

For the normal equation I use:

lamb = 1
XtX = X.T.dot(X) + lamb * IdentityMatrix
XtX_inv = np.linalg.inv(XtX)  # explicit matrix inverse
theta = XtX_inv.dot(X.T).dot(y)

As you can see, the normal-equation, least-squares, and np.linalg.solve methods give somewhat different results. The question is: why do these three approaches give noticeably different results, and which method is the most efficient and accurate?
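For reference, here is a hedged sketch of the three approaches side by side on small synthetic data (standing in for DB2.csv, which is not available here). On a well-conditioned regularized system they agree to near machine precision; large differences usually point to ill-conditioning or a bug in the pipeline.

```python
import numpy as np

# Hypothetical synthetic data standing in for the question's DB2.csv
rng = np.random.default_rng(1)
m, n = 200, 5
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, n))])  # bias column + features
y = X @ rng.normal(size=(n + 1, 1)) + 0.1 * rng.normal(size=(m, 1))

lamb = 1.0
A = X.T @ X + lamb * np.identity(n + 1)  # regularized Gram matrix
b = X.T @ y

theta_inv   = np.linalg.inv(A) @ b                  # explicit inverse (normal equation)
theta_solve = np.linalg.solve(A, b)                 # LU factorization
theta_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]  # SVD-based least squares

print(np.allclose(theta_inv, theta_solve))  # True
print(np.allclose(theta_inv, theta_lstsq))  # True
```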

Answered by (33.1k points)

You don't need a matrix inverse to solve a linear system. Computing the inverse explicitly is slower and introduces unnecessary numerical error.

Try to understand the mathematical relationship between these two lines — the second computes the same x as the first, but without ever forming A^-1:

x = A^-1 * b
x = np.linalg.solve(A, b)
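A tiny worked example of that equivalence (the 2x2 system here is made up for illustration):

```python
import numpy as np

# Small system: 3x + y = 9, x + 2y = 8  =>  x = 2, y = 3
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x_inv   = np.linalg.inv(A) @ b     # textbook form: x = A^-1 b
x_solve = np.linalg.solve(A, b)    # preferred: factorize and back-substitute

print(x_solve)                     # [2. 3.]
print(np.allclose(x_inv, x_solve)) # True
```

Both give the same answer here, but np.linalg.solve avoids forming the inverse, which matters for speed and accuracy on larger or ill-conditioned systems.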

In your case, you want something like:

XtX_lamb = X.T.dot(X) + lamb * IdentityMatrix
XtY = X.T.dot(y)
x = np.linalg.solve(XtX_lamb, XtY)
