# Multivariate (polynomial) best fit curve in python?


How do you calculate a best fit line in python, and then plot it on a scatterplot in matplotlib?

I can calculate the linear best-fit line using ordinary least squares regression as follows:

```
from sklearn import linear_model

clf = linear_model.LinearRegression()
x = [[t.x1, t.x2, t.x3, t.x4, t.x5] for t in self.trainingTexts]
y = [t.human_rating for t in self.trainingTexts]
clf.fit(x, y)
regress_coefs = clf.coef_
regress_intercept = clf.intercept_
```

This is multivariate (there are many x-values for each case). So, x is a list of lists, and y is a single list. For example:

```
x = [[1,2,3,4,5], [2,2,4,4,5], [2,2,4,4,1]]
y = [1,2,3,4,5]
```

But how do I do this with higher-order polynomial functions? For example, not just linear (degree M=1), but quadratic (degree M=2), cubic (degree M=3), and so on. For example, how do I get the best-fit curves for the following?

Extracted from Christopher Bishop's "Pattern Recognition and Machine Learning", p. 7:

The accepted answer to this question provides a small multivariate polynomial fitting library which will do exactly what you need using numpy, and you can plug the result into the plotting as I've outlined below.

You would just pass your arrays of x and y points and the degree (order) of fit you require into multipolyfit. This returns the coefficients, which you can then use for plotting with numpy's polyval.
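As a sketch of that coefficients-then-polyval workflow in the single-variable case (using `numpy.polyfit` here as a stand-in for multipolyfit, with made-up data; the returned coefficient vector is consumed by `polyval` the same way):

```python
import numpy as np

# Sample 1-D data roughly following a quadratic (y ≈ 2x² + 1).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.1, 9.2, 19.1, 33.0])

coeffs = np.polyfit(x, y, 2)   # highest-degree coefficient first
curve = np.polyval(coeffs, x)  # evaluate the fitted polynomial at x
print(coeffs)
```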

Note: The code below has been amended to do multivariate fitting, but the plot image was part of the earlier, non-multivariate answer.

```
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = [[0.44, 0.68], [0.99, 0.23]]
vector = [109.85, 155.72]
predict = [[0.49, 0.18], [0.44, 0.68], [0.99, 0.23]]

# Expand the two input variables into degree-2 polynomial features:
# (1, x1, x2, x1^2, x1*x2, x2^2).
poly = PolynomialFeatures(degree=2)
X_ = poly.fit_transform(X)
predict_ = poly.fit_transform(predict)

# Drop column 1 (the plain x1 term) from both feature matrices.
X_ = np.delete(X_, 1, axis=1)
predict_ = np.delete(predict_, 1, axis=1)

print("X_ = ", X_)
print("predict_ = ", predict_)
```
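To complete the fit, the expanded features can be passed to an ordinary least-squares model; a minimal self-contained sketch (rebuilding the same arrays, and keeping all the polynomial features rather than deleting a column):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

X = [[0.44, 0.68], [0.99, 0.23]]
vector = [109.85, 155.72]
predict = [[0.49, 0.18], [0.44, 0.68], [0.99, 0.23]]

# Expand inputs to degree-2 polynomial features, then fit OLS on them.
poly = PolynomialFeatures(degree=2)
X_ = poly.fit_transform(X)
predict_ = poly.fit_transform(predict)

clf = LinearRegression()
clf.fit(X_, vector)
predictions = clf.predict(predict_)
print(predictions)
```

Since the last two rows of `predict` are the training points themselves, their predictions reproduce `vector` exactly.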

For non-multivariate data sets, the easiest way to do this is probably with numpy's polyfit:

`numpy.polyfit(x, y, deg, rcond=None, full=False, w=None, cov=False)` — least-squares polynomial fit.

Fit a polynomial `p(x) = p[0] * x**deg + ... + p[deg]` of degree `deg` to points `(x, y)`. Returns a vector of coefficients `p` that minimises the squared error.
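Putting that together with the plotting part of the original question, a minimal sketch of fitting a degree-3 polynomial and drawing it over a matplotlib scatterplot (the data here is made up for illustration):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this line for on-screen plots
import matplotlib.pyplot as plt

# Synthetic data lying on an exact cubic, so the fit recovers it.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = x**3 - 2 * x + 1

p = np.polyfit(x, y, 3)                  # coefficients, highest degree first
xx = np.linspace(x.min(), x.max(), 200)  # dense grid for a smooth curve

plt.scatter(x, y, label="data")
plt.plot(xx, np.polyval(p, xx), label="degree-3 fit")
plt.legend()
plt.savefig("polyfit.png")
print(p)
```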