
I was trying to match the orthogonal polynomials produced by the following R code:

X <- cbind(1, poly(x = x, degree = 9))

but in Python.

To do this, I implemented my own function for generating orthogonal (Hermite) polynomials:

import numpy as np
from scipy.special import eval_hermite

def get_hermite_poly(x, degree):
    # N x (degree+1) design matrix whose columns are the physicists'
    # Hermite polynomials H_0, ..., H_degree evaluated at the sample points.
    N, = x.shape
    X = np.zeros((N, degree + 1))
    for n in range(N):
        for deg in range(degree + 1):
            X[n, deg] = eval_hermite(deg, float(x[n]))
    return X

but its output does not match. Does anyone know which type of orthogonal polynomial poly uses? I searched the documentation, but it doesn't say.
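As a quick check (a sketch; it assumes the get_hermite_poly above), the columns of the Hermite design matrix are not even orthonormal over the sample grid, which already rules out a match with poly, whose columns satisfy crossprod(Z) == I:

import numpy as np

# Hermite polynomials are orthogonal w.r.t. the weight exp(-x^2) on the
# whole real line, not over a discrete grid on [0, 1], so the Gram
# matrix of the columns is far from the identity.
x = np.linspace(0, 1, 10)
H = get_hermite_poly(x, 3)
print(np.round(H.T @ H, 3))   # not the identity matrix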

To give some context, I am trying to implement the following R code in Python (https://stats.stackexchange.com/questions/313265/issue-with-convergence-with-sgd-with-function-approximation-using-polynomial-lin/315185#comment602020_315185):

set.seed(1234)

N <- 10

x <- seq(from = 0, to = 1, length = N)

mu <- sin(2 * pi * x * 4)

y <- mu

plot(x,y)

X <- cbind(1, poly(x = x, degree = 9))

# X <- sapply(0:9, function(i) x^i)

w <- rnorm(10)

learning_rate <- function(t) .1 / t^(.6)

n_samp <- 2

for(t in 1:100000) {

  mu_hat <- X %*% w

  idx <- sample(1:N, n_samp)

  X_batch <- X[idx,]

  y_batch <- y[idx]

  score_vec <- t(X_batch) %*% (y_batch - X_batch %*% w)

  change <- score_vec * learning_rate(t)

  w <- w + change

}

plot(mu_hat, ylim = c(-1, 1))

lines(mu)

fit_exact <- predict(lm(y ~ X - 1))

lines(fit_exact, col = 'red')

abs(w - coef(lm(y ~ X - 1)))

because it seems to be the only basis that works with gradient descent for linear regression with polynomial features.

I feel that any orthogonal (or at least orthonormal) polynomial basis should work and give a Hessian with condition number 1, but I can't seem to make it work in Python.
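For intuition, a minimal sketch of the conditioning claim (plain NumPy; the raw monomial basis is used only for contrast):

import numpy as np

# For the least-squares loss ||y - X w||^2 the Hessian is 2 * X^T X.
# With orthonormal columns X^T X = I, so its condition number is 1;
# raw monomials on [0, 1] are badly conditioned by comparison.
x = np.linspace(0, 1, 10)
V = x[:, None] ** np.arange(10)   # raw polynomial (Vandermonde) basis
Q, _ = np.linalg.qr(V)            # orthonormalized version of the same basis

print(np.linalg.cond(V.T @ V))    # enormous
print(np.linalg.cond(Q.T @ Q))    # ~1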

1 Answer


It seems that you are looking for a Python implementation of R's poly. R's poly does not use a classical family such as Hermite; it builds orthonormal polynomials by QR-decomposing the centered monomial basis, which the function below reproduces.

Python code:

import numpy as np

def poly(x, degree):
    # Python port of R's poly(x, degree): orthonormal polynomial basis.
    x = np.asarray(x, dtype=float)
    xbar = np.mean(x)
    x = x - xbar
    # R: outer(x, 0L:degree, "^")
    X = x[:, None] ** np.arange(degree + 1)
    # R: QR <- qr(X)
    q, r = np.linalg.qr(X)
    # R: z <- r * (row(r) == col(r))  (keep only the diagonal of R)
    z = np.diag(np.diagonal(r))
    # R: Z <- qr.qy(QR, z), i.e. Q %*% z
    Z = q @ z
    # R: norm2 <- colSums(Z^2)
    norm2 = (Z ** 2).sum(axis=0)
    # R: alpha <- (colSums(x * Z^2) / norm2 + xbar)[1L:degree]
    alpha = ((x[:, None] * Z ** 2).sum(axis=0) / norm2 + xbar)[:degree]
    # R: Z <- Z / rep(sqrt(norm2), each = length(x))
    Z = Z / np.sqrt(norm2)
    # R: norm2 <- c(1, norm2)
    norm2 = np.append(1, norm2)
    # R: Z[, -1]  (drop the constant column)
    Z = Z[:, 1:]
    return Z, alpha, norm2

x = np.arange(10) + 1

degree = 9

poly(x, degree)

The first row of the returned matrix Z is

[-0.49543369,  0.52223297, -0.45342519,  0.33658092, -0.21483446,
  0.11677484, -0.05269379,  0.01869894, -0.00453516]

compared with the same operation in R:

poly(1:10, 9)

# [1] -0.495433694  0.522232968 -0.453425193  0.336580916 -0.214834462

# [6]  0.116774842 -0.052693786  0.018698940 -0.004535159
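As a usage sketch (assuming the poly defined above; variable names mirror the R script from the question), the design matrix X <- cbind(1, poly(x, 9)) and the SGD loop can be reproduced like this:

import numpy as np

np.random.seed(1234)
N = 10
x = np.linspace(0, 1, N)
y = np.sin(2 * np.pi * x * 4)

# X <- cbind(1, poly(x = x, degree = 9))
Z, alpha, norm2 = poly(x, 9)
X = np.column_stack([np.ones(N), Z])

# Columns of Z are orthonormal and each sums to zero, so X^T X is the
# diagonal matrix diag(N, 1, ..., 1): a well-conditioned Hessian,
# which is why plain SGD converges with this basis.
w = np.random.randn(10)
for t in range(1, 100001):
    idx = np.random.choice(N, size=2, replace=False)
    X_batch, y_batch = X[idx], y[idx]
    score_vec = X_batch.T @ (y_batch - X_batch @ w)
    w += score_vec * (0.1 / t ** 0.6)

# compare with the exact least-squares solution, lm(y ~ X - 1)
w_exact, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.abs(w - w_exact).max())   # should be small after enough iterations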

Hope this answer helps.
