Use the following code:
proj = pca.inverse_transform(X_train_pca)
With inverse_transform you do not have to worry about doing the matrix multiplications yourself.
The output of pca.fit_transform or pca.transform is usually called the "loadings" for each sample, i.e. how much of each component you need to describe that sample best as a linear combination of the pca.components_.
The projection you are after lives back in the original signal space, so you need to map the loadings back into signal space using the components.
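In matrix terms, the round trip looks like this (a minimal standalone sketch; W and mu are my stand-ins for pca.components_ and pca.mean_, not sklearn names):
import numpy as np
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((10, 3)))
W = Q.T                         # stand-in for pca.components_ (k x d, orthonormal rows)
mu = rng.standard_normal(10)    # stand-in for pca.mean_
x = rng.standard_normal(10)     # one sample
t = W.dot(x - mu)               # loadings: coordinates of x in the component basis
x_hat = W.T.dot(t) + mu         # projection back into the original signal space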
pca.fit estimates the components:
from sklearn.decomposition import PCA
import numpy as np
from numpy.testing import assert_array_almost_equal
# synthetic data: 100 samples with 50 features
X_train = np.random.randn(100, 50)
pca = PCA(n_components=30)
pca.fit(X_train)
U, S, VT = np.linalg.svd(X_train - X_train.mean(axis=0))
# SVD component signs are arbitrary, so align them before comparing
signs = np.sign(np.sum(VT[:30] * pca.components_, axis=1))
assert_array_almost_equal(signs[:, None] * VT[:30], pca.components_)
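As a side note, the singular values also carry the per-component explained variance (a quick check, assuming sklearn's usual convention of dividing by n_samples - 1):
assert_array_almost_equal(S[:30] ** 2 / (X_train.shape[0] - 1), pca.explained_variance_)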
pca.transform calculates the loadings as you describe:
X_train_pca = pca.transform(X_train)
X_train_pca2 = (X_train - pca.mean_).dot(pca.components_.T)
assert_array_almost_equal(X_train_pca, X_train_pca2)
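The same fitted PCA projects unseen data using the training mean_ and components_; a quick sketch with hypothetical new samples:
X_new = np.random.randn(10, 50)
assert_array_almost_equal(pca.transform(X_new), (X_new - pca.mean_).dot(pca.components_.T))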
pca.inverse_transform maps the loadings back onto the components in signal space, which is the projection you are interested in:
X_projected = pca.inverse_transform(X_train_pca)
X_projected2 = X_train_pca.dot(pca.components_) + pca.mean_
assert_array_almost_equal(X_projected, X_projected2)
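Note that this explicit formula assumes the default whiten=False. As a related sanity check, if you keep all 50 components the round trip is lossless:
pca_full = PCA(n_components=50).fit(X_train)
assert_array_almost_equal(pca_full.inverse_transform(pca_full.transform(X_train)), X_train)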
You can now evaluate the projection loss:
loss = ((X_train - X_projected) ** 2).mean()
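As a sanity check (reusing S from the SVD above), this loss should equal the variance carried by the 20 discarded singular values, averaged over all entries of X_train:
loss_from_svd = (S[30:] ** 2).sum() / (X_train.shape[0] * X_train.shape[1])
assert_array_almost_equal(loss, loss_from_svd)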
Hope this answer helps you!