I am assuming that by "the first 2 models" you mean reg1 and reg2. Let us know if that is not the case.
A linear regression has the same predictive power whether you normalize the data or not, so using normalize=True has no impact on the predictions. One way to understand this is to see that normalization (column-wise) is a linear operation on each column ((x - a)/b), and a linear transformation of the data does not change what a linear regression can fit: the predictions stay the same, only the coefficient values change to absorb the rescaling. Notice that this statement is not true for Lasso/Ridge/ElasticNet.
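To make the "linear operation" point concrete, here is a minimal sketch (the data and variable names such as X_demo are mine, purely for illustration): manually rescaling one column changes the corresponding coefficient by the inverse factor, but the fitted values do not move at all.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X_demo = rng.normal(size=(100, 2))
y_demo = X_demo @ np.array([2.0, 1.0]) + rng.normal(scale=0.1, size=100)

fit_raw = LinearRegression().fit(X_demo, y_demo)

# divide the first column by 10: a column-wise linear transformation
X_scaled = X_demo.copy()
X_scaled[:, 0] /= 10
fit_scaled = LinearRegression().fit(X_scaled, y_demo)

# the predictions are identical; the first coefficient simply absorbs the factor of 10
np.testing.assert_allclose(fit_raw.predict(X_demo), fit_scaled.predict(X_scaled))
np.testing.assert_allclose(fit_raw.coef_[0] * 10, fit_scaled.coef_[0])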
So, why aren't the coefficients different? Well, normalize=True also takes into account that what the user usually wants are the coefficients on the original features, not the normalized ones, so it scales the coefficients back to the original scale after fitting. One way to check that this makes sense is to use a simpler example:
import numpy as np
from sklearn.linear_model import LinearRegression

# two features, normally distributed with sigma=10
x1 = np.random.normal(0, 10, size=100)
x2 = np.random.normal(0, 10, size=100)
# y is related to each of them plus some noise
y = 3 + 2*x1 + 1*x2 + np.random.normal(0, 1, size=100)
X = np.array([x1, x2]).T # X has two columns
reg1 = LinearRegression().fit(X, y)
reg2 = LinearRegression(normalize=True).fit(X, y)
# check that coefficients are the same and equal to [2,1]
np.testing.assert_allclose(reg1.coef_, reg2.coef_)
np.testing.assert_allclose(reg1.coef_, np.array([2, 1]), rtol=0.01)
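# the fitted values also agree, matching the "same predictive power" claim above
np.testing.assert_allclose(reg1.predict(X), reg2.predict(X))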
This confirms that both methods correctly capture the real signal between [x1, x2] and y, namely the coefficients 2 and 1 respectively. For more details on this, see the Linear Regression blog.
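Regarding the Lasso/Ridge/ElasticNet caveat above, here is a hedged sketch of what changes for a penalized model (the alpha value is arbitrary, just for illustration): because the penalty is applied to the coefficients on the normalized scale, the two settings are no longer equivalent, so reusing X and y from the example you should see different coefficients and predictions.

from sklearn.linear_model import Ridge

ridge_raw = Ridge(alpha=1.0).fit(X, y)
ridge_norm = Ridge(alpha=1.0, normalize=True).fit(X, y)

# unlike plain LinearRegression, the two fits now disagree
print(ridge_raw.coef_, ridge_norm.coef_)
print(ridge_raw.predict(X[:3]), ridge_norm.predict(X[:3]))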