Just compute the dot product of the encoded values with enc.active_features_. This works for both sparse and dense representations.
For example:
from sklearn.preprocessing import OneHotEncoder
import numpy as np

orig = np.array([6, 9, 8, 2, 5, 4, 5, 3, 3, 6])
enc = OneHotEncoder()
encoded = enc.fit_transform(orig.reshape(-1, 1))

# Each row of `encoded` contains a single 1; the dot product with
# active_features_ maps it back to the original value.
decoded = encoded.dot(enc.active_features_).astype(int)
assert np.allclose(orig, decoded)
The key insight is that the active_features_ attribute of the OneHotEncoder model holds the original value corresponding to each binary column. Thus we can decode the one-hot-encoded numbers simply by computing a dot product with active_features_: for each data point there is a single 1, at the position of the original value.
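Note that active_features_ was deprecated in scikit-learn 0.20 and removed in 0.24. On recent versions, the same dot-product idea can be applied to categories_[0] (the sorted unique values of the single input column), and the built-in inverse_transform does the decoding directly. A sketch, assuming a recent scikit-learn:

```python
from sklearn.preprocessing import OneHotEncoder
import numpy as np

orig = np.array([6, 9, 8, 2, 5, 4, 5, 3, 3, 6])
enc = OneHotEncoder()
encoded = enc.fit_transform(orig.reshape(-1, 1))

# categories_[0] holds the sorted unique values of the (single) input
# column, so the dot product picks out the original value for each row.
decoded = encoded.dot(enc.categories_[0]).astype(int)
assert np.array_equal(orig, decoded)

# The built-in way: inverse_transform returns an (n_samples, 1) array.
decoded2 = enc.inverse_transform(encoded).ravel()
assert np.array_equal(orig, decoded2)
```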