An RBM is a stochastic neural network, meaning each neuron exhibits some random behavior when activated. An RBM has two separate layers of bias units (a hidden bias and a visible bias), which is one way RBMs differ from autoencoders. The hidden bias helps the RBM produce activations on the forward pass, while the visible bias helps it reconstruct the input on the backward pass. The reconstructed input always differs from the actual input because there are no connections among the visible units and therefore no way for them to transfer information among themselves.
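The stochastic behavior described above can be sketched in NumPy: hidden activations are computed as probabilities and then sampled as binary states. The layer sizes, weights, and the `sigmoid` helper here are illustrative assumptions, not part of any particular library.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical toy RBM: 6 visible units, 3 hidden units.
W = rng.normal(scale=0.1, size=(6, 3))   # visible-to-hidden weights
b_hidden = np.zeros(3)                   # hidden bias (used on the forward pass)
b_visible = np.zeros(6)                  # visible bias (used on the backward pass)

v = rng.integers(0, 2, size=6).astype(float)  # a binary visible vector

# Forward pass: the hidden bias enters the hidden-unit activation.
p_h = sigmoid(v @ W + b_hidden)
# Stochastic behavior: sample binary hidden states from the probabilities.
h = (rng.random(3) < p_h).astype(float)
```

Sampling `h` from `p_h`, rather than using the probabilities directly, is what makes the network stochastic: the same input can yield different hidden states on different runs.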
For reconstruction, the input data is first clamped to the visible units and the hidden states are estimated using the model's weights. In the second step, the visible units are computed from the newly estimated hidden states; the visible states obtained in this step are the reconstructed samples. An element-wise comparison of the input data and the reconstructed sample gives the reconstruction error.
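The two-step reconstruction and the resulting error can be sketched as follows, again assuming binary units, sigmoid activations, and illustrative toy sizes; mean squared error is used here as one possible element-wise comparison.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical toy RBM: 6 visible units, 4 hidden units.
W = rng.normal(scale=0.1, size=(6, 4))
b_h = np.zeros(4)   # hidden bias
b_v = np.zeros(6)   # visible bias

def reconstruct(v0):
    # Step 1: clamp the data to the visible units, estimate hidden states.
    p_h = sigmoid(v0 @ W + b_h)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    # Step 2: compute the visible units from the sampled hidden states.
    p_v = sigmoid(h @ W.T + b_v)   # the reconstructed sample (probabilities)
    return p_v

v0 = rng.integers(0, 2, size=6).astype(float)  # input data
v1 = reconstruct(v0)
# Element-wise comparison of input and reconstruction (mean squared error).
error = np.mean((v0 - v1) ** 2)
```

Note that the same weight matrix `W` is used in both directions (transposed on the backward pass); only the biases differ between the two steps.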