A BigDecimal is an exact way of representing numbers. A double has a fixed, limited precision. If you add doubles of very different magnitudes (say d1 = 1000.0 and d2 = 0.001), the 0.001 can end up being dropped altogether when summing, because the difference in magnitude is so large. With BigDecimal this would not happen.
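A minimal sketch of the difference (the exact numbers are just illustrative, not from any particular application):

    import java.math.BigDecimal;

    public class PrecisionDemo {
        public static void main(String[] args) {
            // With a large enough gap in magnitude, the small addend is dropped entirely.
            double big = 1.0e17;
            System.out.println(big + 0.001 == big);   // true: the 0.001 vanishes

            // Repeatedly adding a small value to a larger one accumulates rounding error.
            double d = 1000.0;
            for (int i = 0; i < 1000; i++) {
                d += 0.001;
            }
            System.out.println(d);                    // not exactly 1001.0

            // BigDecimal keeps the exact decimal value (use the String constructor).
            BigDecimal b = new BigDecimal("1000.0");
            BigDecimal step = new BigDecimal("0.001");
            for (int i = 0; i < 1000; i++) {
                b = b.add(step);
            }
            System.out.println(b);                    // exactly 1001.000
        }
    }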
The problem with BigDecimal is that it's slower, and it's a bit harder to write algorithms that way (because + - * and / are not overloaded).
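A rough illustration of what the same expression looks like in both styles (the variable names are just for the example):

    import java.math.BigDecimal;
    import java.math.MathContext;

    public class SyntaxDemo {
        public static void main(String[] args) {
            double d1 = 2.0, d2 = 3.0, d3 = 1.0, d4 = 7.0;
            double resultD = d1 * d2 + d3 / d4;   // plain operators

            BigDecimal b1 = new BigDecimal("2.0"), b2 = new BigDecimal("3.0"),
                       b3 = new BigDecimal("1.0"), b4 = new BigDecimal("7.0");
            // Same expression as method calls; divide needs a MathContext (or a scale)
            // when the exact result would not terminate (e.g. 1/7).
            BigDecimal resultB = b1.multiply(b2).add(b3.divide(b4, MathContext.DECIMAL64));

            System.out.println(resultD);
            System.out.println(resultB);
        }
    }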
If you are dealing with money, or precision is a must, use BigDecimal. Otherwise doubles tend to be good enough.
I do suggest reading the Javadoc of BigDecimal, as it describes these things better than I do here :)