nuntius wrote: If you must store money with a binary type, it's probably better to store integers representing cents (or tenths or hundredths of a cent -- i.e. fixed-point) than to store doubles representing dollars. For addition and subtraction it doesn't matter; but for multiplication (interest, taxes, etc.), proper rounding is critical.
Right, when I referred to using floats, obviously I meant that you can use floats to exactly represent integers, which in turn represent cents (or fractions of a cent) or whatever.
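To make that concrete, here's a small Python sketch of both halves of the point: an IEEE 754 double can't represent decimal fractions like 0.10 exactly, but its 53-bit significand does represent every integer up to 2**53 exactly, so integer cent counts stored in a double stay exact up to about 9e15 cents.

```python
# Decimal fractions of a dollar are inexact in binary floating point:
assert 0.1 + 0.2 != 0.3

# ...but every integer up to 2**53 is exactly representable,
# so integer cent counts stay exact in a double until then.
assert float(2**53) == 2**53

# 2**53 + 1 is the first integer a double cannot represent;
# it rounds back to 2**53.
assert float(2**53) + 1 == float(2**53)
```

(Addition of exact integers below that bound is itself exact; the trouble the quoted post mentions only starts when multiplication produces fractional results that must be rounded back to whole cents.)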
However, rounding is also something you have to be careful with. FPUs generally offer several rounding-mode settings, and different floating-point standards handle rounding differently. If you're writing code with currency computations, you'll need to learn how to make your system match the rounding rules it must adhere to -- and as I understand it those rules are not universal (though I could be wrong about that).
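As an illustration of why the convention matters (using Python's decimal module rather than FPU control flags, purely for brevity): the same exact product rounds to different cent values under round-half-up and round-half-even, so the choice of rounding rule is a correctness decision, not a detail.

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# 5% interest on $12.50 is exactly $0.625: a halfway case.
interest = Decimal("12.50") * Decimal("0.05")

# Round-half-up (common in financial rules) vs. round-half-even
# (IEEE 754's default, a.k.a. banker's rounding).
cents_up = interest.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
cents_even = interest.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

print(cents_up, cents_even)  # 0.63 0.62
```

A one-cent disagreement per transaction is exactly the kind of error that auditors notice, which is why the applicable definitions have to drive the code rather than the FPU's defaults.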
All in all, floats probably are the wrong way to go about this --- but not for the reasons mostly given here. Many programmers are fundamentally pretty confused about what floating-point numbers are. Many are also a bit confused about currency computations, if they've thought about them at all. Putting those two together can't help.