Wow... is this how most programmers deal with numbers that have decimal points? If not, how does one generally program with mathematical accuracy? I've been reading links for a while, and it seems there isn't an easy solution for doing accurate math. Maybe Vedic Mathematics?

Floating point numbers are extremely useful, but in order to get sensible results you need to know what they are and what you are doing.

Others have mentioned BCD and similar techniques, but the key thing to realize is that there is no single "right" answer to the fundamental problem: you can't represent an arbitrary real number at full precision in a computer.
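As a quick sketch of that fundamental problem (in Python, though the same happens in any language using binary floating point): 0.1 has no exact binary representation, so the stored value is only an approximation, and the error shows up as soon as you compare computed results.

```python
from decimal import Decimal

# 0.1 and 0.2 are stored as the nearest binary fractions,
# so their sum is not exactly the binary value nearest 0.3.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004
print(a == 0.3)  # False

# Converting the float to Decimal exposes the value actually stored:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```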

BCD works well when the fractional part has a known, fixed magnitude — currency with exactly two decimal digits, for example.
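A sketch of the same idea — exact base-10 digits with a fractional part of known magnitude — using Python's `decimal` module (which stores decimal digits much like BCD does, though it is technically decimal floating point rather than BCD proper):

```python
from decimal import Decimal

# Constructing from strings keeps the decimal values exact;
# Decimal(0.1) would inherit the binary float's error.
price = Decimal("19.99")
tax_rate = Decimal("0.05")

# quantize() fixes the result to two decimal places, the known
# magnitude of the fractional part for currency.
total = (price * (1 + tax_rate)).quantize(Decimal("0.01"))
print(total)  # 20.99
```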

Basically, floating point numbers are rational approximations to real numbers that (a) have a fixed precision and (b) are logarithmically distributed over a large range of magnitudes. This is extremely useful, but as you've already found out, if you just treat them as real numbers you'll easily run into grief.
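One concrete example of the grief, and the standard defensive habit: never test computed floats for exact equality; compare within a tolerance instead. A sketch in Python:

```python
import math

# Ten tenths should sum to exactly 1.0 in real arithmetic,
# but each 0.1 carries a tiny representation error that accumulates.
total = sum([0.1] * 10)
print(total)        # 0.9999999999999999
print(total == 1.0)  # False

# Compare within a relative tolerance rather than for exact equality.
print(math.isclose(total, 1.0, rel_tol=1e-9))  # True
```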