Well, here is what I think. BCD does have speed advantages over rationals and bignums under certain circumstances.
If you use rationals and perform too many operations (without rounding the result), the numerator and denominator usually grow indefinitely (since the probability that the fraction simplifies is low). Not to mention that a generic sum of rationals needs two multiplications (to bring the fractions to a common denominator), a gcd computation, and two integer divisions (to simplify the resulting fraction by dividing through by the gcd). The result of every operation is potentially bigger than its operands. This means consing and ever-bigger numbers. And if the numbers keep growing, they chew up memory and operations on them get slower.
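To make this concrete, here's a quick Python sketch (the standard `fractions` module standing in for Lisp rationals); iterating a simple map without ever rounding makes the denominator explode, because the intermediate fractions almost never simplify:

```python
from fractions import Fraction

# Iterate x -> x + 1/(x+1) starting from 1/3. Each step performs the
# cross-multiplications, the gcd, and the two divisions described above.
x = Fraction(1, 3)
for _ in range(12):
    x = x + Fraction(1, x + 1)

# The denominator roughly doubles in length every step, so after only
# 12 iterations it already has thousands of digits.
print(len(str(x.denominator)))
```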
An alternative approach to this problem is to work with a structure holding a value and an exponent, representing the number val*10^exp. With appropriate rounding, it would be as reliable as BCD, but this can be a speed headache for big numbers as well, since, for instance, every time you add two numbers you need to compute a power (say, to add 3*10^-3 to 2*10^-100 you have to multiply 3 by 10^97, and therefore you need to calculate 10^97 (unless you store a vector of precomputed powers, but that may chew up your memory pretty fast)). A power is a slow operation compared to a sum. Also, after operating on two numbers, you need to divide the value by the largest power of 10 that divides it (so the result occupies as little memory as possible and further operations on it stay fast).
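A minimal Python sketch of addition on such (val, exp) pairs (the names are made up for illustration, not from any existing library):

```python
def dec_add(a_val, a_exp, b_val, b_exp):
    # Align exponents: the operand with the larger exponent must be
    # scaled by 10^(difference) -- the costly power mentioned above
    # (e.g. adding 3*10^-3 and 2*10^-100 scales 3 by 10^97).
    if a_exp < b_exp:
        a_val, a_exp, b_val, b_exp = b_val, b_exp, a_val, a_exp
    val = a_val * 10 ** (a_exp - b_exp) + b_val
    exp = b_exp
    # Normalize: divide out the largest power of 10 so the value
    # occupies as little memory as possible.
    while val != 0 and val % 10 == 0:
        val //= 10
        exp += 1
    return val, exp

print(dec_add(3, -3, 2, -100))  # 3*10^-3 + 2*10^-100
```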
And, last in this analysis, bignums: using plain bignums while assuming that every number is a multiple of some pre-defined epsilon (say, 10^-100). I can see two problems. First, as the name says, epsilon is predefined; if you change it, it ruins your existing data. Second, if you want a small epsilon, then even small integers are represented by huge bignums (in the example, the integer 1 is stored as 10^100), which slows your computations on normal and big numbers alike and chews up memory.
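The second problem is easy to see in a tiny Python sketch (the scheme and names are assumptions for illustration):

```python
EPSILON_EXP = 100  # epsilon = 10^-100, fixed in advance

def to_fixed(n):
    # The integer n is stored as the bignum n * 10^100.
    return n * 10 ** EPSILON_EXP

# Even the integer 1 becomes a 101-digit bignum:
print(len(str(to_fixed(1))))  # -> 101
```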
OTOH, a BCD float (using the same struct as before, with a BCD val and a base-10 exp) would be much more reliable. A sum needs a shift, a bignum-like sum (which is O(log(n)), just like a normal bignum sum), and a final shift to normalize the number. A multiplication needs a sum (of the exponents) and a bignum-like multiplication (which is O(log(n)*log(m)) when you multiply n by m), followed by a normalization. And so on. It will also occupy only as much memory as you need, no more, if you round appropriately.
Ok, all these issues only appear if you are working with big numbers or need high precision. But in that case, I believe BCD would be faster than all these alternatives for general operations. And, in general, it shouldn't be much slower than regular integers and rationals.
Just a final idea I've just had. Instead of assuming that 4 bits represent a digit, you can split the number every 32 bits, as if it were coded in base 10^9 instead of base 10 (10^9 fits in 30 bits, since 2^30 ~ 10^9). (Hmm, here 64-bit architectures win, because a 64-bit word can hold a 19-digit decimal, so you barely throw away any bits.) For instance, if you have a bignum with 2 fields of 32 bits, then
0x0000 0001 0000 0000
represents the number 10^9, while
0x0000 0000 0000 000A
is valid and represents the number 10 as usual. This way you gain one more digit per 32 bits compared to normal BCD, and you can use the hardware's 32-bit addition. But you throw away 2 precious bits. And the
exp value of the struct could represent a power of 10^9 as well. It would not even be hard to use, say, SBCL's internals to operate on bignums this way, much like normal bignums are already implemented. The operations would be very much alike, except for the normalization, so I bet the time taken would be no more than a factor of 1.5 over what normal bignum operations take (bignums representing integers, not wrapped in a structure like the one above).
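A rough Python model of this limb scheme, with a list of 32-bit words read as base-10^9 digits (little-endian here for convenience; all names are made up):

```python
BASE = 10 ** 9  # one 32-bit word holds one base-10^9 "digit"

def limbs_to_int(limbs):
    # limbs are little-endian: limbs[0] is the least significant word.
    n = 0
    for limb in reversed(limbs):
        n = n * BASE + limb
    return n

def limb_add(a, b):
    # Schoolbook addition with carry, one hardware-sized word at a time.
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        carry, digit = divmod(s, BASE)
        result.append(digit)
    if carry:
        result.append(carry)
    return result

# [0, 1] is the two-word number 0x00000001_00000000 from the example:
# high word 1, low word 0, i.e. 1 * 10^9.
print(limbs_to_int([0, 1]))        # -> 1000000000
print(limb_add([999999999], [1]))  # -> [0, 1]: the sum carries into the next word
```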
Well, I guess I'll just save you all from reading an even longer post. This seems like a neat project for Lisp; perhaps I'll do it during my vacation.