Binary->Decimal algorithm? (floating point)


User avatar
tetsujin
Posts: 426
Joined: Thu Nov 15, 2007 8:34 pm UTC
Location: Massachusetts
Contact:

Binary->Decimal algorithm? (floating point)

Postby tetsujin » Fri Mar 04, 2011 6:58 pm UTC

Hey, I was wondering if somebody could tell me a good method for converting binary floating-point to decimal (without using standard libraries, etc. obviously...)

I'm trying to create an implementation in Little Big Planet 2, using their logic gate system. So an approach that is relatively easy to implement in "hardware" is probably advantageous.

I built one by adapting the "double dabble" method to handle negative powers of two. But to perform that calculation on a 24-bit input (to 16 digit BCD output) without iteration means almost 200 stages of "(x>7)? (x-3):x" - a total of about 2000 logic gates. The implementation works but it's too resource-intensive to be actually useful.

Learning about double dabble and building that implementation of it has made me curious to know if there's a better option - I figured this would probably be a good place to find out...
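
For reference, here's a software model of roughly what the unrolled network computes (Python, purely to show the logic; the digit count is just for illustration, and the in-game version is all gates):

def frac_to_bcd(bits, n_digits):
    # bits[0] is the 2^-1 bit, bits[-1] the lowest-order bit.
    # Returns n_digits BCD digits, most significant first.
    digits = [0] * n_digits
    for b in reversed(bits):                               # start with the lowest-order bit
        carry = b                                          # bit shifted into the top digit
        for i in range(n_digits):
            shifted = (digits[i] >> 1) | (carry << 3)      # halve the digit, bring the carry in as 8
            carry = digits[i] & 1                          # bit that falls down into the next digit
            digits[i] = shifted - 3 if shifted > 7 else shifted   # the (x>7)?(x-3):x stage
    return digits

# frac_to_bcd([1, 0, 1], 4) -> [6, 2, 5, 0], i.e. binary 0.101 = 0.625
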
---GEC
I want to create a truly new command-line shell for Unix.
Anybody want to place bets on whether I ever get any code written?

User avatar
Yakk
Poster with most posts but no title.
Posts: 11077
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Binary->Decimal algorithm? (floating point)

Postby Yakk » Fri Mar 04, 2011 7:33 pm UTC

How are you at hardware optimization? Do you have access to an optimizing HDL compiler?

I don't see where the (x>7)?(x-3):x shows up in the double dabble algorithm. I see them adding 3, not subtracting 3. So I'm confused -- is that due to your optimization?

(x>7)?(x-3):x can be implemented in ... well, not that many gates. 10 to 20? Doing it 64 times (the number of bits in a 16 digit BCD number) should take less than 2000 gates.

You aren't implementing a full adder (or subtractor) for that, are you? (x>7)?(x-3):x is a 4 bit to 4 bit function. Each output bit is a function of 2-4 input bits, costing 1-4 gates to calculate, probably less if you reuse partial results.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR


User avatar
tetsujin
Posts: 426
Joined: Thu Nov 15, 2007 8:34 pm UTC
Location: Massachusetts
Contact:

Re: Binary->Decimal algorithm? (floating point)

Postby tetsujin » Fri Mar 04, 2011 8:25 pm UTC

Yakk wrote:How are you at hardware optimization? Do you have access to an optimizing HDL compiler?


I don't know - I might have access to something like that... No idea really. Certainly large-scale integration and optimization of the whole logic circuit would be one way to deal with this. :)

I could possibly think about doing some medium-scale integration of the circuit by hand: take, say, 2 columns of 2 rows of processing stages, work out the truth table for their 8 bits of input and 9 bits of output, and optimize that...

I don't see where the (x>7)?(x-3):x shows up in the double dabble algorithm. I see them adding 3, not subtracting 3. So I'm confused -- is that due to your optimization?


The normal double dabble algorithm operates on integers. You start with high-order bits and incorporate more bits by shifting and iterating. I wasn't sure exactly how double-dabble could be applied to floating-point numbers, so I experimented until I found something that worked. Basically, it starts with low-order bits, and uses the inverse of the regular double dabble function. (In integer double dabble, the processing stage would be ((x>4)?(x+3):x)... ((x>7)? (x-3):x) is the inverse of that function.)
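
A quick sanity check of that inverse relationship, written as Python rather than gates:

fwd = lambda x: x + 3 if x > 4 else x   # integer double dabble stage, applied before the left shift
rev = lambda x: x - 3 if x > 7 else x   # my stage, applied after the right shift
assert all(rev(fwd(d)) == d for d in range(10))   # rev undoes fwd on every BCD digit 0-9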

Basically, the farther you get from 2^0, the more decimal digits are affected. In the integer case, the high-order bits of the input will affect all digits of your output. (Since no power of 2 is also a power of 10...) In the floating-point case, it's the low-order bits that have the wider impact (2^-1 = 0.5 affects just one digit, while 2^-16 has 12 significant decimal digits, and any one of those, via addition with higher-order digit values, may impact digits all the way up to the 10^-1 place) - so when dealing with bits corresponding to negative powers of two, you need to start with the least-significant bit and work your way up to the most-significant one. Hence, inverting the double dabble stage-processing function. I didn't do the math to know exactly why that works, I just experimented and discovered that it did...

(x>7)?(x-3):x can be implemented in ... well, not that many gates. 10 to 20? Doing it 64 times (the number of bits in a 16 digit BCD number) should take less than 2000 gates.


I implemented it in 9 gates, IIRC. (One of them was a three-input AND gate, so I don't know if that counts as one gate or two.)

The thing is, unrolling the "double dabble" loop and implementing it in logic gates means taking that single stage and forming a giant triangle - each successive column has one more processing stage than the one before it, until you reach the number of output digits. The first column has one stage, and the last column has a number of stages equal to the number of output digits. Since each column incorporates one more bit of input than the one before it, the number of columns is equal to the number of input bits. The total number of processing stages is therefore (digits * (bits - digits)) + (digits * (digits + 1))/2.

You aren't implementing a full adder (or subtractor) for that, are you? (x>7)?(x-3):x is a 4 bit to 4 bit function. Each output bit is a function of 2-4 input bits, costing 1-4 gates to calculate, probably less if you reuse partial results.


Yeah, I already optimized the logic of the 4 bit to 4 bit function... I knew from the start that implementing this thing would be very resource-intensive for the game so I did the optimization pretty much from the beginning, and refined it as much as I could later on. For instance, (x > 7) is equivalent to (x & 8), and bit zero flips when the high-bit of the input is set, so out0 = in0 XOR in3...

I was wasting a bit of logic for a while handling a few of the cases (for instance a lot of "a AND not in3 OR b AND in3") - eventually I figured out that it was more productive to think about cases where out1 and out2 didn't match in1 and in2, and use an XOR gate to flip the state of the corresponding input in those cases.

Another optimization I did was a special case for the stages that only take in one bit of input. The very first column has one processing stage, taking the lowest-order bit as in3 - so instead of implementing ((x>7)? (x-3):x) I just did ((x&8)? 5:0) - no gates at all for those 16 low-order processing stages...
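
For what it's worth, here's one full set of per-bit equations along those lines, with a software check against the 4-bit function (a plausible minimization, not necessarily exactly what I wired up in-game):

def stage_bits(x):
    # One possible gate-level realization of (x>7) ? (x-3) : x on a 4-bit input.
    in0, in1, in2, in3 = (x >> 0) & 1, (x >> 1) & 1, (x >> 2) & 1, (x >> 3) & 1
    out0 = in0 ^ in3                                   # bit 0 just flips when the high bit is set
    out1 = in1 ^ (in0 & in3)
    out2 = in2 ^ (in3 & (in2 | (1 - (in1 & in0))))     # 1 - (...) plays the role of a NOT gate
    out3 = in3 & (in2 | (in1 & in0))
    return out0 | (out1 << 1) | (out2 << 2) | (out3 << 3)

assert all(stage_bits(x) == (x - 3 if x > 7 else x) for x in range(16))
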
---GEC
I want to create a truly new command-line shell for Unix.
Anybody want to place bets on whether I ever get any code written?

jareds
Posts: 436
Joined: Wed Jan 03, 2007 3:56 pm UTC

Re: Binary->Decimal algorithm? (floating point)

Postby jareds » Mon Mar 07, 2011 9:33 am UTC

You have indeed figured out how to convert the fractional portion of a binary number to decimal. The standard double dabble algorithm works because you're repeatedly doubling a BCD number and then adding a 1 or 0 to it by shifting in the low bit. The adjustment accounts for the fact that to double a digit d>=5 in BCD you want 16+(d*2-10)=(d+3)*2. In your case, you're repeatedly halving a BCD number and then, with your adjustment, adding a 0 or 0.5 by shifting in the high bit. Your adjustment accounts for the fact that, if you halve an odd digit, you want to add 5 to the next digit, but shifting in a 1 adds 8 instead.
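
Written as a loop, the standard integer double dabble is just this (a Python model for reference; hardware does the same thing with an unrolled adjust-and-shift network):

def int_to_bcd(value, n_bits, n_digits):
    digits = [0] * n_digits                        # digits[0] is the most significant
    for i in range(n_bits - 1, -1, -1):            # feed the bits in MSB first
        # a digit d >= 5 would overflow its nibble when doubled;
        # adding 3 first gives (d+3)*2 = 16 + (2d-10), i.e. a carry plus the right remainder
        digits = [d + 3 if d >= 5 else d for d in digits]
        carry = (value >> i) & 1                   # next input bit, shifted in at the low end
        for j in range(n_digits - 1, -1, -1):
            digits[j] = (digits[j] << 1) | carry
            carry = digits[j] >> 4                 # bit that moves up into the next digit
            digits[j] &= 0xF
    return digits

# int_to_bcd(243, 8, 3) -> [2, 4, 3]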

However, converting a binary fraction to decimal is not the same thing as converting a binary floating-point number to decimal. It ignores the namesake of floating-point numbers, the floating binary point. Converting the mantissa while ignoring the exponent is useless. If you have Avogadro's number, and you work out that the mantissa is "1.9926" and thus the number is "1.9926*2^78", you have not taken a useful step toward converting the number to decimal.

You will need to account for the exponent, for example by multiplying your number by 10^-ceiling(x*log_10(2)), where x is your number's binary exponent, and then converting the product to decimal. Naturally this will be fairly expensive. Converting binary floating-point to decimal without using iteration is not a realistic goal. In fact, if you want a small number of gates, you are generally better off with an algorithm that takes a larger number of clock cycles but doesn't have to do as much in each cycle. Note also that normally computers convert floating-point to decimal in software, not hardware, although obviously there is nothing impossible about implementing it in hardware.
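
As a rough illustration of that scaling step (host floating point standing in for whatever arithmetic you would actually build, so this shows the shape of the idea rather than a correctly rounded conversion):

import math

def to_decimal(mantissa, exp, digits=8):
    # value = mantissa * 2^exp, with 1 <= mantissa < 2
    k = math.ceil(exp * math.log10(2))      # decimal exponent to factor out
    frac = mantissa * 2.0 ** exp / 10.0 ** k
    if frac >= 1.0:                         # can happen when exp*log10(2) lands on an integer
        frac /= 10.0
        k += 1
    out = []
    for _ in range(digits):                 # peel off decimal digits one at a time
        frac *= 10.0
        d = int(frac)
        out.append(str(d))
        frac -= d
    return "0." + "".join(out) + "e%+d" % k

# to_decimal(1.9926, 78) starts "0.6022..." with exponent +24 -- about Avogadro's number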

[Edit to add: If you are able to use iteration and are not concerned about speed, a decent approach for small gate count is to keep running normal or reverse double dabble (depending on the sign of the exponent), shifting in additional 0 bits to account for the exponent.]

However, as a final warning I would point out that in highly resource-constrained environments, like a simple hand-held calculator, it is better and most common to do arithmetic directly in BCD. [Edit to add: Note though that it's not just the resource constraints: in calculators, a base conversion would otherwise be necessary for each operation anyway. If decimal I/O is rare, the decision might go the other way.]

For reference, How to Print Floating-Point Numbers Accurately is a classic article on converting floating-point to decimal, but in this application you shouldn't worry about doing it perfectly.

User avatar
tetsujin
Posts: 426
Joined: Thu Nov 15, 2007 8:34 pm UTC
Location: Massachusetts
Contact:

Re: Binary->Decimal algorithm? (floating point)

Postby tetsujin » Wed Mar 09, 2011 6:37 pm UTC

jareds wrote:You have indeed figured out how to convert the fractional portion of a binary number to decimal. The standard double dabble algorithm works because you're repeatedly doubling a BCD number and then adding a 1 or 0 to it by shifting in the low bit. The adjustment accounts for the fact that to double a digit d>=5 in BCD you want 16+(d*2-10)=(d+3)*2. In your case, you're repeatedly halving a BCD number and then, with your adjustment, adding a 0 or 0.5 by shifting in the high bit. Your adjustment accounts for the fact that, if you halve an odd digit, you want to add 5 to the next digit, but shifting in a 1 adds 8 instead.

However, converting a binary fraction to decimal is not the same thing as converting a binary floating-point number to decimal. It ignores the namesake of floating-point numbers, the floating binary point.


Yeah, I realized this some time after I posted my implementation: floating point is the wrong term in this case. The value I'm getting out is 24-bit fixed point - and therefore it's only a full-precision representation of the (single-precision floating-point) input value x when (0.5 <= x < 1).

However, as a final warning I would point out that in highly resource-constrained environments, like a simple hand-held calculator, it is better and most common to do arithmetic directly in BCD.


Well, the reason I'm working with binary in this case is because I'm trying to get a more precise reading of an in-game value. (It's not a very practical thing, in-game, due to the cost of all the conversion logic, but it's a fun exercise and I think it'd be fun to further refine it.)

Basically, in LBP2, there are different sensors and things that output "analog signals" - (single-precision, apparently) floating point numbers that are normally constrained to the range [-1, 1]. At times it's useful to get a reading of the value of one of these signals (for diagnostic purposes while creating things, etc.) and so people build meters and probes to do this.

The normal approach is, as you say, to just do everything in base 10. Usually pulling two digits at a time (because it would be very complicated to pull three) - then subtract the value of those digits from the source signal, and, if you want more digits, scale up by a factor of 100, and repeat the process. This does result in some precision loss, but it's a very cheap way to do it and IMO it's quite sufficient for just about any practical application.
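
In software terms the usual probe looks something like this (names made up; in-game it's wired from the math gadgets):

def probe(signal, pairs=3):
    # Assumes |signal| < 1, as with LBP2 analog signals.
    x = abs(signal)
    out = []
    for _ in range(pairs):
        two = int(x * 100)            # read the next two decimal digits
        out.append("%02d" % two)
        x = x * 100 - two             # subtract what we read and scale back up
    return ("-" if signal < 0 else "") + "0." + "".join(out)

# probe(0.625) -> "0.625000"; each pass can lose a little precision in real float math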

The binary reading and decimal conversion approach was an exercise: to me playing with logic is one of the things that's most fun in LBP2 (although wiring complicated stuff can get very slow...) - and when I learned about Double Dabble I was eager to see what I could do with it. If I can find ways to make the approach work better, I'd like to - but the implementation will almost certainly remain just a curiosity at best. One of the appealing things about this approach is that there's no precision loss: If a bit is set in the source value, you can subtract the value of that bit from the source number without losing any precision. By contrast, picking apart the number by subtracting multiples of 0.1 and 0.01 and multiplying by 100 can cause precision loss at each step.

Thanks for the info, I'll take a look at that article you posted. Even if it's not useful in LBP, I'm sure it'll be enlightening.
---GEC
I want to create a truly new command-line shell for Unix.
Anybody want to place bets on whether I ever get any code written?

User avatar
Yakk
Poster with most posts but no title.
Posts: 11077
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Binary->Decimal algorithm? (floating point)

Postby Yakk » Wed Mar 09, 2011 6:53 pm UTC

Btw, why single cycle? Isn't a single cycle shorter than what a human being can see?
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR


User avatar
tetsujin
Posts: 426
Joined: Thu Nov 15, 2007 8:34 pm UTC
Location: Massachusetts
Contact:

Re: Binary->Decimal algorithm? (floating point)

Postby tetsujin » Wed Mar 09, 2011 8:33 pm UTC

Yakk wrote:Btw, why single cycle? Isn't a single cycle shorter than what a human being can see?


In the game, no. Game cycles run at 30 Hz. You can get quite a lot done in a single cycle (for instance, large amounts of sequential logic) - but feedback or any other sources of latency introduce a 1/30 second delay.

1/30 second isn't too much, of course, but it adds up quickly. If I did the double dabble method iteratively on a 24 bit input (one iteration per cycle) it'd take almost a full second to see an update on the display. That's assuming the iteration really could be done in a single cycle: a simple implementation could take two. (If you used a pulse to trigger a D-flip-flop - it'd take one cycle for the pulse to go high and another for the pulse to go low. You could probably set it up so the flip-flop latches its input on both rising and falling edges, but that's more logic...)

I already have to introduce two frames of latency in order to read the 24 bits of data (using three 8-bit "A-D" modules - each "A-D" module introduces one frame of latency, and the output of the first is needed to calculate the input to the second...)
---GEC
I want to create a truly new command-line shell for Unix.
Anybody want to place bets on whether I ever get any code written?

User avatar
Yakk
Poster with most posts but no title.
Posts: 11077
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Binary->Decimal algorithm? (floating point)

Postby Yakk » Wed Mar 09, 2011 9:25 pm UTC

What I was thinking was a display that converged to its input. If the input is in constant flux, the output isn't accurate. If it isn't, it gets more accurate as ticks progress.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR


User avatar
tetsujin
Posts: 426
Joined: Thu Nov 15, 2007 8:34 pm UTC
Location: Massachusetts
Contact:

Re: Binary->Decimal algorithm? (floating point)

Postby tetsujin » Wed Mar 09, 2011 11:19 pm UTC

Yakk wrote:What I was thinking was a display that converged to its input. If the input is in constant flux, the output isn't accurate. If it isn't, it gets more accurate as ticks progress.


Well, given that this level of precision really doesn't have much practical use anyway, iteration could be a viable solution. Even if it made the meter slower, it would at least be something you could practically stick into a level to test things. At the very least it could be useful as a benchmark for testing other meters...

But the double dabble method doesn't really "converge" - you apply a fixed number of iterations (one per input bit) to reach your answer. If you apply one too many iterations, your result is garbage... If you haven't applied enough iterations, you're seeing some weirdly shifted-around representation of the low-order bits of input in the high-order digits of output... Not something that's particularly meaningful.
---GEC
I want to create a truly new command-line shell for Unix.
Anybody want to place bets on whether I ever get any code written?

User avatar
Yakk
Poster with most posts but no title.
Posts: 11077
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Binary->Decimal algorithm? (floating point)

Postby Yakk » Thu Mar 10, 2011 12:08 am UTC

Sure, you cannot use the off-the-shelf dabble.

You do know that there's no need to actually shift in dabble, right? You have wires, not registers.

Anyhow, generate an approximation in decimal and binary.

Compare your binary approximation to the input. Generate a rough order of magnitude difference and direction.

Modify the decimal and binary approximation based on that.

To reduce the length of the data paths, when you do the modification, output two register banks -- carry and value, where carry is "leftover" unadded carry digits that you haven't had time to factor in. The more carry iterations you do, the more accurate your value is.

When you calculate your error fudge, it has to be "smaller" than your carry propagation, so you'll converge instead of oscillating.

As a fun side effect, if it is slow enough you'll be able to see the decimal digits converge...
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR


User avatar
tetsujin
Posts: 426
Joined: Thu Nov 15, 2007 8:34 pm UTC
Location: Massachusetts
Contact:

Re: Binary->Decimal algorithm? (floating point)

Postby tetsujin » Thu Mar 10, 2011 5:12 am UTC

Yakk wrote:Sure, you cannot use the off-the-shelf dabble.

You do know that there's no need to actually shift in dabble, right? You have wires, not registers.


My non-iterative version is built that way, yes. It's a big (truncated) triangle of dabble circuits.

An iterative version would have to select one input bit at a time - not a shift exactly, but still a fair bit of logic...

Anyhow, generate an approximation in decimal and binary.

Compare your binary approximation to the input. Generate a rough order of magnitude difference and direction.

Modify the decimal and binary approximation based on that.

To reduce the length of the data paths, when you do the modification, output two register banks -- carry and value, where carry is "leftover" unadded carry digits that you haven't had time to factor in. The more carry iterations you do, the more accurate your value is.

When you calculate your error fudge, it has to be "smaller" than your carry propagation, so you'll converge instead of oscillating.

As a fun side effect, if it is slow enough you'll be able to see the decimal digits converge...


Huh? I don't get it... Carry digits? "Carry" happens as part of the "shift"...

And how do you calculate error? Translate the decimal back into binary and subtract?
---GEC
I want to create a truly new command-line shell for Unix.
Anybody want to place bets on whether I ever get any code written?

User avatar
Yakk
Poster with most posts but no title.
Posts: 11077
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Binary->Decimal algorithm? (floating point)

Postby Yakk » Thu Mar 10, 2011 5:35 pm UTC

I'm proposing a non-dabble solution.

One of the things that makes addition expensive/slow is that you want the accurate value after one "cycle".

However, if you have a "carry break" at a certain point (say, every 3-4 bits), the complexity of your circuit drops massively. The error in your output falls to 1/2^3 to 1/2^4, with the unpropagated part stored in an "incomplete carry" register.
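
A toy software model of that carry break (group width and names made up for illustration):

GROUPS, MASK = 6, 0xF                    # a 24-bit value as six 4-bit groups, least significant first

def tick(value, carry_in, addend):
    # One clock tick: add group-by-group, deferring each group's carry to the next tick.
    new_value, new_carry = [], [0]       # nothing carries into the lowest group
    for i in range(GROUPS):
        s = value[i] + addend[i] + carry_in[i]
        new_value.append(s & MASK)       # cheap 4-bit add, no long carry chain
        new_carry.append(s >> 4)         # the "incomplete carry", folded in next tick
    return new_value, new_carry[:GROUPS]

# Adding 1 to 0xFFFFFF: the sum only settles after the carry walks up, one group per tick.
value, carry = [0xF] * GROUPS, [0] * GROUPS
value, carry = tick(value, carry, [1, 0, 0, 0, 0, 0])
while any(carry):
    value, carry = tick(value, carry, [0] * GROUPS)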

For base 10 numbers (say, BCD) similar things apply.

If you can generate an estimated error of your BCD approximation relative to your input value, based off the binary (estimate + error) register, you can add a correction factor (in both BCD and binary, in the same (estimate) + (error) form) to the binary and BCD estimates, folding one iteration of error into the estimate each tick...

Does that make sense?

While it is much more complex than a simple generated array of dabble circuits, it might be smaller. It will definitely be "less deep", but as this is simulated hardware and not real hardware the depth probably matters less than the area.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR


ghertz
Posts: 2
Joined: Wed Apr 11, 2018 1:48 pm UTC

Re: Binary->Decimal algorithm? (floating point)

Postby ghertz » Wed Apr 11, 2018 5:14 pm UTC

I know the traditional way to convert decimal fractions (fixed-point) to binary notation. The traditional way is below:


-------------------------------------------------------------------------------
[-A-] Traditional technique for converting decimal number 0.47
-------------------------------------------------------------------------------
step 1: 0.47 * 2 = 0.94 ; store integral part: 0
step 2: 0.94 * 2 = 1.88 ; store integral part: 1
step 3: 0.88 * 2 = 1.76 ; store integral part: 1
step 4: 0.76 * 2 = 1.52 ; store integral part: 1
step 5: 0.52 * 2 = 1.04 ; store integral part: 1
[-B-]final answer: 0.01111......
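
In code, [-A-] is just this repeated-doubling loop:

def dec_frac_to_bin(frac, n_bits):
    # [-A-] as a loop: repeatedly double and peel off the integral part.
    bits = []
    for _ in range(n_bits):
        frac *= 2                   # the multiplication step I'm trying to avoid
        bits.append(int(frac))      # store the integral part
        frac -= int(frac)
    return bits

# dec_frac_to_bin(0.47, 5) -> [0, 1, 1, 1, 1], matching steps 1-5 above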


Question: Can I get the final answer [-B-] using a shift operation? I am looking for something similar to Double Dabble. I don't want to use [-A-] because I'm trying to avoid the multiplication core in an FPGA: it takes extra clock cycles for each multiplication operation.


[-C-] Summary:
1. Decimal/BCD (whole) --> Binary: use Reverse Double Dabble.
2. Decimal/BCD (fraction) --> Binary: ????? (I don't want to use [-A-])
3. Binary (whole) --> Decimal/BCD: use Double Dabble.
4. Binary (fraction) --> Decimal/BCD: use Reverse Double Dabble.


For [-C-].2, I'm trying to find/come up with a technique similar to [-C-].1, [-C-].3, and [-C-].4. Any suggestions?

