I am writing a class for arbitrary-precision floating-point numbers (a fraction of two BigIntegers). However, the conversion to a string is wrong. Here is the code:

    public string ToString(int precision)
    {
        BigInteger remainder;
        // Integer part of the fraction.
        BigInteger result = BigInteger.DivRem(numerator, denominator, out remainder);
        if (remainder == 0)
            return result.ToString();

        // Scale so the first `precision` fractional digits become an integer.
        BigInteger decimals = (numerator * BigInteger.Pow(10, precision)) / denominator;
        if (decimals == 0)
            return result.ToString();

        // Collect the digits from least to most significant, then reverse.
        StringBuilder sb = new StringBuilder();
        while (precision-- > 0 && decimals > 0)
        {
            sb.Append(decimals % 10);
            decimals /= 10;
        }
        return result + "." + new string(sb.ToString().Reverse().ToArray());
    }

The trouble is that zeros immediately after the decimal point are not written: dividing 3 by 34 should give 0.0882..., but the method returns 0.882... The problem seems to be in this line:

 BigInteger decimals = (numerator * BigInteger.Pow(10, precision)) / denominator; 
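For example, with precision = 4 this computes (3 × 10000) / 34 = 30000 / 34 = 882 (integer division). That is only three digits instead of four, and a BigInteger has no way of remembering a leading zero, so the result prints as .882 instead of .0882.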

Please tell me how to work out how many zeros there should be and how to normalize the output.

  • You are already multiplying your number by a power of ten yourself: BigInteger.Pow(10, precision) - tym32167
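As that comment hints, the fraction is already scaled by 10^precision, so the fractional part must contain exactly precision digits; any shortfall in the digit count of decimals is exactly the number of missing leading zeros. A minimal sketch of that idea (using the remainder from the DivRem call so the integer part cannot leak into the digits; negative values are ignored here):

    // Scale only the remainder, so `scaled` holds exactly the first
    // `precision` fractional digits as a single integer.
    BigInteger scaled = (remainder * BigInteger.Pow(10, precision)) / denominator;
    // BigInteger.ToString() drops leading zeros; PadLeft restores them,
    // since by construction the fractional part has `precision` digits.
    return result + "." + scaled.ToString().PadLeft(precision, '0');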

1 answer

Remove one check here: it stops the loop as soon as decimals runs out of digits, so the padding zeros (appended last, and moved to the front by the reversal) never make it into the output:

    while (precision-- > 0 /* && decimals > 0 */)
    {
        sb.Append(decimals % 10);
        decimals /= 10;
    }
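For completeness, here is a minimal sketch of the whole method with that fix applied. The BigFraction class name and constructor are assumptions made for illustration (the fields match the question's code); negative values and trimming of trailing zeros are not handled:

    using System;
    using System.Linq;
    using System.Numerics;
    using System.Text;

    public class BigFraction // hypothetical wrapper around the question's fields
    {
        private readonly BigInteger numerator;
        private readonly BigInteger denominator;

        public BigFraction(BigInteger numerator, BigInteger denominator)
        {
            this.numerator = numerator;
            this.denominator = denominator;
        }

        public string ToString(int precision)
        {
            BigInteger remainder;
            BigInteger result = BigInteger.DivRem(numerator, denominator, out remainder);
            if (remainder == 0)
                return result.ToString();

            BigInteger decimals = (numerator * BigInteger.Pow(10, precision)) / denominator;
            if (decimals == 0)
                return result.ToString();

            StringBuilder sb = new StringBuilder();
            // Runs exactly `precision` times: once `decimals` reaches 0, the
            // remaining iterations append the zeros that were previously lost.
            while (precision-- > 0)
            {
                sb.Append(decimals % 10);
                decimals /= 10;
            }
            return result + "." + new string(sb.ToString().Reverse().ToArray());
        }
    }

Usage, reproducing the example from the question:

    var f = new BigFraction(3, 34);
    Console.WriteLine(f.ToString(4)); // prints 0.0882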