You misunderstand what precision means in the Decimal class. Although Decimal is designed to overcome the limitations of float (limited accuracy and the fundamental inability to represent some decimal fractions exactly, for example 0.3), it is based on similar principles for representing numbers. Any Decimal number is written as a mantissa multiplied by a power of ten:
```
123.456 = 1.23456 * 10^2
          ^^^^^^^     ^
          mantissa    exponent
```
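Internally, Decimal actually stores an integer coefficient plus an exponent (here 123456 * 10^-3, which is the same value); you can inspect both with as_tuple():

```python
from decimal import Decimal

d = Decimal("123.456")
# The stored form: sign, tuple of coefficient digits, and exponent
print(d.as_tuple())
# DecimalTuple(sign=0, digits=(1, 2, 3, 4, 5, 6), exponent=-3)
```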
The precision in Decimal controls the number of significant digits of the mantissa, not the number of digits after the decimal point. It is applied during arithmetic operations:
```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 2
    x = Decimal(123.456)
    print(float(x))  # 123.456 - as expected
    y = Decimal(457.789)
    print(float(y))  # 457.789 - as expected
    z = x + y
    print(float(z))  # 580.0, not 581.245
```
In this example, adding 123.456 (1.23456 * 10^2) and 457.789 (4.57789 * 10^2) gives 5.81245 * 10^2. But since we set the precision of the mantissa to 2 digits, the extra digits are rounded off, leaving 5.8 * 10^2, that is, 580.0.
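To see that prec counts significant digits rather than decimal places, compare numbers of different magnitude under the same context (a minimal sketch):

```python
from decimal import Decimal, localcontext

with localcontext() as ctx:
    ctx.prec = 3  # three significant digits, wherever the point falls
    big = Decimal("12345") + 0       # forces rounding through an operation
    small = Decimal("0.0012345") + 0
    print(big)    # 1.23E+4
    print(small)  # 0.00123
```

Note the `+ 0`: construction ignores the context precision, only arithmetic rounds, which is why Decimal(123.456) above kept all its digits.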
If you need to round numbers to a certain number of decimal places, use the round() function for that, not the precision of the Decimal context:
```python
a = Decimal(15100000)
b = Decimal(-2125.234523452345)
c = a + b
print(c)            # 15097874.76547654765499828500
print(round(c, 7))  # 15097874.7654765
```
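For a Decimal, round(c, 7) is equivalent to quantizing to seven decimal places; calling quantize() directly also lets you choose the rounding mode explicitly (a sketch):

```python
from decimal import Decimal, ROUND_HALF_EVEN

c = Decimal("15097874.76547654765499828500")
# Quantize to 7 decimal places with an explicit rounding mode
q = c.quantize(Decimal("0.0000001"), rounding=ROUND_HALF_EVEN)
print(q)  # 15097874.7654765
```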