Question 1:

Tell me, do the C++ headers <math.h> and <cmath> differ in any way (apart from the new functions)?

Question 2:

Sometimes there are problems with computations involving nearly equal values, for example exp(-(log(x) - a)*(log(x) - a)). Eventually even long double stops helping, and you have to introduce scaling factors to recover accuracy, which is not very nice.

Could it be that exp and log are implemented in software rather than by the FPU? How do people solve such problems?
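For illustration (not part of the original question), here is a minimal sketch of where the accuracy disappears: once |log(x) - a| exceeds roughly 27, the double result of exp(-(log(x) - a)^2) underflows to zero, and keeping the quantity in log form is the usual workaround. The sample values of x and the centre a are arbitrary.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double a = 0.0;                        // arbitrary centre, for illustration only
        const double xs[] = {2.0, 1e6, 1e12, 1e15};
        for (double x : xs) {
            double t = std::log(x) - a;
            double v = std::exp(-t * t);             // underflows to 0 once t*t exceeds ~745
            double log_v = -t * t;                   // the same quantity kept in log form survives
            std::printf("x = %-8g t = %-10.4g exp(-t*t) = %-12g log value = %g\n",
                        x, t, v, log_v);
        }
    }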

  • For floating-point numbers you can use ready-made libraries, boost::multiprecision::cpp_float for example. - acade
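If you go the ready-made library route from the comment, a sketch of what it looks like is below. Note that the type in current Boost.Multiprecision is spelled cpp_dec_float (the comment's cpp_float is presumably shorthand for it), and the example values are arbitrary.

    #include <boost/multiprecision/cpp_dec_float.hpp>
    #include <iomanip>
    #include <iostream>

    int main() {
        using boost::multiprecision::cpp_dec_float_50;   // roughly 50 decimal digits
        cpp_dec_float_50 x = 1e12, a = 0;
        cpp_dec_float_50 t = log(x) - a;                 // exp/log overloads are found via ADL
        cpp_dec_float_50 v = exp(-t * t);                // ~1e-332: no underflow at this precision
        std::cout << std::setprecision(30) << v << '\n';
    }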

2 answers

<cmath> is by and large the same as <math.h>, only wrapped into namespace std. Or, looking at it the other way, <math.h> is the <cmath> functions pulled out of std...
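A tiny illustration of the practical difference (just a sketch): <cmath> guarantees the names in std and adds float/long double overloads, while <math.h> guarantees them in the global namespace; on most implementations both spellings compile either way.

    #include <cmath>     // guarantees std::sqrt, std::exp, ...
    #include <math.h>    // guarantees ::sqrt, ::exp, ...

    int main() {
        double a = std::sqrt(2.0);        // the <cmath> spelling
        double b = ::sqrt(2.0);           // the <math.h> spelling
        float  c = std::sqrt(2.0f);       // <cmath> adds the float overload
        return (a == b && c > 0) ? 0 : 1; // same values, only the namespaces differ
    }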

Problems with computation accuracy come from the limited precision of floating-point representations. Library functions were "program-written" (implemented in software) mostly back in the days of the 80386, when a math coprocessor was not yet on every machine - it was not a cheap pleasure. But that is not the point: no matter how hard you try, you cannot get around the laws of mathematics, and for calculations like these you really do have to resort to mathematical transformations - to the point that merely changing the order of summation often changes the result significantly.
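The remark about the order of summation is easy to reproduce; a small sketch (values picked purely for illustration): adding many small terms one by one to a huge value loses them all, while accumulating the small terms first keeps them.

    #include <cstdio>

    int main() {
        const double big = 1e16;                          // ulp(1e16) is 2, so +0.5 rounds away
        double forward = big;
        for (int i = 0; i < 1000; ++i) forward += 0.5;    // every addition rounds back to 1e16

        double small_sum = 0.0;
        for (int i = 0; i < 1000; ++i) small_sum += 0.5;  // 500, accumulated exactly
        double reverse = big + small_sum;                 // 10000000000000500

        std::printf("forward: %.1f\nreverse: %.1f\n", forward, reverse);  // differ by 500
    }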

By the way, in some compilers long double is still just a plain double - Visual C++, for example. Check what you have there...
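A quick way to check this on any given compiler, using only standard macros (a sketch): on Visual C++ the long double figures below typically coincide with double (53-bit mantissa), while GCC/Clang on x86 usually report the 80-bit x87 format (64-bit mantissa).

    #include <cfloat>
    #include <cstdio>

    int main() {
        std::printf("sizeof(long double) = %zu\n", sizeof(long double));
        std::printf("mantissa bits: double = %d, long double = %d\n", DBL_MANT_DIG, LDBL_MANT_DIG);
        std::printf("max exponent:  double = %d, long double = %d\n", DBL_MAX_EXP, LDBL_MAX_EXP);
    }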

  • Visual C++ :) And how do I make it actually use long double? It is the native type for the FPU - Zhihar
  • I think that if the compiler does not provide for it, there is no way... - Harry
  • I looked in MSDN - it seems long double is treated as a plain double - Zhihar

For arbitrary-precision arithmetic, the classic of the genre is GMP. But for long double to genuinely not be enough, the task has to be VERY specific - especially now that the x86 platform has become 64-bit.
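A minimal sketch of the GMP route (using GMP's C interface; link with -lgmp). The values are arbitrary and only show that, with a 256-bit mantissa, a quantity like 1e-30 survives an operation that would wipe it out completely in double.

    #include <gmp.h>
    #include <cstdio>

    int main() {
        mpf_t x, y;
        mpf_init2(x, 256);              /* 256-bit mantissa (~77 decimal digits) */
        mpf_init2(y, 256);
        mpf_set_str(x, "1e-30", 10);
        mpf_add_ui(y, x, 1);            /* y = 1 + 1e-30: in double this would collapse to 1 */
        mpf_sub_ui(y, y, 1);            /* back to 1e-30 without losing it */
        gmp_printf("%.40Fe\n", y);
        mpf_clear(x);
        mpf_clear(y);
        return 0;
    }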

  • Fitting coefficients for certain distributions: inverse powers, logarithms, exponentials, divisions, all of it not far from zero, and you cannot shove a corrective scale factor in everywhere - so the accuracy goes away - Zhihar
  • @Zhihar In engineering practice this practically never comes up, and where it does, a plain float is more than enough, let alone double. That means you are doing science, and at a level beyond world standards, which is in itself strange. There are maybe two places in the world where such accuracy (exceeding that of double on a 64-bit platform) is required. Or you are digging in the wrong direction. - pepsicoca1
  • Not quite - I just need to fit 6 coefficients by least squares for a lognormal distribution over 1000 points :) When I implemented it head-on, it turned out I did not have enough accuracy on the 3rd and 6th (linear) coefficients; since they are linear, I introduced an x1000 scale and things improved, but coefficients 1, 2, 5, 6 sit under the exponent and such tricks no longer work - sometimes you end up with exp(-1000) + ... + exp(-10), which simply kills the accuracy. P.S. Strictly speaking this is a research task; on an industrial scale an accuracy of 2-3 digits satisfies everyone - Zhihar
  • Of course, you know better what is behind your model, but it is still suspicious. Most likely you have a mistake somewhere, because double on a 64-bit platform gives VERY high accuracy. Or maybe your model is unstable, so that small changes in the data lead to large changes in the coefficients - but then increasing the precision will not help either, it will still run out of the bit grid. - pepsicoca1
  • Usually normalization saves the day (a sketch of that trick follows after these comments). But there is also data so bad that normalization will not help. - Mikhailo
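As a concrete illustration of that normalization (a sketch, not code from the thread): for sums like exp(-1000) + ... + exp(-10) the usual move is the log-sum-exp trick, factoring out the largest exponent so that nothing underflows.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // log(exp(a1) + ... + exp(an)) computed without underflow:
    // factor out the maximum so every argument passed to exp() is <= 0 and moderate.
    double log_sum_exp(const std::vector<double>& a) {
        double m = *std::max_element(a.begin(), a.end());
        double s = 0.0;
        for (double v : a) s += std::exp(v - m);
        return m + std::log(s);
    }

    int main() {
        std::vector<double> a = {-1000.0, -1001.0, -1002.0};

        double naive = 0.0;
        for (double v : a) naive += std::exp(v);               // every term underflows to 0

        std::printf("naive      : %g\n", std::log(naive));     // -inf
        std::printf("normalized : %.6f\n", log_sum_exp(a));    // about -999.592394
    }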