Please help me write this in C. It doesn't seem difficult, but I'm not on good terms with math.
- I may be mistaken, but perhaps this task is solved by nesting (a recurrence formula). - Sergey
- Do you happen to still be in school? Or are you in the humanities? Otherwise you would know about the limit of a sequence and about the convergence of this series. The Cauchy convergence criterion: according to it, 1 + 1/2 + 1/4 + 1/8 + ... + 1/(2*n) + ... converges, since |1/(2*n) - 1/(2*n + 2)| = |2/((2*n + 2) * 2*n)| -> 0. The limit of this "sum of the members of an infinite series" seems to be 2, though I'm not sure, too lazy to work it out. And another simple example that will break your logic: 1 + (-1) + 1 + (-1) + ... = 0 - BOPOH
- @BOPOH, the series (-1)^n diverges and has no limit. - dzhioev
- What??? Why is that? UPD: although I may see what you mean: for a given n, the result (-1, 0 or +1) depends on that n. Did I understand you correctly? By the way, my last example does not satisfy the Cauchy condition; it's just "common sense tells me...")) - BOPOH
- I'm talking about the series 1 + (-1) + 1 + (-1) + ... (that is, the series (-1)^n). You wrote that "1 + (-1) + 1 + (-1) + ... = 0". What did you mean by that? - dzhioev
3 answers
Well, look.
To begin with, your sum is expanded incorrectly: for n = 0 we have x^0 / 0!, i.e. 1, not x.
So, the initial value of the term is 1. To get the next term on the i-th iteration, you multiply by x^2 and divide by (2*i - 1) * (2*i). Is the rest clear?
By the way, no exponentiation is needed at all if you do it this way.
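A rough sketch of this in Python (not the C the question asks for; the function name and structure are just my illustration, and the loop stops by the criterion from the question, x^(2n)/(2n)! < eps):

def series_sum(x, eps):
    term = 1.0      # the n = 0 term: x^0 / 0! = 1
    total = 0.0
    i = 0
    while abs(term) >= eps:
        total += term
        i += 1
        # recurrence: next term = previous term * x^2 / ((2*i - 1) * (2*i))
        term *= x * x / ((2 * i - 1) * (2 * i))
    return total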
On the other hand, it is known that for better accuracy the summation should start from the terms that are smallest in absolute value, i.e. it is better to sum from the end. To do this, split the single loop into two (a code sketch follows the list):
- a loop from i = 0, x = 1, while |x| >= eps: increase i, multiply x by x0 * x0 and divide it by (2*i - 1) * (2*i);
- a loop while i >= 0: add x to the sum, divide x by x0 * x0 and multiply it by (2*i - 1) * (2*i), then decrease i.
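The same two passes in Python might look roughly like this (again only a sketch; it assumes x0 != 0, since the backward pass divides by x0 * x0):

def series_sum_backward(x0, eps):
    # pass 1: walk forward until the term falls below eps
    i = 0
    x = 1.0
    while abs(x) >= eps:
        i += 1
        x *= x0 * x0 / ((2 * i - 1) * (2 * i))
    # pass 2: walk back, adding terms from the smallest to the largest
    total = 0.0
    while i >= 0:
        total += x
        # undo one step of the recurrence to recover the previous (larger) term
        x *= (2 * i - 1) * (2 * i) / (x0 * x0)
        i -= 1
    return total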
As correctly noted in the comments, by computing the terms twice (once on the way right and once on the way left) we lose accuracy. The solution is either to memorize the computed terms (for this it would be nice to have an automatically growing container, like std::vector in C++), or to make three passes: one to count the number of terms, a second to compute the terms themselves and store them, and a third to sum them from the end. Instead of the first pass one can, in principle, roughly estimate the number of terms from the exit condition (|x|^n / n! < eps <=> n ln|x| - ln n! < ln eps), using Stirling's formula.
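For example, such an estimate could be sketched in Python like this (my own illustration; it assumes x != 0 and 0 < eps < 1, and uses lgamma for ln n!, which is exactly what Stirling's formula approximates):

import math

def estimate_term_count(x, eps):
    # smallest n with |x|^n / n! < eps, i.e. n*ln|x| - ln(n!) < ln(eps)
    n = 0
    while n * math.log(abs(x)) - math.lgamma(n + 1) >= math.log(eps):
        n += 1
    return n   # the series uses only even powers, so about n/2 terms are actually summed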
- >> for better accuracy of calculations, it is necessary to start summation with the smallest modulo numbers — now that is interesting. Where can I read about this? Knuth? It just seems doubtful somehow: if the large numbers (i.e. the first terms, compared with the last ones) affect the accuracy of the result, why would adding them at the end rather than at the beginning change the accuracy? Or do you mean something like this (for example): 0.004 + 0.004 (0.008) + 0.004 (0.01) + 1 (1.01) = 1.01, versus 1 + 0.004 (1.00) + 0.004 (1.00) + 0.004 (1.00) = 1.00? - BOPOH
- Yes, something like that. You can try running programs for the two algorithms, direct and reverse summation, and compare (a small demonstration follows this comment thread). - gecube
- @VladD, your second algorithm, summing from the end, may be even worse in accuracy, because, as you know, the more operations, the more digits are lost. A good way to avoid this is to save the intermediate members of the sequence into an array, but the question is how much memory that takes. - gecube
- stackoverflow.com/questions/6699066/… - Costantino Rupert
- > for better accuracy of calculations, it is necessary to start summation with the smallest modulo numbers — oooh, thank you) learned something new) and I agree with this statement in principle :) > the more operations, the more digits are lost — this problem can be worked around either with an array or by fully recomputing the next member of the sequence :) so we lose either time or memory, or we have to accept the error :) - IVsevolod
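A tiny demonstration of why the order matters (my own toy example in Python, with numbers chosen so the effect is visible in double precision):

big = 1e16
small = 1.0

left_to_right = (big + small) + small   # each 1.0 is absorbed by the huge partial sum
right_to_left = big + (small + small)   # the small terms are combined first and survive
print(left_to_right == big)             # True:  1e16
print(right_to_left == big)             # False: 1e16 + 2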
@VladD, the comments over there ran out, so I decided to post an example in Python.
# brute-force solution, summing left to right
def check(x = 0, eps = 0):
    sumPart = 1.
    curSum = 0
    n = 1
    while sumPart > eps:
        curSum += sumPart
        sumPart = sumPart * x * x / (n * (n + 1))
        n += 2
    return curSum

# solution via recursion (using @dzhioev's formula), summing right to left
def calc(x = 0, eps = 0, n = 1, mulPart = 1):
    sumPart = 1. * (x * x) / (n * (n + 1))
    stepSum = mulPart * sumPart
    if stepSum < eps:
        return 1
    return 1 + sumPart * calc(x, eps, n + 2, stepSum)

# tail recursion, summing left to right
def tailLRSum(x = 0, eps = 0, n = 1, mulPart = 1, result = 1):
    currentSumPart = 1. * (x * x) / (n * (n + 1))
    stepSum = mulPart * currentSumPart
    if stepSum < eps:
        return result
    return tailLRSum(x, eps, n + 2, stepSum, result + stepSum)

# tail recursion, summing right to left (using a lambda)
def tailRLSum(x = 0, eps = 0, n = 1, mulPart = 1, result = None):
    sumPart = 1. * (x * x) / (n * (n + 1))
    stepSum = mulPart * sumPart
    if result == None:
        result = lambda value: value
    if stepSum < eps:
        return result(0)
    return tailRLSum(x, eps, n + 2, stepSum, lambda value: result(1 + sumPart * value))
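To run all four variants side by side, a driver along these lines can be used (the eps value and the test points are my own choice; if I'm not mistaken, this series is the Taylor expansion of cosh(x), so math.cosh serves as a reference):

import math

eps = 1e-6
for x in (0.5, 1.0, 2.0, 5.0):
    print(x, math.cosh(x), check(x, eps), calc(x, eps), tailLRSum(x, eps), tailRLSum(x, eps))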
As you can see, each method gives its own result.
The first three are almost the same, but the last one behaves strangely: its accuracy is not eps but 10 * eps. That is, if its check is changed to stepSum < eps * 0.1, the accuracy becomes what is required (the output in brackets). And in that case (i.e. with eps * 0.1) its accuracy is even higher than that of the others.
For myself I realized that I don't know how it all works inside. After all (at least to me), the second and fourth variants should work the same way: first the right-hand terms are computed, then the left-hand ones are added. Likewise the first and second: first the left-hand terms are computed, then the right-hand ones. But apparently somewhere (not obvious to me) accuracy is lost, and in the last method something odd is going on altogether...
As for arbitrary tail recursion — give me an example and I'll take a look; otherwise I can't say anything, because I don't know.
Is there anyone who can comment on the results?
- By the way, Python has no tail-call optimization. I gave the example so that it can be ported to languages that do have it. - BOPOH
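A quick way to see this limitation (a toy example of mine, not from the answer above): even a perfectly tail-recursive function eventually hits Python's recursion limit, because the interpreter does not eliminate tail calls.

def count_down(n):
    # tail-recursive on paper, but every call still takes a stack frame in Python
    return 0 if n == 0 else count_down(n - 1)

print(count_down(500))    # fine
# count_down(10**6)       # would raise RecursionError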
Method of nesting procedures
- So as not to clutter the answer above, I'll continue here: @dzhioev, my dear, I did not build the abstraction around this point for nothing. Look at the formula and work out for yourself what your recursive function actually computes. - Sergey
- @VladD, if you think a 1 gets added there, then you don't understand the formula — look for yourself, there is NO 1 there. - Sergey
- I don't know why I'm writing this, because I'm sure you are wrong. I'll write you a recursive function for the computation (recursion, by the way, only hurts here, since it uses O(n) memory, unlike the iterative computation), without the stopping condition, and you write me the condition: double f(double x, int n) { if (/* the condition you must write */) { return 0.0; } else { return 1.0 + x * x / (2 * n - 1) / (2 * n) * f(x, n + 1); } } Go ahead. - dzhioev
- x * x / (2 * n - 1) / (2 * n) < E. And for some reason it seems to me that you have an error in the formula, since the formula is usually split at the plus sign. The function header is also missing the accuracy variable (E) to compare against. - Sergey
- @Sergey: This is wrong. The question clearly states the stopping criterion: x^(2n) / (2n)! < eps (a sketch of how this condition plugs into the recursion is below). - VladD
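If I read this thread correctly, the condition dzhioev asks for can be expressed by threading the current term through the recursion. A sketch in Python rather than C (the extra term parameter is my own addition; it carries the current value of x^(2n) / (2n)! so that the criterion from the question can be checked):

def f(x, eps, n=1, term=1.0):
    # term is x^(2(n-1)) / (2(n-1))!, i.e. the factor multiplying this level's "1 + ..."
    if term < eps:                  # the question's criterion: stop once the term is below eps
        return 0.0
    ratio = x * x / ((2 * n - 1) * (2 * n))
    return 1.0 + ratio * f(x, eps, n + 1, term * ratio)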