I have the following code in Matlab:

    fileID = fopen('input.txt','r');
    formatSpec = '%f';
    y = fscanf(fileID,formatSpec);
    fclose(fileID);
    step = 0.1;
    x0 = step;
    xn = length(y)*step;
    x = x0:step:xn;
    fitfunc = 'a + exp(x/b)+x^2/3+x';
    startPoints = [-1 -1];
    [f2, f2_info] = fit(x', y, fitfunc, 'StartPoint', startPoints)
    disp('Coefficients values: ');
    coeffvalues(f2)
    disp('Forecast value at 600 s: ');
    f2(600)

This code fits a curve to the data and predicts the value of the process at 600 seconds.

The task is as follows: this code needs to run on a device, so I need to convert it into C++ code. I see several options:

  1. Look for an automatic converter from Matlab code to C++ code.
  2. Find a ready-made C++ fitting library and rewrite the code using it.
  3. Program the iterative fitting process myself, which means immersing myself in the theory.

Option 1: I tried Matlab Coder, which automatically converts the code to C. Unfortunately, Matlab Coder does not support the fit function.

Option 2 is ruled out, because third-party libraries need a lot of memory, and the device does not have enough for such extravagances.

Only option 3 remains, but I don't understand the theory. I dug into the Matlab code (I looked at the source of the fit function), but it is written in a very complex object-oriented style, and I don't have the competence to understand it.

Then I asked a question on the English-language Stack Overflow, where people suggested that the least squares method (OLS) is most likely used.

I googled the English-language Internet, but mostly analytical solutions come up, and only for polynomials. Analytical solutions have a drawback: if you change the fitting function, you have to recalculate the derivatives by hand each time and put them into the code. So I need a numerical method, so that interpolation with various functions can be tested quickly.
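For example, the derivative with respect to a coefficient can be taken numerically by finite differences; a minimal C++ sketch for my fitfunc (the helper names and the step h are just illustrative choices):

    #include <cmath>

    // Model: y = a + exp(x/b) + x^2/3 + x, coefficients c[0] = a, c[1] = b.
    double model(double x, const double c[2]) {
        return c[0] + std::exp(x / c[1]) + x * x / 3.0 + x;
    }

    // Partial derivative of the model with respect to coefficient k, taken
    // numerically by central differences: changing the model then requires
    // no recalculation of derivative formulas by hand.
    double dmodel(double x, const double c[2], int k, double h = 1e-6) {
        double cp[2] = { c[0], c[1] }, cm[2] = { c[0], c[1] };
        cp[k] += h;
        cm[k] -= h;
        return (model(x, cp) - model(x, cm)) / (2.0 * h);
    }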

Then, it seems, I found what I need: the Levenberg-Marquardt method. I read the articles on Wikipedia and on machinelearning.ru, but I can't figure out how to apply it in practice.

Maybe there are knowledgeable people here who could briefly describe the Levenberg-Marquardt algorithm for my function fitfunc = 'a + exp(x/b) + x^2/3 + x'? Or suggest some other numerical method for finding the coefficients of a nonlinear fitting function.

  • There is another option: Matlab can wrap its code in a library that can be called from C++. The question is whether the Matlab DLLs will run on the platform you need in the end. - Andrey Golikov
  • Strictly speaking, you need to understand whether you want interpolation or approximation. The difference is in how the function behaves at the known points: interpolation passes strictly through them, but between the points it can bend in waves; approximation passes as close as possible to the points, but not necessarily through them. Approximation is solved by the least squares method; it requires solving a system of equations, which is implemented quite well by Newton's method with the choice of the pivot element - a typical lab assignment. - Andrey Golikov
  • Andrey, thanks for the answer. In this case, I need approximation. As for Newton's method with the choice of the pivot element: googling, I found the Gauss method with the choice of the pivot element - did you mean that? - Levandos Portos

1 answer

Just in case, I'll write out the formulation of the problem and where it comes from, using the two-dimensional case as an example:

  1. You have measurement data Xi, Yi - points taken during calibration and the like.
  2. You have a set of basis functions Fn with which you want to represent the desired dependence; the set of functions is required to be orthogonal. The simplest sets are X^0, X^1, X^2, ... or sin^n(X), cos^n(X).

The task: choose coefficients C0, C1, ..., Cn such that the function Sum(Cn*Fn(Xi)) passes as close as possible to Yi, for all i.

Solution: you substitute all the known Xi into your system, subtract Yi from each value obtained, square it, add everything up, and minimize this sum:

    (Sum(Cn*Fn(X0)) - Y0)^2 + (Sum(Cn*Fn(X1)) - Y1)^2 + ... + (Sum(Cn*Fn(Xi)) - Yi)^2 -> min
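For the asker's concrete model, this sum of squared deviations looks like the following (a C++ sketch; the function name sum_sq is just an illustrative choice):

    #include <cmath>

    // Sum of squared deviations S(a, b) for the model from the question,
    // f(x) = a + exp(x/b) + x^2/3 + x: the quantity the fit minimizes.
    double sum_sq(const double* x, const double* y, int n, double a, double b) {
        double s = 0.0;
        for (int i = 0; i < n; ++i) {
            double r = a + std::exp(x[i] / b) + x[i] * x[i] / 3.0 + x[i] - y[i];
            s += r * r;  // squared deviation at point i
        }
        return s;
    }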

To minimize it, you take the partial derivatives with respect to each Cn and set them all equal to zero; this gives you a system of equations, one per coefficient, and solving it yields the desired Cn.
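For a model that is linear in the coefficients, zeroing those derivatives reduces to the so-called normal equations M*C = V, with M[j][k] = Sum_i(Fj(Xi)*Fk(Xi)) and V[j] = Sum_i(Fj(Xi)*Yi). A minimal C++ sketch of assembling them, assuming the simple polynomial basis Fn(X) = X^n mentioned above:

    #include <cstddef>
    #include <vector>

    // Assemble the normal equations M*C = V for a linear least-squares fit
    // with the polynomial basis Fn(x) = x^n (an illustrative assumption):
    // M[j][k] = Sum_i Fj(x_i)*Fk(x_i),  V[j] = Sum_i Fj(x_i)*y_i.
    void normal_equations(const std::vector<double>& x,
                          const std::vector<double>& y,
                          int nbasis,
                          std::vector<std::vector<double>>& M,
                          std::vector<double>& V) {
        M.assign(nbasis, std::vector<double>(nbasis, 0.0));
        V.assign(nbasis, 0.0);
        for (std::size_t i = 0; i < x.size(); ++i) {
            std::vector<double> f(nbasis);
            double p = 1.0;
            for (int n = 0; n < nbasis; ++n) { f[n] = p; p *= x[i]; }  // Fn(x) = x^n
            for (int j = 0; j < nbasis; ++j) {
                for (int k = 0; k < nbasis; ++k) M[j][k] += f[j] * f[k];
                V[j] += f[j] * y[i];
            }
        }
    }

The resulting system is then handed to whatever linear solver you choose, which is where the next paragraph picks up.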

The system of equations can be solved in any way you like, but it is often ill-conditioned, especially when the data were taken with some inaccuracy, so it is worth choosing methods that can cope with that. The Newton (Gauss-Newton) method for solving a system of equations is suitable. The "method with the choice of the pivot element" turned out to be simply the Gauss method; it is also worth trying, but it seems to cope worse with ill-conditioned systems. Wikipedia says that the Levenberg-Marquardt method you originally chose is an extension of the Gauss-Newton method.
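Since the question asked specifically about Levenberg-Marquardt, here is a minimal C++ sketch of one common textbook variant for the asker's two-parameter model, with a numerical Jacobian so the model can be swapped freely (the damping constants, tolerances and update rule are ordinary illustrative choices, not what Matlab's fit does internally):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Model f(x; a, b) = a + exp(x/b) + x^2/3 + x, parameters p[0]=a, p[1]=b.
    double model(double x, const double p[2]) {
        return p[0] + std::exp(x / p[1]) + x * x / 3.0 + x;
    }

    // One Levenberg-Marquardt variant for this 2-parameter model. The
    // Jacobian is taken by central differences, so swapping the model
    // requires no hand-derived derivative formulas.
    void lm_fit(const std::vector<double>& x, const std::vector<double>& y,
                double p[2], int max_iter = 100) {
        double lambda = 1e-3;                        // damping parameter
        for (int iter = 0; iter < max_iter; ++iter) {
            // Build J^T J (2x2) and J^T r (2x1) from residuals r_i = f(x_i) - y_i.
            double JTJ[2][2] = {{0, 0}, {0, 0}}, JTr[2] = {0, 0}, sse = 0;
            for (std::size_t i = 0; i < x.size(); ++i) {
                double r = model(x[i], p) - y[i];
                sse += r * r;
                double J[2];
                for (int k = 0; k < 2; ++k) {        // numerical partials df/dp_k
                    double h = 1e-6, pp[2] = {p[0], p[1]}, pm[2] = {p[0], p[1]};
                    pp[k] += h; pm[k] -= h;
                    J[k] = (model(x[i], pp) - model(x[i], pm)) / (2 * h);
                }
                for (int j = 0; j < 2; ++j) {
                    for (int k = 0; k < 2; ++k) JTJ[j][k] += J[j] * J[k];
                    JTr[j] += J[j] * r;
                }
            }
            // Damped system (J^T J + lambda*diag(J^T J)) d = -J^T r,
            // solved directly by Cramer's rule since it is only 2x2.
            double A00 = JTJ[0][0] * (1 + lambda), A11 = JTJ[1][1] * (1 + lambda);
            double A01 = JTJ[0][1], A10 = JTJ[1][0];
            double det = A00 * A11 - A01 * A10;
            if (std::fabs(det) < 1e-30) break;       // singular system, give up
            double d0 = (-JTr[0] * A11 + JTr[1] * A01) / det;
            double d1 = (-JTr[1] * A00 + JTr[0] * A10) / det;
            // Accept the step and relax damping if the error dropped;
            // otherwise reject it and damp harder (more gradient-descent-like).
            double pt[2] = { p[0] + d0, p[1] + d1 }, sse_t = 0;
            for (std::size_t i = 0; i < x.size(); ++i) {
                double r = model(x[i], pt) - y[i];
                sse_t += r * r;
            }
            if (sse_t < sse) { p[0] = pt[0]; p[1] = pt[1]; lambda *= 0.5; }
            else             { lambda *= 2.0; }
            if (std::fabs(d0) + std::fabs(d1) < 1e-10) break;  // converged
        }
    }

After lm_fit returns, the analogue of f2(600) from the question is simply model(600.0, p).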

  • The function that you pass to fit is precisely your set of basis functions, so naturally you have to recalculate all the Sum(Cn*Fn(Xi)). - Andrey Golikov
  • Thanks for the detailed answer; it helped me understand what I actually want. - Levandos Portos
  • In the end, this numerical method still solves a system of equations. Perhaps you know whether there is a method of this kind: I increase one coefficient, the R-squared criterion grows, which means I'm going in the right direction; I decrease the coefficient, the R-squared criterion falls, so we turn around. And in this way simply pick out the best combination of coefficients - is there such a numerical method? - Levandos Portos
  • Well, you have just described one :) Here is the difficulty: you are minimizing the sum-of-squared-deviations function, and this function does not necessarily have a single local minimum on the interval of interest. Roughly, you can take a step to the left, see that the function grows, go to the right instead, and find a minimum. But you cannot claim that, had you gone 10 steps to the left, you would not have found something even smaller than what you found on the right. - Andrey Golikov
  • There are iterative methods: essentially, you keep taking steps in the chosen direction as long as you get a decrease; as soon as you stop getting one, you step back and take half a step in that direction, and so on, until you reach the specified accuracy. But the choice of the starting point is very important: it will lead you to the nearest local minimum, and it is not at all guaranteed that this will be the absolute minimum of the function. You can search for minima over a grid of values, you can apply statistical methods - there are actually a lot of options and methods... - Andrey Golikov
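A rough C++ sketch of the step-halving search described in the last comment (one possible reading of it: the objective f is the sum of squared deviations as a function of the coefficients, and the step, tolerance, and sweep count are arbitrary illustrative values):

    #include <cmath>
    #include <cstddef>
    #include <functional>
    #include <vector>

    // Coordinate search with step halving: along each coefficient, keep
    // stepping while the objective decreases; on an increase, step back,
    // halve the step and reverse direction, until the step is tiny.
    // As noted above, this only finds the local minimum nearest the start.
    void coordinate_search(const std::function<double(const std::vector<double>&)>& f,
                           std::vector<double>& c,
                           double step = 1.0, double tol = 1e-8, int sweeps = 50) {
        double best = f(c);
        for (int s = 0; s < sweeps; ++s) {
            for (std::size_t k = 0; k < c.size(); ++k) {
                double h = step;
                while (std::fabs(h) > tol) {
                    c[k] += h;
                    double cur = f(c);
                    if (cur < best) best = cur;          // improvement: keep going
                    else { c[k] -= h; h = -h / 2; }      // worse: back up, halve, reverse
                }
            }
        }
    }

Called, for instance, as coordinate_search(S, c) with c = {-1, -1} as in the question's startPoints; as the comment warns, the choice of that starting point decides which local minimum it lands in.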