Convergence Problems of SQP-Solver

Hi all,
we are solving an NLP with acados using a LINEAR_LS cost function. The model is a minimum lap time problem (MLTP) for a given race track, including constraints describing the vehicle and thermal dynamics of a race car with an electric powertrain. The race track is fed into the solver via parameters that describe the track curvature at every discretization point.
Solving the problem for one lap works fine on different tracks. But solving it for multiple subsequent laps (i.e., an entire race, including the temperature increase of the electric powertrain components) causes trouble: for some tracks, the solver converges to a solution in fewer than 60 SQP iterations; for other tracks, it does not find a solution and instead hits the maximum number of NLP iterations (1000). If we initialize the solver for the same problem that did not converge with the intermediate iterate we obtain after only 100 SQP iterations, the solver converges within approx. 40 SQP iterations.
It looks like the solver gets “lost” somewhere. Can this problem be caused by any step size parameter we need to set differently? Or are there different options to prevent the solver from getting lost and to solve the multiple-laps problem within one run?
Here are some solver settings we have used: HPIPM as QP solver with partial condensing, Gauss-Newton Hessian approximation, Q = 0 (cost on states), R = 1e-3 (cost on the driving/braking force input), P = 1e4 (cost on the algebraic variable describing the time delta between two subsequent discretization points). We use the IRK integrator. A single lap has approx. 200 discretization points with a step size of around 8 meters.
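For reference, these settings correspond roughly to the following options fragment of the acados Python interface (a sketch only; it assumes an `AcadosOcp` object `ocp` with the MLTP model already set up, and exact option names may depend on the acados version):

```python
# Sketch of the solver options described above (acados Python interface).
# Assumes `ocp` is an AcadosOcp with model and LINEAR_LS cost defined.
ocp.solver_options.qp_solver = "PARTIAL_CONDENSING_HPIPM"
ocp.solver_options.hessian_approx = "GAUSS_NEWTON"
ocp.solver_options.integrator_type = "IRK"
ocp.solver_options.nlp_solver_type = "SQP"
ocp.solver_options.nlp_solver_max_iter = 1000
```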
Thanks a lot in advance for your support,

Maximilian Bayerlein


Hi @maxb,

it can definitely happen that the solver is “lost” in some region and initialization plays a crucial role.
There is a preliminary implementation of a merit-function-based line search, as well as the option to multiply the step with a fixed value.
If the residuals are relatively close to the tolerance in your problematic problem instances, a step size reduction might help.
Note that acados does no shifting internally, it just takes the values that are in memory from the previous run as initialization.
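For completeness, the two options mentioned (a fixed step scaling and the preliminary merit-function line search) are set roughly like this in the Python interface; take this as a sketch, since option names and availability differ between acados versions:

```python
# Fixed reduction of the full SQP step (value in (0, 1]):
ocp.solver_options.nlp_solver_step_length = 0.5
# Preliminary merit-function-based line search, if available in your
# acados version:
ocp.solver_options.globalization = "MERIT_BACKTRACKING"
```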

I am interested in investigating different, systematic initialization approaches for the acados SQP method and its QP solver.

Just some follow up questions to understand your formulation better:

  • when you write multiple-lap problem: do you mean a closed loop simulation with multiple laps, or an OCP solver that has a horizon which comprises multiple laps?
  • I guess you formulate the cost using the external cost module?



Hey @FreyJo,
thanks for your quick answer!

The problem is an OCP, whose optimization horizon comprises multiple laps.
Since the external cost module currently does not support algebraic variables, we use the LINEAR_LS cost module.
We have experimented with varying the NLP step length (nlp_solver_step_length) and the maximum number of QP iterations (qp_solver_iter_max), as you suggested. Judging by the residuals, which we believe we found in the solver statistics via acados_solver.get_stats('statistics') as mentioned here: Parameteric OCP with path constraint and LS cost, the solver oscillations can be reduced, but the solver still does not converge for some problems (at least not within an NLP iteration range in which the same problem was solved for other race tracks).
What do you think is the best way to try different solver initializations? You mention that acados reuses the previous solution stored in memory as initialization. Since we currently do not run a closed-loop simulation but solve one OCP spanning multiple laps, acados cannot reuse a previous solution. Is the best approach here to set states and controls to a reasonably smart guess?
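For concreteness, a simple warm start along those lines could look like this (the dimensions nx, nu and the endpoint guesses are placeholders for our actual problem):

```python
import numpy as np

N = 200                      # discretization points (one lap, as above)
nx, nu = 6, 2                # placeholder state/control dimensions

# Simple warm start: interpolate the state guess linearly between an
# assumed initial and terminal state, set controls to zero.
x0_guess = np.zeros(nx)
xN_guess = np.zeros(nx)
x_guess = np.linspace(x0_guess, xN_guess, N + 1)   # shape (N+1, nx)
u_guess = np.zeros((N, nu))

# Feeding the guess into the solver (acados Python interface):
# for i in range(N + 1):
#     acados_solver.set(i, "x", x_guess[i])
# for i in range(N):
#     acados_solver.set(i, "u", u_guess[i])
```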

Is there any documentation on the outputs of acados_solver.get_stats('statistics')? This might help us understand what is going on inside the solver.



Hi Maximilian,

To get a better idea of what you get from get_stats("statistics"), it will hopefully help to call (or look at) this function, which wraps the get_stats call and prints the output as a table:
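In recent versions of the Python interface, the usage would be along these lines (a sketch; `acados_solver` is an `AcadosOcpSolver`, and the exact column layout of the statistics matrix depends on the acados version):

```python
# After a solve, print the per-iteration table:
status = acados_solver.solve()
acados_solver.print_statistics()

# The raw data behind the table:
stats = acados_solver.get_stats("statistics")
# Rows typically contain the iteration index, the four KKT residuals
# (stationarity, equality, inequality, complementarity) and QP solver
# info (status, iterations); see the printed header for your version.
```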

In the first call everything will be initialized with zeros by default.
I think indeed the best you can do is to initialize the states and controls with a smart guess.
Note that you can also initialize the multipliers and slack variables, I am not sure if you need that though.
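In the Python interface, these are set per stage just like x and u; the field names and dimensions below are placeholders for illustration:

```python
import numpy as np

N, nx = 200, 6                  # placeholder dimensions
pi_guess = np.zeros((N, nx))    # multipliers of the dynamics constraints

# for i in range(N):
#     acados_solver.set(i, "pi", pi_guess[i])
# Inequality multipliers ("lam") and slack values ("sl", "su") can be
# initialized the same way via acados_solver.set(i, field, value).
```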

What do you think is the best way to check different solver initializations?

I am not sure what you mean by that.
If you have a known optimal solution and plug it in as initial guess, acados should terminate right away and return this initialization, just as a sanity check.


Hello @FreyJo ,

thank you for all the help!

During the last weeks we were able to fix some points that hindered our problem’s convergence. The issue was a second-order (circular) constraint that kept the solver from converging. We replaced it with a set of linear constraints approximating the circular one. We had used the circular constraint to limit the combined vehicle acceleration in x and y direction by a fixed bound, comparable to an L2-norm. The change to an L1-norm-like polytopic formulation stabilized our problem. Does this make sense to you, given that these linear constraints do not need to be linearized before being fed into the QP solver?
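To sketch the idea (a circumscribed polygon built from tangent half-planes; m and a_max are placeholders, not our actual values):

```python
import numpy as np

# Replace the circular constraint ax^2 + ay^2 <= a_max^2 by m tangent
# half-planes: cos(theta_k)*ax + sin(theta_k)*ay <= a_max.
# This is an outer (slightly relaxed) polytopic approximation, and each
# row is already linear, so the QP solver needs no linearization.
a_max = 1.0
m = 8
theta = 2.0 * np.pi * np.arange(m) / m
C = np.column_stack([np.cos(theta), np.sin(theta)])  # shape (m, 2)

# Every point of the original disc satisfies C @ [ax, ay] <= a_max,
# e.g. a point on the circle itself:
a = a_max * np.array([np.cos(0.3), np.sin(0.3)])
assert np.all(C @ a <= a_max + 1e-12)
```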

In further work, we found that the Gauss-Newton approximated Hessian had problems with a case in which only slight changes of the optimization variables were necessary to converge to an optimal solution. For the same problem, the exact Hessian did a great job, but it also requires more SQP iterations for convergence in cases where fewer constraints are active. Can you give hints on which solver option should be preferred for which types of optimization problems?

Since our optimization problem is physics-based, we are interested in the meaning of the residuals. We couldn’t find a detailed explanation of them in your paper on acados. The paper on HPIPM assumes a residual function but does not further explain how it is used in the solver. Do the residuals in acados represent a mean residual over all discretization points? Can you give us a hint where to find the math behind the computation of the stationarity, equality, inequality, and complementarity residuals in acados?


Hey Maximilian,

Yes, that makes sense.

In general, the Gauss-Newton Hessian approximation is known to be good, i.e. close to the exact Hessian, if the second derivatives are small, meaning the problem is close to linear, or if the residuals are small and you already have a good fit, see:
This seems not completely in line with what you reported.

But it is hard to quantify if

only slight changes in the optimization variables were necessary to converge to an optimal solution.
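For reference, the Gauss-Newton argument above can be written out (standard least-squares notation, not specific to acados):

```latex
% For a least-squares cost f(x) = \tfrac{1}{2}\|r(x)\|^2 with Jacobian
% J(x) = \partial r / \partial x, the exact Hessian is
\nabla^2 f(x) = J(x)^\top J(x) + \sum_i r_i(x)\, \nabla^2 r_i(x)
% Gauss-Newton keeps only J^\top J and drops the second term, which is
% small whenever the residuals r_i are small (good fit) or the
% curvatures \nabla^2 r_i are small (nearly linear model).
```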

The residuals are computed as the infinity norms over the corresponding residuals at all stages; they are computed here:
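A sketch of the quantities involved, in standard KKT notation (my notation here, not necessarily the exact definitions used in the acados source):

```latex
% For an NLP  min_x f(x)  s.t.  g(x) = 0,  h(x) <= 0,
% with multipliers \lambda (equalities) and \mu \ge 0 (inequalities):
\begin{align*}
  r_{\mathrm{stat}} &= \bigl\| \nabla_x f(x) + \nabla_x g(x)\,\lambda
                        + \nabla_x h(x)\,\mu \bigr\|_\infty \\
  r_{\mathrm{eq}}   &= \| g(x) \|_\infty \\
  r_{\mathrm{ineq}} &= \| \max(0,\, h(x)) \|_\infty \\
  r_{\mathrm{comp}} &= \| \mu \odot h(x) \|_\infty
\end{align*}
% with the norms taken over the stacked residuals of all stages.
```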


Hey Jonathan,

thanks for all your help!
We now have our problem under control. Especially your note about the good fit for the Gauss-Newton Hessian approximation helped us a lot. Your hint regarding the residuals was also very useful, since we couldn’t find any information about this in the literature.

