Hi, I’m using the Python interface for NMPC. My problem has two nonlinear inequality constraints, one of which is always active (in this setting), while the other becomes active at some point during the simulation.
Using qpOASES, once the system approaches the activation of the second constraint, the solver seems to get stuck in a local minimum and runs out of iterations (regardless of the maximum number I set). If I let the simulation continue, the second constraint is simply not satisfied.
Using HPIPM (for which I have to use RTI, since for some reason I have never been able to use the full SQP on my problem: it gets stuck in the first iteration, even with an optimal initialization), the second constraint is satisfied, but I get a lot of spikes everywhere. I suspect this is because the interior-point method does not work well with an always-active constraint and RTI.
Now things get interesting: if I decrease the number of steps in the horizon, qpOASES works almost perfectly. Note that this is related only to the number of steps, not to the time length of the horizon, as increasing the time step accordingly yields the same results.
This seems to indicate that while a solution exists, qpOASES is not able to find it once the number of steps N is larger than some threshold (e.g. it fails for 40 but works for 20), even if the effective length of the horizon is the same. I understand that a larger number of steps means more linearization points, so I’m introducing more opportunities for errors in the nonlinear optimization, but is there something I can do to improve the behaviour of acados? (Condensing does not seem to make any difference.)
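For context, the settings discussed here (QP solver choice, condensing, and the number of shooting intervals N versus the horizon length) are configured roughly like this in the acados Python interface. This is only a sketch: the model, cost, and constraint setup are omitted, and exact attribute names may differ slightly between acados versions.

```python
from acados_template import AcadosOcp, AcadosOcpSolver

ocp = AcadosOcp()
# ... model, cost and constraint definitions omitted ...

# QP solver choice: full condensing with qpOASES,
# or partial condensing with HPIPM.
ocp.solver_options.qp_solver = 'FULL_CONDENSING_QPOASES'
# ocp.solver_options.qp_solver = 'PARTIAL_CONDENSING_HPIPM'
# ocp.solver_options.qp_solver_cond_N = 10  # partial-condensing horizon

# Horizon discretization: N shooting intervals over tf seconds.
# Halving N while keeping tf fixed doubles the time step.
ocp.dims.N = 20
ocp.solver_options.tf = 2.0

solver = AcadosOcpSolver(ocp, json_file='acados_ocp.json')
```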
Hi Jonathan
the problem is still present, although I have partially avoided it by lowering my control frequency, which, for the same horizon, gives me fewer linearization points.
I have just tried it, and it solves the remaining problems I still had with the lower control frequency. Thank you very much!
I did not think it would make any difference, since I had noticed that even in “normal” conditions the solver was almost always performing the maximum number of QP steps, so I thought that was normal, but that was it!
Let me first say that I’m using the statistics provided by the print_statistics() method, with a maximum of 100 NLP iterations.
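For reference, a minimal sketch of how this setup looks in the acados Python interface (assuming an AcadosOcp object `ocp` that has been configured elsewhere; the 100-iteration cap is set through the solver options):

```python
from acados_template import AcadosOcpSolver

# assumes `ocp` is an AcadosOcp configured elsewhere
ocp.solver_options.nlp_solver_type = 'SQP'
ocp.solver_options.nlp_solver_max_iter = 100

solver = AcadosOcpSolver(ocp, json_file='acados_ocp.json')
status = solver.solve()

# prints per-iteration statistics, including residuals and QP iterations
solver.print_statistics()
```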
With HPIPM, by “stuck” I mean that from the very first iteration, at each internal step, the solver reports a residual of the complementarity condition of 1 (or 0.1 for a different simulation that exhibits the same problem); after 100 steps it then runs out of iterations. The solution that results if I still accept the last step is not completely absurd, but it is not usable.
What I meant by spikes is that there are large variations in the resulting control inputs, which result in discontinuous velocity and position profiles. Since in my problem the system is always near the activation of the constraints (by design), I guess that RTI is not able to “smooth” the large inputs (since there are no additional SQP steps), which I probably get because HPIPM is based on an interior-point method (I guess that explains it, since the barrier cost explodes near the bounds, but I’m not an expert).
It really seems like the problem is not well suited for HPIPM.
However, the residual should really be smaller if you initialize with an optimal solution.
More precisely, initialize all variables ‘x’, ‘u’, ‘pi’, ‘lam’, ‘t’ (see the Interfaces Overview in the acados documentation).
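In the Python interface this warm start can be sketched as below. The variable names (`solver`, `N`, `x_opt`, `u_opt`, `pi_opt`, `lam_opt`, `t_opt`) are placeholders for an already-constructed AcadosOcpSolver and a previously computed optimal trajectory, not identifiers from the original post:

```python
# Warm-start every primal and dual variable before calling solve().
# x: states (stages 0..N), u: controls and pi: equality multipliers
# (stages 0..N-1), lam/t: inequality multipliers and slacks.
for i in range(N + 1):
    solver.set(i, 'x', x_opt[i])
    solver.set(i, 'lam', lam_opt[i])
    solver.set(i, 't', t_opt[i])
    if i < N:
        solver.set(i, 'u', u_opt[i])
        solver.set(i, 'pi', pi_opt[i])

status = solver.solve()
```

Initializing only ‘x’ and ‘u’ leaves the multipliers at their defaults, which can keep the interior-point residuals large even when the primal guess is optimal.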
Anyway, I guess you are happy with how it works now with qpOASES.
Yes, that’s what I figured as well.
For the record, I had only initialized x and u from the optimal solution, so that may be why it still didn’t help much.
As you say, for now I’m happy with qpOASES.
Thanks again for your help and the work you do on acados.