Sensitivity feedback with Gauss-Newton and 'NONLINEAR_LS'

Hi :wave:
I’m implementing a whole-body distributed MPC for legged systems with acados; see the
paper if you are interested.
I want to use the sensitivity with respect to the initial condition as a feedback gain in a faster loop. However, I saw that the function to get the sensitivity requires a linear cost or an exact Hessian. Is there any way around these limitations? Evaluating the exact Hessian is too slow for my application.

Thanks!

Hi lamatucci :wave:

great that you are using acados for your research!

Unfortunately, there is no way around using an exact Hessian for computing solution sensitivities. This is not a limitation of acados, but rather due to the underlying math: the solution sensitivities are given by the implicit function theorem, which assumes the exact Hessian of the Lagrangian. If you use a Hessian approximation, you might end up with horribly wrong derivatives; in the worst case, they might even have the wrong sign.
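To illustrate on a toy problem (plain NumPy, independent of acados; the problem and all numbers are made up for illustration): for a scalar parametric least-squares problem, the sensitivity from the implicit function theorem with the exact Hessian matches a finite-difference check, while the Gauss-Newton Hessian gives a clearly wrong derivative.

```python
import numpy as np

# Toy parametric least-squares NLP:  min_x 0.5*(x - p)^2 + 0.5*(x^2 - a)^2
# At the solution, grad(x*, p) = 0, so by the implicit function theorem
#   dx*/dp = -H^{-1} * d(grad)/dp,   with d(grad)/dp = -1 here.
# The exact Hessian keeps the residual-curvature term 2*(x^2 - a);
# Gauss-Newton (J^T J with J = [1, 2x]) drops it.

a = 2.0

def grad(x, p):
    return (x - p) + (x**2 - a) * 2 * x

def hess_exact(x):
    return 1.0 + 4 * x**2 + 2 * (x**2 - a)

def hess_gn(x):
    return 1.0 + 4 * x**2

def solve(p, x0=1.2, iters=50):
    # Full Newton iterations on the stationarity condition
    x = x0
    for _ in range(iters):
        x -= grad(x, p) / hess_exact(x)
    return x

p = 0.0
x_star = solve(p)                        # converges to sqrt(1.5)
sens_exact = 1.0 / hess_exact(x_star)    # IFT with exact Hessian
sens_gn = 1.0 / hess_gn(x_star)          # IFT with Gauss-Newton Hessian

# Finite-difference ground truth for dx*/dp
eps = 1e-6
sens_fd = (solve(p + eps) - solve(p - eps)) / (2 * eps)
```

Here `sens_exact` agrees with the finite-difference value, while `sens_gn` is off by roughly 15%, even though both Hessians solve the NLP itself equally well.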

The only thing I can recommend in order to speed up computations is to use two solvers: One solver with an inexact Hessian for solving the NLP and one exact Hessian solver for computing solution sensitivities. After solving the NLP with the first solver, you can initialize the second solver at the solution, perform one more QP solve and afterwards compute solution sensitivities.
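A schematic sketch of this two-solver idea on the same kind of scalar toy problem (plain Python, not the acados API; the problem is made up): a Gauss-Newton loop stands in for the inexact-Hessian solver, a single exact-Hessian step stands in for the extra QP solve, and the sensitivity is then read off from the exact Hessian.

```python
# Toy problem:  min_x 0.5*(x - p)^2 + 0.5*(x^2 - a)^2
a, p = 2.0, 0.0

def grad(x):
    return (x - p) + (x**2 - a) * 2 * x

def hess_exact(x):
    return 1.0 + 4 * x**2 + 2 * (x**2 - a)

def hess_gn(x):
    # Gauss-Newton Hessian: J^T J with J = [1, 2x] (cheap, no residual curvature)
    return 1.0 + 4 * x**2

# "Solver 1": inexact (Gauss-Newton) iterations to solve the NLP
x = 1.2
for _ in range(100):
    x -= grad(x) / hess_gn(x)

# "Solver 2": initialized at that solution, one exact-Hessian step
# (the analogue of the extra QP solve)
x -= grad(x) / hess_exact(x)

# Solution sensitivity dx*/dp from the exact Hessian (d grad / dp = -1)
sens = 1.0 / hess_exact(x)
```

The point of the split is that the expensive exact Hessian is evaluated only once, at the converged iterate, instead of in every NLP iteration.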

This is also the approach implemented in this example.

Hope this is helpful!
Best, Katrin

Hi and sorry for reviving this rather old post,
but I was also looking for this same information and believe I found something better: calling solver_get with the field set to "K" should return the desired feedback gain matrix.
Under the hood, the function d_ocp_qp_ipm_get_ric_K from the HPIPM library gets called. See also this discussion for further details of the implementation: Extracting feedback-gain matrices from d_ocp_qp_ipm_ws · Issue #76 · giaf/hpipm · GitHub
From what I gather, the calculation is based on the Riccati recursion and therefore does not depend on an exact Hessian.
Please correct me if I am wrong. I intend to use this in the near future and will report back then.
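For context, on a linear-quadratic subproblem this Riccati-based gain is just the finite-horizon LQR gain. A minimal NumPy sketch (system matrices made up for illustration; this is not the HPIPM implementation) that cross-checks the first-stage gain K_0 against a brute-force QP solution:

```python
import numpy as np

# Finite-horizon LQR:  x_{k+1} = A x_k + B u_k,
# cost sum_{k=0}^{N-1} (x_k'Q x_k + u_k'R u_k) + x_N'Q x_N.
# The backward Riccati recursion yields stage gains with u_k* = -K_k x_k.
N = 20
nx, nu = 2, 1
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # hypothetical double integrator
B = np.array([[0.005], [0.1]])
Q = np.eye(nx)
R = np.array([[0.1]])

# Backward Riccati recursion
P = Q.copy()
K_list = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    K_list.append(K)
K0 = K_list[-1]  # gain for the first stage

# Brute-force check: condense the QP onto the stacked inputs U
x0 = np.array([1.0, -0.5])
Phi = np.zeros(((N + 1) * nx, nx))       # x_k = A^k x0 + sum_j A^{k-1-j} B u_j
Gamma = np.zeros(((N + 1) * nx, N * nu))
Ak = np.eye(nx)
for k in range(N + 1):
    Phi[k * nx:(k + 1) * nx, :] = Ak
    Ak = A @ Ak
for k in range(1, N + 1):
    for j in range(k):
        Gamma[k * nx:(k + 1) * nx, j * nu:(j + 1) * nu] = (
            np.linalg.matrix_power(A, k - 1 - j) @ B)

Qbar = np.kron(np.eye(N + 1), Q)
Rbar = np.kron(np.eye(N), R)
H = Gamma.T @ Qbar @ Gamma + Rbar
g = Gamma.T @ Qbar @ Phi @ x0
U = np.linalg.solve(H, -g)

u0_qp = U[:nu]        # optimal first input from the condensed QP
u0_ric = -K0 @ x0     # optimal first input from the Riccati gain
```

For the QP both agree exactly; the caveat discussed below is what K_0 means when the QP is only a linearization of an NLP.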

Hi :waving_hand:

the K matrix that you get from HPIPM suffers from the same problem. Indeed, this K will be (almost) the same as what you get from the function eval_solution_sensitivity (the K from HPIPM is based on the previous linearization, while eval_solution_sensitivity performs an extra linearization at the current iterate; this shouldn’t make much of a difference if the iterates have converged).

We have, however, worked a lot on the solution sensitivities recently, and the two-solver approach as implemented in these examples is now a lot faster than before (a publication with more details is coming soon :rocket:)

Looking forward to your results!

Best, Katrin

Thanks for the super quick reply!
Will try out soon and report back.
Looking forward to the new paper.
