Derivatives of optimal solutions w.r.t. parameters

Hi all,

I’m wondering if there is any way to get the derivative of the optimal solution w.r.t. the parameters (preferably from the Python interface, but the C interface is also fine). The eval_param_sens function seems to be able to calculate the sensitivity w.r.t. the initial condition, but I’m wondering whether a more general setting exists, e.g., when parameters appear in the cost function or system dynamics.

Thanks a lot!

Hi :wave:

We just merged a pull request implementing solution sensitivities with respect to parameters! Note that this feature is currently limited to external costs and discrete dynamics and does not cover parametric constraints.

I recommend you check the example here. Note that the solution sensitivities are only correct if an exact Hessian is used. Whenever you use any form of Hessian approximation or regularization (e.g. a Gauss-Newton Hessian) to solve your OCP, we recommend creating two solvers: one for solving with the approximate Hessian, and one with an exact Hessian that is used only to compute the solution sensitivities. This approach is also taken in the example.
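
For illustration, here is a minimal sketch of the two-solver setup (the option values and file names are just examples; see the linked example for the exact sensitivity evaluation call):

```python
from acados_template import AcadosOcp, AcadosOcpSolver

ocp = AcadosOcp()
# ... set up a model with external cost, discrete dynamics and parameters ...

# solver 1: approximate Hessian, used to solve the OCP
ocp.solver_options.hessian_approx = 'GAUSS_NEWTON'
solver = AcadosOcpSolver(ocp, json_file='acados_ocp.json')

# solver 2: exact Hessian without regularization, used only for sensitivities
# (in practice you may need a distinct model name / code export directory
#  so that the generated code of the two solvers does not clash)
ocp.solver_options.hessian_approx = 'EXACT'
ocp.solver_options.regularize_method = 'NO_REGULARIZE'
sens_solver = AcadosOcpSolver(ocp, json_file='acados_ocp_sens.json')

status = solver.solve()

# copy the solution into the exact-Hessian solver and evaluate the
# sensitivities there, see the linked example for the evaluation call
solver.store_iterate('iterate.json', overwrite=True)
sens_solver.load_iterate('iterate.json')
```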

Best, Katrin

2 Likes

Hi Katrin,

Glad to see this feature! I believe this would be super useful for applications combining MPC and learning, e.g. differentiable MPC.

Thanks for your work. I will have a look into that.

Best,
Fenglong

1 Like

Hi Katrin,

I’ve looked into the example and I’d like to thank you for adding this great feature. Now I’m wondering what the best way would be to build a differentiable OCP layer for a neural network, for example an AcadosOcpLayer that can be integrated into PyTorch and trained on a GPU.

I think it’s possible to extend PyTorch with a custom layer by subclassing torch.autograd.Function and customizing the forward() and backward() functions with acados (forward just solves the OCP and returns the optimal inputs, backward evaluates the sensitivity of the optimal inputs w.r.t. the OCP parameters), roughly as sketched below. But I’m wondering whether this is a good solution for training on a GPU. As far as I know, acados is optimized for CPU operations, so I’m not sure how efficient it would be on a GPU. Another approach might be to run the AcadosOcpLayer on the CPU and keep transferring data between CPU and GPU, which I imagine could be slow.
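
To make the idea concrete, here is a rough sketch of what I have in mind. Note that solve_and_sens is a hypothetical helper that wraps the acados solve and sensitivity evaluation and returns numpy arrays, not an existing acados function:

```python
import torch


class AcadosOcpLayerFn(torch.autograd.Function):
    """Autograd wrapper around a CPU-based acados OCP solver (sketch)."""

    @staticmethod
    def forward(ctx, params, solver):
        # solve the OCP on CPU; u0 is the optimal first input,
        # du0_dp its Jacobian w.r.t. the parameters (both numpy arrays)
        p = params.detach().cpu().numpy()
        u0, du0_dp = solver.solve_and_sens(p)  # hypothetical helper
        ctx.save_for_backward(torch.as_tensor(du0_dp, dtype=params.dtype))
        return torch.as_tensor(u0, dtype=params.dtype).to(params.device)

    @staticmethod
    def backward(ctx, grad_u0):
        (du0_dp,) = ctx.saved_tensors
        # chain rule: dL/dp = (du0/dp)^T dL/du0
        grad_p = du0_dp.to(grad_u0.device).t() @ grad_u0
        return grad_p, None  # no gradient w.r.t. the solver object


# usage inside a model: u0 = AcadosOcpLayerFn.apply(params, solver)
```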

I don’t have an answer to this question myself, so I’d like to ask for your thoughts. Any opinions would be appreciated.

Thanks and best,
Fenglong

Hi Fenglong,

I guess your proposed approach with the acados layer running on CPU and everything else on GPU is the only way to go at the moment.

One more important point to keep in mind is initialization/warm-starting. You might see a significant speed-up if you manage to keep your solver warm-started, i.e. the problem parameters and the initial state constraint should not change too much from one problem to the next. This might not be trivial to achieve within your training routine.
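
Very roughly, what I mean is something like the following (N, the sample iterable and the ordering are placeholders; the point is not to reset the solver, so that each solve() starts from the previous iterate):

```python
# order the samples so that consecutive parameters / initial states stay close
for p, x0 in ordered_training_samples:   # hypothetical iterable
    for stage in range(N + 1):
        solver.set(stage, 'p', p)        # update parameters in place
    solver.set(0, 'lbx', x0)             # initial state constraint
    solver.set(0, 'ubx', x0)
    status = solver.solve()              # warm-started from the previous iterate
```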

Let me know how it goes, very interested in combining optimal control and learning-based approaches!

1 Like

One more interesting feature for your AcadosOcpLayer might be the recently added batch solver, see this example.

It allows parallelizing the solves via OpenMP.
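
A rough sketch of how this could look; I am writing the constructor and attribute names from memory, so please check the linked example for the exact interface:

```python
from acados_template import AcadosOcpBatchSolver

N_batch = 64
batch_solver = AcadosOcpBatchSolver(ocp, N_batch)  # assumed constructor signature

# set per-problem parameters and initial states on the individual solvers
for i, (p_i, x0_i) in enumerate(batch_data):       # hypothetical data
    for stage in range(N + 1):
        batch_solver.ocp_solvers[i].set(stage, 'p', p_i)
    batch_solver.ocp_solvers[i].set(0, 'lbx', x0_i)
    batch_solver.ocp_solvers[i].set(0, 'ubx', x0_i)

batch_solver.solve()                               # parallel solves via OpenMP
u_batch = [s.get(0, 'u') for s in batch_solver.ocp_solvers]
```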

Thinking about the computations in the SQP loop, I’m not sure most of the non-QP operations would map well to a GPU, since they mostly boil down to evaluating nonlinear functions to form the matrices that are passed to the QP solver to compute the step.

What might help is that the upcoming[1] OSQP 1.0 release will have a CUDA backend that can do all the QP computations on an NVIDIA GPU. Once we release that and acados is updated to use it, the QP part of acados can run on a GPU, but you would still have data movement between GPU and CPU in many places.

[1] Yes, I keep saying it is upcoming - but it is almost done :smiley:

2 Likes