I read your paper with great care and I’m still not entirely sure about one aspect. Without going into too much detail about implicit/explicit Runge-Kutta methods, condensing, or the discretization of the constraints that are not related to the system dynamics, I still did not get whether the simulation of the nonlinear dynamics is done in a module that itself has nothing to do with optimization.
The simulation module solves the IVP of the discretized dynamics, given x_0 and the input trajectory U = [u_0,…,u_{N-1}]. Given the computed state trajectory (and stage trajectory for IRK methods), we can linearize the nonlinear dynamics around the state and input trajectory. The resulting DT LTV system is then used in the QP subproblem (5) in the paper. We then compute U_new = U + dU, solve the IVP again, and so on until a termination criterion (of the original NLP) is met. Is this reasoning correct?
This is not really correct.
The simulation module (we call it integrator) rather solves the IVP given x_i and u_i.
There are N integrators that solve the IVP for each shooting interval.
What you wrote rather sounds like single shooting.
But I am not sure if this answers your question fully.
So it is rather “given the current primal iterate (state and control values over the horizon), we can linearize the nonlinear dynamics around this state and input trajectory.”
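Very schematically, one such linearization step in multiple shooting could look like the following toy sketch (plain Python with made-up dynamics and crude finite-difference sensitivities, not acados code; it is only meant to show the structure):

```python
import numpy as np

# Toy sketch of one multiple-shooting linearization step (NOT acados code):
# dynamics, dimensions and the sensitivity computation are made up for
# illustration only.

nx, nu, N, dt = 2, 1, 5, 0.1

def f(x, u):
    # some nonlinear continuous-time dynamics (pendulum-like, just an example)
    return np.array([x[1], -np.sin(x[0]) + u[0]])

def integrate(x, u):
    # the "integrator": here a single explicit RK4 step per shooting interval
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# current primal iterate: state and control values at all shooting nodes
X = [np.zeros(nx) for _ in range(N + 1)]
U = [np.zeros(nu) for _ in range(N)]

A, B, r = [], [], []
eps = 1e-6
for i in range(N):
    x_next = integrate(X[i], U[i])        # IVP on shooting interval i only
    Ai = np.zeros((nx, nx))
    Bi = np.zeros((nx, nu))
    # sensitivities d x_next / d(x_i, u_i), here crudely by finite differences
    for j in range(nx):
        dx = np.zeros(nx); dx[j] = eps
        Ai[:, j] = (integrate(X[i] + dx, U[i]) - x_next) / eps
    for j in range(nu):
        du = np.zeros(nu); du[j] = eps
        Bi[:, j] = (integrate(X[i], U[i] + du) - x_next) / eps
    A.append(Ai)
    B.append(Bi)
    r.append(x_next - X[i + 1])           # continuity residual at the iterate

# A[i], B[i], r[i] define the DT LTV dynamics of the QP subproblem:
#   dx_{i+1} = A[i] dx_i + B[i] du_i + r[i]
```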
This kind of answers my question. I agree, my question sounded more like I was talking about single shooting rather than multiple shooting.
I’ve never seen or heard that the integration is not done inside the NLP solver. What I have done so far is setting up a nonlinear OCP (/NLP) like (3), including the continuity constraints (3b) (i.e. enforcing a dynamically feasible state sequence), and then using an NLP solver (based on IP or SQP algorithms) to solve it.
This can lead to an immense blow-up in the problem size when using IRK methods with s stages, because you have to introduce N * n_x * s optimization variables that represent the stages, plus additional equality constraints that represent the implicit stage equations (a zero-finding problem). Furthermore, condensing (of the states) does not really make sense, because you cannot eliminate the stage variables and the corresponding equality constraints.
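Just to put rough numbers on it (my own illustrative example): with, say, N = 20 shooting intervals, n_x = 10 states and s = 3 stages, this already means 20 * 10 * 3 = 600 additional optimization variables plus the same number of additional equality constraints for the implicit stage equations.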
If an SQP algorithm is used, linearizing the constraints (of the NLP described above) to obtain a QP is done in the background. In contrast, you linearize the continuity constraints by hand – as far as I understood – like this:
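Roughly, in my own shorthand (the details are in the screenshot): x_{i+1} ≈ phi(x_i*, u_i*) + A_i (x_i − x_i*) + B_i (u_i − u_i*), where phi is the integrator map over one shooting interval, (x_i*, u_i*) is the current iterate, and A_i = ∂phi/∂x, B_i = ∂phi/∂u are evaluated at (x_i*, u_i*).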
This would allow eliminating the stage variables (in the QP) and their corresponding equality constraints.
Do you have some references that discuss and investigate this decoupling in detail? I would be really interested in a comparison of these two approaches.
The integration is done in the NLP solver; the integrators are part of the acados NLP solver (in case continuous dynamics are given).
As far as I know, this is the difference between the two approaches (sketched below):
- “direct multiple shooting”: the state and control variables (at the shooting nodes) are optimization variables
- “direct collocation”: the state and control variables (at the shooting nodes) AND all integration variables (typically called k for Runge-Kutta methods) are optimization variables.
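Schematically (my own notation, just to make the difference in decision variables explicit):

direct multiple shooting: w = (x_0, u_0, x_1, u_1, …, x_N)
direct collocation: w = (x_0, u_0, K_0, x_1, u_1, K_1, …, x_N), with K_i ∈ R^{n_x * s}

In direct collocation the stage equations G(K_i, x_i, u_i) = 0 enter the NLP as additional equality constraints, whereas in multiple shooting they are solved inside the integrator.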
The PhD thesis of Rien Quirynen is a very good reference for this and describes those concepts already in the abstract.
It also introduces the concept of lifted integrators, which are shown to be equivalent to direct collocation.
It would be nice if you added the source when you refer to a book/paper.
Thanks for your answer. I also looked into
R. Quirynen, S. Gros, B. Houska, and M. Diehl, ‘Lifted collocation integrators for direct optimal control in ACADO toolkit’, Math. Prog. Comp., vol. 9, no. 4, pp. 527–571, Dec. 2017, doi: 10.1007/s12532-017-0119-0.
which helped a lot as well.
The screenshot I posted shows my notes, where I tried to make sense of how to use an IRK/collocation method without introducing the stages k/K (or, as I called them, Psi) as optimization variables.
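In other words (just my summary of the idea, using the notation from above): the implicit stage equations G(K_i, x_i, u_i) = 0 are solved inside the integrator, e.g. with Newton's method, and the sensitivities needed for the QP follow from the implicit function theorem, ∂K_i/∂(x_i, u_i) = −(∂G/∂K_i)^{-1} ∂G/∂(x_i, u_i), so that only the x_i and u_i remain as optimization variables.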