RTI phases in Python interface

Hi, I’m using the Python interface of acados to do NMPC.
I’m wondering whether it is possible to use SQP_RTI directly from Python to control my robot.
Right now I’m just simulating my system, following this example:

# preparation rti_phase
acados_solver.options_set('rti_phase', 1)
status = acados_solver.solve()

# update initial condition
acados_solver.set(0, "lbx", x0)
acados_solver.set(0, "ubx", x0)

# feedback rti_phase
acados_solver.options_set('rti_phase', 2)
status = acados_solver.solve()

from https://github.com/acados/acados/blob/master/examples/acados_python/rsm_example/generate_c_code.py

I assume that the first phase refers to the preparation phase of RTI (carried out before the current state is available), and the second is the feedback phase, in which the solution is adapted to the current state. Is that correct?

In my simulations I find that the second phase is the more time consuming of the two. I would have expected the opposite; is that something you would deem normal?


Exactly. The rti_phase option takes the following three values:
(1) preparation, (2) feedback, (0) both phases in one call.
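For reference, a minimal sketch of switching between the three settings from Python. The `FakeSolver` class below is only a stand-in so the snippet is self-contained; the two methods it exposes mirror the `options_set` / `solve` calls used with a real `AcadosOcpSolver`.

```python
class FakeSolver:
    """Stand-in for a real AcadosOcpSolver, just to make this sketch
    self-contained; the two methods below mirror the real API calls."""
    def __init__(self):
        self.rti_phase = 0
    def options_set(self, name, value):
        # only the option used in this thread is modeled here
        assert name == 'rti_phase' and value in (0, 1, 2)
        self.rti_phase = value
    def solve(self):
        return 0  # status 0 means success

solver = FakeSolver()  # with acados: solver = AcadosOcpSolver(ocp)

solver.options_set('rti_phase', 1)   # (1) preparation only
status = solver.solve()

solver.options_set('rti_phase', 2)   # (2) feedback only
status = solver.solve()

solver.options_set('rti_phase', 0)   # (0) both phases in one solve() call
status = solver.solve()
```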

I am not sure how long each phase should take, or whether everything that can be moved to the preparation phase currently is.
I also don’t know what the typical ratio between preparation and feedback time is. If you can point to the paper your intuition comes from, that would be interesting.


Thanks Jonathan,


This is the first article that comes to my mind.

I don’t think they explicitly say that the preparation phase should be more time consuming than the feedback phase. The feedback phase consists of solving a QP and will therefore always take some time, while the preparation phase would not even be necessary for a linear MPC.

I guess what suggested that idea to me was Figure 3, but that’s just a visualization, so I shouldn’t read too much into it.
I wanted to be sure that I was setting up the solver properly, because I would have expected the feedback phase to take less time, being “only” a QP. But the more I think about it, the more I accept that this is just how it is.

Thank you again,


Hi Tom and Jonathan,

I would like to implement the RTI scheme on my robot, separating the preparation and feedback phases. I have implemented a C++ interface to call the generated acados C solver.

In my case, phase 1 generally takes more time than phase 2.

I was wondering if you managed to implement this on your robot in Python.

Moreover, do you use multithreading to run phase 1 while sending controls to the robot and waiting for the state feedback needed for phase 2? Can you please let me know the correct way to structure this?


Hi Jay,

This is good, I guess.
The idea is to carry out as much as possible in the preparation phase so that the feedback phase is very fast.

By “waiting for feedback for phase 2”, I guess you mean waiting for the new state estimate in order to carry out the feedback phase.
I think Figure 1.7 in this PhD thesis gives a good picture of how the preparation and feedback phases should be interleaved.
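In case it helps, here is a sketch of the sequential loop structure that figure suggests. No extra thread is strictly needed: preparation runs after the control has been sent, while the controller is in any case waiting for the next state estimate. The `prepare` and `feedback` functions below are placeholders; with acados they would correspond to `options_set('rti_phase', 1)` + `solve()` and `options_set('rti_phase', 2)` + `solve()` respectively.

```python
from queue import Queue

# Placeholder stand-ins (hypothetical) for the two solver calls:
#   preparation: solver.options_set('rti_phase', 1); solver.solve()
#   feedback:    solver.options_set('rti_phase', 2); solver.solve()
def prepare(iterate):
    # expensive part: linearization/condensing, done before x0 is known
    return iterate + 1

def feedback(prepared, x0):
    # cheap part: solve the prepared QP for the new state estimate x0
    return 10 * prepared + x0

def rti_loop(state_estimates, n_steps):
    """Sequential RTI loop: prepare -> wait for x0 -> feedback -> apply u0."""
    controls = []
    iterate = 0
    for _ in range(n_steps):
        prepared = prepare(iterate)   # phase 1, before x0 arrives
        x0 = state_estimates.get()    # block until the estimator delivers x0
        u0 = feedback(prepared, x0)   # phase 2, fast
        controls.append(u0)           # here: send u0 to the robot
        iterate = prepared
    return controls
```

For example, with state estimates 3 and 4 arriving through the queue, `rti_loop` runs two full cycles and returns one control per cycle.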


Hi Jonathan,

yes exactly.

Thanks for the reference! Regarding the discussion on page 46: is the preparation phase parallelized in acados for multi-core platforms?


I guess you refer to this sentence?

The typically more CPU intensive preparation phase is performed with a predicted state, before the current state estimate is even available. This part of the algorithm is naturally parallelizable, because each linearization for i = 0, …, N−1 in steps 1-2 can be done independently and therefore in parallel.

acados has OpenMP-based parallelization for this.
I think it is only supported by the Make build system for now though.
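One detail worth knowing: with OpenMP builds, the number of threads is usually controlled through the standard `OMP_NUM_THREADS` environment variable. This is generic OpenMP runtime behavior, not anything acados-specific, and it has to be set before the solver library is loaded:

```python
import os

# Standard OpenMP convention: the runtime reads OMP_NUM_THREADS when the
# library is initialized, so set it before importing/creating the solver.
os.environ['OMP_NUM_THREADS'] = '4'
```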


Yes, indeed.

I’m not an expert in this and would like to enable parallelization of the preparation phase in acados.

Can you please tell me where exactly the OpenMP-based parallelization is implemented?

It would be great if you can point me to an example or guidelines to do parallelization for the preparation phase in acados.


For example, here for the linearization.

It is just a matter of compiling acados with OpenMP, which is only supported by the Make build system for now.
To use it, you will have to modify these lines:



Just a quick update: I have included the OpenMP-based parallelization in the CMake build system as well.
