Advanced-step iteration computational time

Hi, I have developed an MPC for a quadruped robot (see GitHub - iit-DLSLab/Quadruped-PyMPC: A model predictive controller for quadruped robots based on the single rigid body model and written entirely in python. Gradient-based (acados) or Sampling-based (jax).) using acados. I implemented the changes required to run the new advanced-step algorithm, but in my case the preparation phase is 3 times slower than standard RTI in the AS-RTI-A case.

For normal RTI, the preparation phase takes roughly 0.0005 s, which becomes 0.0015 to 0.002 s in the AS-RTI-A case (level 1).

Is this expected?


Hi Giulio,

Compared to standard RTI, AS-RTI with an additional level-A iteration just solves one extra QP in the preparation phase. Hence, the expected runtime of the AS-RTI-A preparation phase is
(preparation phase of RTI + one QP solve, i.e. roughly the feedback phase time). The feedback phase itself always takes the same time.
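For reference, enabling this variant in the acados Python interface looks roughly like the following sketch. It assumes ocp is an existing AcadosOcp object; option names are those of recent acados versions, and the values are illustrative, not a recommendation:

```python
# Sketch: enable AS-RTI-A in the acados Python interface.
# Assumes `ocp` is an already-configured AcadosOcp object.
ocp.solver_options.nlp_solver_type = 'SQP_RTI'
ocp.solver_options.as_rti_level = 0   # level A: one extra QP solve in preparation
ocp.solver_options.as_rti_iter = 1    # number of advanced-step iterations
```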

That’s what we had in our experiments, cf. last two lines of Table 1 in
If you use e.g. HPIPM as QP solver, the QP solve time should have little variation.

Can you run e.g. 100 NMPC steps and track the timings to see whether this happens consistently? What are the means, and is there large variation?
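A minimal sketch of how such statistics could be collected, assuming a solver object exposing the acados RTI interface (options_set('rti_phase', ...), solve(), and get_stats('time_tot')); the helper names are made up for illustration:

```python
import statistics

def collect_rti_timings(solver, n_steps=100):
    """Run n_steps RTI iterations, recording preparation and feedback times.

    Assumes `solver` behaves like an acados AcadosOcpSolver created with
    nlp_solver_type = 'SQP_RTI'.
    """
    prep_times, fb_times = [], []
    for _ in range(n_steps):
        solver.options_set('rti_phase', 1)  # preparation phase only
        solver.solve()
        prep_times.append(float(solver.get_stats('time_tot')))
        solver.options_set('rti_phase', 2)  # feedback phase only
        solver.solve()
        fb_times.append(float(solver.get_stats('time_tot')))
    return prep_times, fb_times

def summarize(times):
    """Return (mean, sample standard deviation) of a list of timings."""
    return statistics.mean(times), statistics.stdev(times)
```

In a real closed-loop test one would also set the new state via solver.set(0, 'lbx', x0) / solver.set(0, 'ubx', x0) between the phases.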




SQP, 1 iteration:
mean: 0.002765064516
std: 0.0005280171042

preparation phase
mean: 0.0008375652174
std: 0.00004614091665

feedback phase
mean: 0.002116121739
std: 0.0006833485035

AS-RTI-A (as_rti_level = 0, as_rti_iter = 1):
preparation phase
mean: 0.002952483871
std: 0.0008974525009

feedback phase
mean: 0.002225225806
std: 0.0008468590087

So in my case, I don't see much advantage in using AS-RTI, since the longer preparation phase will likely postpone the next feedback phase computation too much.

Plus, is it correct to compare with SQP with 1 iteration? I see in the paper you linked that the comparison is done w.r.t. SQP with 2 iterations. Can you explain the motivation behind that?


AS-RTI-A consists of 1 linearization and 2 QP solutions.
So, the computations are more than 1 SQP iteration, but less than 2.
One would expect better closed-loop performance with AS-RTI-A compared to a controller with 1 full SQP iteration, and worse compared to a controller with 2 SQP iterations.

AS-RTI-A solves 2 QPs with the same left-hand side, such that most of the condensing operations only need to be performed once.
In your particular setting, it seems that the QP solver accounts for the majority of the computational cost.

I had a brief look at your code: Quadruped-PyMPC/gradient/nominal/ at 81c2bfc465c7e8a0ab03cdc17e33063bfb481aac · iit-DLSLab/Quadruped-PyMPC · GitHub
It seems that you are not doing any condensing at all, since qp_solver_cond_N is not set. See Python Interface — acados documentation
Especially when solving a QP with the same left-hand side twice, using condensing makes a lot of sense.
Note that the results in the paper have been obtained with FULL_CONDENSING_QPOASES and would look very similar with FULL_CONDENSING_DAQP.
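As a sketch, the relevant solver options in the Python interface would look like this, assuming ocp is your AcadosOcp; the condensing horizon value here is only an example, not a tuned recommendation:

```python
# Option 1: full condensing, as used for the results in the paper.
ocp.solver_options.qp_solver = 'FULL_CONDENSING_DAQP'

# Option 2: partial condensing with an explicit condensing horizon.
# ocp.solver_options.qp_solver = 'PARTIAL_CONDENSING_HPIPM'
# ocp.solver_options.qp_solver_cond_N = 10  # condense N shooting intervals down to 10
```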

