AS-RTI with an additional Level-A iteration just solves, compared to plain RTI, one extra QP in the preparation phase. Hence, the expected runtime of the AS-RTI-A preparation phase should be
(preparation phase of RTI + one QP solve (= feedback phase time)). The feedback phase always takes the same time.
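The expected timing split can be written out as a tiny sketch. The numbers below are purely illustrative placeholders, not measurements from any benchmark:

```python
# Hypothetical timings in milliseconds -- illustrative numbers only.
t_prep_rti = 2.0   # standard RTI preparation phase
t_qp = 0.5         # one QP solve (== feedback phase time)

# AS-RTI-A adds one extra QP solve to the preparation phase:
t_prep_asrti_a = t_prep_rti + t_qp   # expected AS-RTI-A preparation time
t_feedback = t_qp                    # feedback phase is unchanged

print(t_prep_asrti_a, t_feedback)  # 2.5 0.5
```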

That's what we observed in our experiments, cf. the last two lines of Table 1 in https://arxiv.org/pdf/2403.07101.pdf
If you use e.g. HPIPM as the QP solver, the QP solve time should show little variation.

Could you run e.g. 100 NMPC steps and track the timings to see whether this is the case? What is the mean, and is there a large variation?
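A generic way to collect such statistics is sketched below. `solve_nmpc_step` is a hypothetical placeholder for your actual solver call (e.g. the acados preparation + feedback calls); only the timing bookkeeping is the point here:

```python
import statistics
import time

def solve_nmpc_step():
    """Placeholder for one NMPC step; swap in your real solver call."""
    time.sleep(0.001)  # stand-in for the actual computation

step_times = []
for _ in range(100):  # 100 NMPC steps, as suggested above
    t0 = time.perf_counter()
    solve_nmpc_step()
    step_times.append(time.perf_counter() - t0)

print(f"mean:  {statistics.mean(step_times):.6f} s")
print(f"stdev: {statistics.stdev(step_times):.6f} s")
print(f"max:   {max(step_times):.6f} s")
```

A large spread between mean and max would point at QP-solver variation (e.g. a varying number of interior-point iterations) rather than at the linearization.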

So in my case, I don't see too many advantages in using AS-RTI, since the preparation phase will likely postpone the next feedback phase computation too much.

Also, is it correct to compare with SQP with 1 iteration? I see in the paper you linked that the comparison is done w.r.t. SQP with 2 iterations. Can you explain the motivation behind this?

AS-RTI-A consists of 1 linearization and 2 QP solves.
So the computational effort is more than 1 SQP iteration, but less than 2.
One would expect better closed-loop performance from AS-RTI-A than from a controller with 1 full SQP iteration, and worse than from a controller with 2 SQP iterations.

AS-RTI-A solves 2 QPs with the same left-hand side, so that most of the condensing operations only need to be performed once.
In your particular setting, it seems that the QP solver accounts for the majority of the computational cost.
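The "same left-hand side" reuse can be illustrated on a plain linear system (which is what an equality-constrained QP reduces to). The toy NumPy sketch below is not the acados condensing code, just the underlying idea: stacking the right-hand sides lets LAPACK factorize the matrix once and reuse the factorization for both solves:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Same KKT-like left-hand side for both QPs (made symmetric positive definite)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)

b1 = rng.standard_normal(n)  # right-hand side of the first QP
b2 = rng.standard_normal(n)  # right-hand side of the second QP

# One factorization, two solves:
X = np.linalg.solve(A, np.column_stack([b1, b2]))

# Same result as two independent solves:
assert np.allclose(X[:, 0], np.linalg.solve(A, b1))
assert np.allclose(X[:, 1], np.linalg.solve(A, b2))
```

The savings shrink when, as in your setting, the dominant cost is the QP solver's own iterations rather than the condensing/factorization work.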