I am using the acados Python interface together with Gaussian Processes (GPs) to implement a GP-MPC. In my implementation, a prior dynamics model is available, and the GPs account for the discrepancies between the actual dynamics and the prior model. The prior model and the GP part are CasADi expressions and are used as the acados model (code for setting up the [model] [optimization problem] [solver]).
Currently, I am testing the implementation with a small batch of 60 data points on a cart-pole stabilization task. I am having an issue with setting up the solver (calling AcadosOcpSolver(ocp)): it takes much more time (~35 s) than running the simulation itself. Most of that time is spent while these terminal print-outs are shown:
Do I understand correctly that you mean the time it takes to compile the solver?
If so, it is expected that this takes longer if you use a more complicated model, such as one involving a Gaussian process, since the corresponding CasADi expression becomes more complex.
However, you can compile the solver only once and then recreate it without recompiling by using the keyword arguments build and generate; see e.g. examples/acados_python/chain_mass/solution_sensitivity_example.py.
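For reference, a minimal sketch of that pattern could look like the following (build_gp_ocp() is a hypothetical helper returning your AcadosOcp; the pattern assumes the OCP dimensions and CasADi expressions are unchanged between constructions):

```python
from acados_template import AcadosOcpSolver

ocp = build_gp_ocp()           # hypothetical helper returning your AcadosOcp
json_file = "acados_ocp.json"

# First construction: generate the C code and compile the solver (slow, done once).
solver = AcadosOcpSolver(ocp, json_file=json_file, generate=True, build=True)

# Later constructions: reuse the generated code and compiled library (fast).
# Only valid if dimensions and CasADi expressions did not change; parameter
# values, references, bounds, etc. can still be updated at run time.
solver = AcadosOcpSolver(ocp, json_file=json_file, generate=False, build=False)
```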
In my case, I am using the MPC in a Gymnasium (OpenAI Gym)-like environment, so I need to reset the MPC controller after every roll-out and after Gaussian process training; a long compilation time is therefore prohibitive. Controlling the solver building and C-code generation does help! I also managed to reduce the compilation time by using a Gaussian process approximation. Thank you for your suggestion!
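In case it is useful for others, here is a rough sketch of how this can fit a Gym-style reset. The names GPMPCController and build_gp_ocp() are made up for illustration, and this only works if GP retraining enters the model through parameter values rather than changing the structure of the CasADi expression:

```python
from acados_template import AcadosOcpSolver

class GPMPCController:
    """Illustrative wrapper; not the actual implementation from the post."""

    def __init__(self, ocp, json_file="acados_ocp.json"):
        self.ocp = ocp
        self.json_file = json_file
        # Compile the solver once when the controller is first created.
        self.solver = AcadosOcpSolver(ocp, json_file=json_file,
                                      generate=True, build=True)

    def reset(self):
        # Recreate the solver after a roll-out / GP update without regenerating
        # or recompiling the C code (assumes the model structure is unchanged).
        self.solver = AcadosOcpSolver(self.ocp, json_file=self.json_file,
                                      generate=False, build=False)
```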
I also noticed that during the long compilation, only 2 CPU cores are at maximum load while the others are mostly idle. I would guess there is still some potential to accelerate the compilation with parallel builds. Do you think it is reasonable to request such a feature?