Parallel execution in Python Interface for Monte Carlo Simulations

Hi :wave:

I’m using the Python interface in WSL (Ubuntu). I have a running script, and I want to run Monte Carlo simulations with it by changing a parameter for each simulation.
It works without multiprocessing; the simplified structure is shown in the code snippet below.

I guess this has to do with how the solver is created from the Python interface, and my use of load_iterate doesn’t help for parallel processes.
Instead of loading the iterate, I tried creating the solver inside the SingleSimulation function with AcadosOcpSolver(…). However, all parallel processes then try to generate code in the same folder, so it crashes in the build stage.
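One generic way around this kind of build collision (sketched below with a stand-in "build" step, not actual acados calls) is to give every worker process its own code-generation directory before constructing its solver. With acados you would point the solver's export directory and json_file at that per-process folder; treat those exact option names as something to verify against your acados version.

```python
import multiprocessing as mp
import os
import tempfile

def build_in_own_dir(parameter):
    # Each process gets a unique scratch directory (mkdtemp guarantees a
    # fresh name), so parallel code generation cannot collide.
    workdir = tempfile.mkdtemp(prefix=f"codegen_{os.getpid()}_")
    # Stand-in for code generation + compilation: write a file into the
    # private directory instead of calling the real acados build.
    path = os.path.join(workdir, "generated.c")
    with open(path, "w") as f:
        f.write(f"// generated for parameter {parameter}\n")
    return workdir

if __name__ == "__main__":
    with mp.Pool(processes=3) as pool:
        dirs = pool.map(build_in_own_dir, [1, 2, 3])
    # All three "builds" landed in distinct directories -> no collision.
    assert len(set(dirs)) == 3
```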

Is this not supported, or is there a way to make it work in the Python interface?

import multiprocessing as mp
import pickle

def SingleSimulation(solver, integrator, parameter):
    solver_obj = solver
    solver_obj.load_iterate("original_iterate.json")
    ...
    for k in range(N + 1):
        solver_obj.set(k, "p", parameter)
    ...
    solver_obj.solve_for_x0(x_current)
    integrator.simulate(x_current, u_current, p=parameter)
    ...
    return x_sim, u_sim, time_sim

ocp_solver = AcadosOcpSolver(ocp, json_file='acados_ocp.json', simulink_opts=simulink_opts)
ocp_solver.store_iterate("original_iterate.json", True)
acados_integrator = AcadosSimSolver(ocp, json_file='acados_sim.json')

multiprocess_f = lambda parameter: SingleSimulation(ocp_solver, acados_integrator, parameter)
different_parameters = [1, 2, 3, 4, 5, 6]

def multiprocess_f_wrapper(parameter):
    result = {}
    x_sim, u_sim, time_sim = multiprocess_f(parameter)
    result['x'] = x_sim
    result['u'] = u_sim
    result['time'] = time_sim
    result['parameter'] = parameter
    with open('result' + str(parameter) + '.pickle', 'wb') as f:
        pickle.dump(result, f)

# This hangs and seems to never finish:
pool = mp.Pool(processes=6)
pool.map(multiprocess_f_wrapper, different_parameters)
pool.close() 
pool.join() 

# This works:
for param in different_parameters:
    multiprocess_f_wrapper(param)

As I started looking into parallelizing directly in C using the generated C code, I found and came to understand the 'build' and 'generate' arguments of AcadosOcpSolver(…).

So, for anyone finding this thread, here are my analysis and the solution I used, based on the code above.
Change:

solver_obj = solver
solver_obj.load_iterate("original_iterate.json")

to

solver_obj = AcadosOcpSolver(ocp, json_file = 'acados_ocp.json', generate=False, build=False)

The code files are generated and compiled in this call, which runs once before the parallel execution starts:

ocp_solver = AcadosOcpSolver(ocp, json_file='acados_ocp.json', simulink_opts=simulink_opts)

So, in the SingleSimulation function, a new ocp_solver object is created with AcadosOcpSolver(…), but since the generate and build flags are False, no new code is generated or compiled.
Using load_iterate only changes the allocated object’s memory, which in my first post had the problem that all parallel processes shared the same object.