Really strange behavior for a seemingly simple problem

Hi :wave:

I'm working on what should be a pretty simple problem in Matlab.
I want to generate 2D (x, y) acceleration commands to reach a given position.
I have constraints on the magnitude of the velocity and of the acceleration.
That's it. Like I said, it should be pretty simple…
The solver fails after reaching the maximum number of iterations.
However, when I set 'levenberg_marquardt' to an extremely high value (100000), a solution is found (after a whopping 10,000+ iterations!).
I’m looking for any advice as to where I went wrong.
Thanks in advance!

The model:



function model = test_model()

import casadi.*

%% system dimensions
nx = 4;
nu = 2;

%% named symbolic variables
pos = SX.sym('pos',2);
vel = SX.sym('vel',2);
acc = SX.sym('acc',2);

%% (unnamed) symbolic variables
sym_x = vertcat(pos,vel);

sym_xdot = SX.sym('xdot', nx, 1);
sym_u = acc;

%% dynamics
expr_f_expl = vertcat(vel, acc);
expr_f_impl = expr_f_expl - sym_xdot;

%% constraints
expr_h = [vel(1)*vel(1)+vel(2)*vel(2) , acc(1)*acc(1)+acc(2)*acc(2)]'; % Magnitude of vel and acc squared

%% cost
W_x = diag([100, 100,0,0]); % Weight is only on position, no weight on velocity
W_u = diag([1, 1]);

P_des = [100, -50]'; % Desired Position
V_des = [0, 0]'; % Desired velocity, but no weight is applied to it

X_des = vertcat(P_des, V_des);

expr_ext_cost_e = (X_des - sym_x)'* W_x * (X_des - sym_x);
expr_ext_cost = expr_ext_cost_e + 0.5 * sym_u' * W_u * sym_u;

W = blkdiag(W_x, W_u);

model = acados_ocp_model();

%% populate structure
model.model_struct.name = 'Test';
model.model_struct.nx = nx;
model.model_struct.nu = nu;
model.model_struct.sym_x = sym_x;
model.model_struct.sym_xdot = sym_xdot;
model.model_struct.sym_u = sym_u;
model.model_struct.expr_f_expl = expr_f_expl;
model.model_struct.expr_f_impl = expr_f_impl;
model.model_struct.constr_expr_h = expr_h;
model.model_struct.constr_expr_h_0 = expr_h;
% model.model_struct.constr_expr_h_e = vel(1)*vel(1)+vel(2)*vel(2);
model.model_struct.cost_expr_ext_cost_0 = expr_ext_cost;

model.model_struct.constr_type = 'auto';

model.model_struct.cost_expr_ext_cost = expr_ext_cost;
model.model_struct.cost_expr_ext_cost_e = expr_ext_cost_e;
model.model_struct.cost_W= W;
model.model_struct.cost_W_e= W_x;
end

The solver:

clc;
clear;
close all;

check_acados_requirements()

%% discretization
N = 50;
T = 5;

% Initial conditions
Xi_0 = [0,0]; % Initial position
Vi_0 = [-4,6];  % Initial velocity
x0 = horzcat(Xi_0, Vi_0);
u0 = [0,0]; % Initial acc command

nlp_solver = 'sqp'; % sqp, sqp_rti

% integrator type
sim_method = 'erk'; % erk = explicit Runge Kutta, irk = implicit Runge Kutta, irk_gnsf

%% model dynamics
ocp_model = test_model;
nx = ocp_model.model_struct.nx;
nu = ocp_model.model_struct.nu;

max_velocity_squared_xy = 100;
max_acc_squared_xy = 9;

ocp_model.model_struct.constr_lh = [-1,-1]'; % Not 0 because then 0 initial conditions are problematic
ocp_model.model_struct.constr_uh = [max_velocity_squared_xy,max_acc_squared_xy]';

ocp_model.model_struct.constr_lh_0 = [-1,-1]'; % Not 0 because then 0 initial conditions are problematic
ocp_model.model_struct.constr_uh_0 = [max_velocity_squared_xy,max_acc_squared_xy]';

ocp_model.set('constr_x0', x0);

% dynamics
ocp_model.set('dyn_type', 'explicit');
ocp_model.set('dyn_expr_f', ocp_model.model_struct.expr_f_expl);

ocp_model.model_struct.T = T;

%% acados ocp set opts
ocp_opts = acados_ocp_opts();
ocp_opts.set('sim_method', sim_method);
ocp_opts.set('param_scheme_N', N);
ocp_opts.set('nlp_solver', nlp_solver);
ocp_opts.set('nlp_solver_max_iter', 30000);
ocp_opts.set('globalization', 'merit_backtracking')

ocp_opts.set('levenberg_marquardt', 100000);

ocp = acados_ocp(ocp_model, ocp_opts);
x_traj_init = repmat(x0', 1,N+1);
u_traj_init = repmat(u0', 1,N);

%% call ocp solver

% set trajectory initialization
ocp.set('init_x', x_traj_init);
ocp.set('init_u', u_traj_init);

Xi(:,1) = Xi_0;
x(:,1) = x0';

ocp.solve();
ocp.print('stat')

switch ocp.get('status')
    case 0
        ocp.print('stat')
    case 1
        error('Failed: Detected NAN');
    case 2
        error('Failed: Maximum number of iterations reached');
    case 3
        error('Failed: Minimum step size in QP solver reached');
    case 4
        error('Failed: QP solver failed');
end

% get solution
u_result = ocp.get('u');
x_result = ocp.get('x');

acc_norm= zeros(1,size(u_result,2));

position(:,1) = Xi_0;
Vi(:,1) = Vi_0;
Vi_norm(:,1) = norm(Vi_0);

for j = 1:1:size(u_result,2)

    acc_norm(j) = norm(u_result(:,j));
    position(1:2,j) = x_result(1:2,j);
    Vi(1:2,j) = x_result(3:4,j);
    Vi_norm(j) = norm(Vi(:,j));

end

%% Plots
figure;
plot(position(1,:),position(2,:));
title('Position XY');
legend('Interceptor Position');
grid on;
xlabel('X [m]');
ylabel('Y [m]');

figure;
dt = T/N;
time = dt:dt:T;
plot(time, acc_norm);
title('Acceleration XY Magnitude');
ylabel('Acceleration XY [m/sec^2]')
xlabel('Time [sec]')
grid on;

figure;
plot(time, Vi_norm);
title('Velocity XY Magnitude');
ylabel('Velocity XY [m/sec]')
xlabel('Time [sec]')
grid on;

Hi,

I reimplemented your problem in Python and checked the convergence.

For me, the problem in your formulation was the lower bound on the squared velocity and acceleration constraints. With that bound, the QP solvers struggled, since the resulting QPs seem to be infeasible. If you relax the lower bound from
ocp_model.model_struct.constr_lh = [-1,-1] to something like [-1e8, -1e8], then it solves in 20 iterations. If that does not work for you, I can also share my Python code.
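Concretely, the change in your Matlab solver script would look roughly like this (only the lower bounds change; the upper bounds are repeated here just for context):

% Relax the lower bounds on h so the squared norms are effectively
% only bounded from above:
ocp_model.model_struct.constr_lh = [-1e8, -1e8]';
ocp_model.model_struct.constr_lh_0 = [-1e8, -1e8]';
ocp_model.model_struct.constr_uh = [max_velocity_squared_xy, max_acc_squared_xy]';
ocp_model.model_struct.constr_uh_0 = [max_velocity_squared_xy, max_acc_squared_xy]';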

Cheers,
David


Hi @arisc and David,

I looked into this issue and aligned the settings and formulation so that Matlab and Python give the same result.
It now converges within 30 SQP iterations, using the original lower bounds of -1 on h.

I set up this repository:

There, I first added both of your files and then committed the changes on top, to keep track of what was modified.

Best,
Jonathan


Hi David and Jonathan,

Thanks for taking a look at this.
Jonathan, I ran the Matlab code with your suggestion (after removing the 0.5 scaling on the u term), and it indeed converges in 30 iterations.
However, it still seems very sensitive to the initial conditions: if they are changed even slightly, the solver fails to converge. This leads me to believe that there is a deeper problem here that needs to be investigated.

If you’d like to see what I mean, try changing any of these initial conditions:

  • Xi_0 = [0,0] → Xi_0 = [-10,0]

  • Vi_0 = [-4,6] → Vi_0 = [-3,6]

  • u0 = [0,0] → u0 = [-1,1]

  • The solver also fails if T = 5 → T = 10

Thanks!

Hi,

I looked a bit into the modified settings:

  • Xi_0 = [-10,0]: this works with solver_options.globalization = 'MERIT_BACKTRACKING',
    but fails with the other globalization options.
    With FUNNEL_L1PEN_LINESEARCH, I get "Linesearch: Step size gets too small. Should enter penalty phase."
    So we could use this test problem when working on that @david0oo.

  • Vi_0 = [-3, 6] works after setting qp_solver_iter_max = 100, otherwise HPIPM runs into maximum iterations.
    I think we should increase the default.

  • u0 = [-1,1]: This works fine, only requires 25 SQP iterations.

  • T = 10 works with FULL_CONDENSING_DAQP, but fails with PARTIAL_CONDENSING_HPIPM after the QP solver returns max_iter in the first QP.

I pushed all of these settings modifications to the Python code in the repository.
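For the Matlab interface, the corresponding option changes would look roughly like this (a sketch, not tested in exactly this form; in particular, check that your acados version exposes DAQP under this name in the Matlab interface):

ocp_opts.set('globalization', 'merit_backtracking'); % needed e.g. for Xi_0 = [-10,0]
ocp_opts.set('qp_solver_iter_max', 100);             % helps for Vi_0 = [-3,6]
ocp_opts.set('qp_solver', 'full_condensing_daqp');   % helps for T = 10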

I think this is a nice test problem for globalization/regularization and QP solvers.

Best,
Jonathan

Hi,

Thanks for pointing out the different initializations to us. This helped us find a bug in our options: I found that PARTIAL_CONDENSING_HPIPM gives errors with the globalization FUNNEL_L1PEN_LINESEARCH.
If you use FULL_CONDENSING_HPIPM or FULL_CONDENSING_DAQP, everything should work well. As Jonathan pointed out, setting a higher maximum number of QP solver iterations often helps.

I hope this helps.

David

Hi @arisc

I did some more investigation on this.
Since the problem you actually want to solve is convex, we should not need any regularization.
The issue with using two-sided nonlinear constraints h is that the multipliers of the lower bounds on h can be positive at intermediate SQP iterates.
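To sketch why, consider a single two-sided constraint $l \le h(x) \le u$ with multipliers $\lambda_l, \lambda_u \ge 0$. The exact Lagrangian Hessian contains the term

$$\nabla_x^2 \mathcal{L} = \nabla^2 f(x) + (\lambda_u - \lambda_l)\,\nabla^2 h(x).$$

For $h = v_x^2 + v_y^2$ we have $\nabla^2 h = 2I$, so a positive lower-bound multiplier subtracts curvature; if it dominates, the Hessian and hence the QP becomes indefinite, which is what the large Levenberg-Marquardt term was effectively compensating for.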
There are two solutions:

  1. In Python, you can formulate convex-over-nonlinear constraints, for which the lower-bound multiplier is neglected in the Hessian computation. The handling of lower bounds for this formulation is, however, not very clean in acados right now, and we want to clean this up.
    In this commit, you can find an acados Python implementation that solves your problem with modifications 0, 1, 2, 3.
    Support for such constraints in Matlab will be added soon as well.

  2. General support for one-sided constraints would also mitigate this issue in your case, since the constraints in your problem are convex-over-linear (see the rough sketch below).
    This is also on our roadmap.
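Until one of these lands, a rough stand-in with the current Matlab interface is David's suggestion from above: make the lower bounds so loose that they can never become active, so their multipliers stay at zero, and then the heavy regularization should no longer be needed (a sketch, not tested in exactly this form):

ocp_model.model_struct.constr_lh = [-1e8, -1e8]';   % effectively one-sided constraints
ocp_model.model_struct.constr_lh_0 = [-1e8, -1e8]';
ocp_opts.set('levenberg_marquardt', 0.0);           % the underlying problem is convex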

Best,
Jonathan