Infeasibility issues when using hard nonlinear constraints

Hi :wave:

I’m currently trying to implement a hard nonlinear constraint that limits the position of an entity in 2D space, see the figure below.
The objective is to move from [x1,x2]=[0,0] to [4,4]. The progression is shown using black dots, from which the predictions (in grey) are drawn. The predictions nicely avoid the circle. However, when the current position is close to the circular constraint (depicted by a red dot), the problem becomes infeasible (HPIPM: QP error status 3).
I wonder why this happens. Does it have to do with the non-convexity of the feasible set, or with the way the circle is linearized?

Also, I’m able to avoid the infeasibility by making the constraint soft using slack variables. However, I have to select a very specific value for the slack penalties, as the solver is really sensitive to them: if the value is too high, I get “Infeasible solution at iteration 0”, see the figure below.

This makes me question my understanding of soft constraints, which is that violation of a soft constraint should be penalized by a sufficiently large weight, with no upper limit on how large that weight can be. Could you maybe elaborate on this?

Any kind of help is sincerely appreciated!

Using the Acados python interface, I created a minimal example to reproduce the problems mentioned:

from acados_template import AcadosOcp, AcadosOcpSolver, AcadosSimSolver, AcadosModel
import numpy as np
import scipy.linalg
from casadi import SX, vertcat
import matplotlib.pyplot as plt

def export_ode_model():
    model_name = 'minimalsystem'

    # set up states & controls
    x1      = SX.sym('x1')
    x2      = SX.sym('x2')

    F1 = SX.sym('F1')
    F2 = SX.sym('F2')
    u = vertcat(F1,F2)

    # xdot
    x1_dot      = SX.sym('x1_dot')
    x2_dot      = SX.sym('x2_dot')

    x = vertcat(x1, x1_dot, x2, x2_dot)
    xdot = SX.sym('xdot',x.size()[0],1)

    # dynamics
    f_expl = vertcat(x1_dot,
                     F1,
                     x2_dot,
                     F2)

    f_impl = xdot - f_expl

    model = AcadosModel()
    model.f_impl_expr = f_impl
    model.f_expl_expr = f_expl
    model.x = x
    model.xdot = xdot
    model.u = u
    model.name = model_name

    return model

def main():
    # create ocp object to formulate the OCP
    ocp = AcadosOcp()

    # set model
    model = export_ode_model()
    ocp.model = model

    Tf = 1.5
    nx = model.x.size()[0]
    nu = model.u.size()[0]
    ny = nx + nu
    ny_e = nx
    N_horizon = 10
    Fmax = 2
    setpoint = np.array([4,4])     
    x0 = np.array([0, 0.0, 0.0, 0.0])

    # set dimensions
    ocp.dims.N = N_horizon

    # set cost module
    ocp.cost.cost_type = 'LINEAR_LS'
    ocp.cost.cost_type_e = 'LINEAR_LS'

    Q_mat = np.diag([1e3, 1e0, 1e3, 1e0])
    R_mat = np.diag([1e-2, 1e-2])

    ocp.cost.W = scipy.linalg.block_diag(Q_mat, R_mat)
    ocp.cost.W_e = Q_mat

    ocp.cost.Vx = np.zeros((ny, nx))
    ocp.cost.Vx[:nx,:nx] = np.eye(nx)

    Vu = np.zeros((ny, nu))
    ocp.cost.Vu = Vu

    ocp.cost.Vx_e = np.eye(nx)

    ocp.cost.yref  = np.array([setpoint[0],0,setpoint[1],0,0,0]) #np.zeros((ny, ))
    ocp.cost.yref_e = np.array([setpoint[0],0,setpoint[1],0]) #np.zeros((ny_e, ))

    # linear state constraints
    ocp.constraints.constr_type = 'BGH'
    # ocp.constraints.constr_type = 'BGP'
    ocp.constraints.lbu = np.array([-Fmax,-Fmax])
    ocp.constraints.ubu = np.array([+Fmax,+Fmax])
    ocp.constraints.x0 = x0
    ocp.constraints.idxbu = np.array([0, 1])

    # non-linear (BGH) state constraint: circle
    ocp.model.con_h_expr = (model.x[0]-2)**2 + (model.x[2]-2)**2  # x1, x2
    ocp.constraints.lh = np.array([1**2])       # squared radius
    ocp.constraints.uh = np.array([10e3])       
    # slack variable configuration:
    # nsh = 1
    # ocp.constraints.lsh = np.zeros(nsh)             # Lower bounds on slacks corresponding to soft lower bounds for nonlinear constraints
    # ocp.constraints.ush = np.zeros(nsh)             # Lower bounds on slacks corresponding to soft upper bounds for nonlinear constraints
    # ocp.constraints.idxsh = np.array(range(nsh))    # Jsh
    # ns = 1
    # ocp.cost.zl = 10e5 * np.ones((ns,)) # gradient wrt lower slack at intermediate shooting nodes (1 to N-1)
    # ocp.cost.Zl = 1 * np.ones((ns,))    # diagonal of Hessian wrt lower slack at intermediate shooting nodes (1 to N-1)
    # ocp.cost.zu = 0 * np.ones((ns,))    
    # ocp.cost.Zu = 1 * np.ones((ns,))  

    # default solver params
    ocp.solver_options.hessian_approx = 'GAUSS_NEWTON'
    ocp.solver_options.integrator_type = 'ERK'
    ocp.solver_options.nlp_solver_type = 'SQP'
    ocp.solver_options.qp_solver_cond_N = N_horizon

    # set prediction horizon
    ocp.solver_options.tf = Tf

    solver_json = 'acados_ocp_' + model.name + '.json'
    acados_ocp_solver = AcadosOcpSolver(ocp, json_file = solver_json)

    # create an integrator with the same settings as used in the OCP solver.
    acados_integrator = AcadosSimSolver(ocp, json_file = solver_json)

    Nsim = 200
    simX = np.ndarray((Nsim+1, nx))
    simU = np.ndarray((Nsim, nu))

    simX[0,:] = x0
    xy_predictions = np.zeros((N_horizon,2))    

    # initialize figure
    fig0 = plt.figure()
    ax1 = fig0.add_subplot(1,1,1)
    circle = plt.Circle((2, 2), radius=1, color='blue',alpha=0.1,label='constraint')
    ax1.plot(setpoint[0],setpoint[1], 'go',label='target')
    ax1.add_patch(circle); ax1.set_xlabel('x1'); ax1.set_ylabel('x2'); ax1.grid()

    # closed loop
    for i in range(Nsim):
        # solve ocp and get next control input;
        # mark the current state and stop if the solver fails
        try:
            simU[i,:] = acados_ocp_solver.solve_for_x0(x0_bar = simX[i, :])
        except Exception:
            ax1.plot(simX[i,0],simX[i,2],'ro',label='infeasible x0')
            break
        # extract solution state info
        for j in range(N_horizon):
            xy_predictions[j,0] = acados_ocp_solver.get(j, "x")[0] #x1
            xy_predictions[j,1] = acados_ocp_solver.get(j, "x")[2] #x2
        # update figure with predictions (grey) and closed-loop progress (black dots)
        ax1.plot(xy_predictions[:,0], xy_predictions[:,1], color='grey', linewidth=0.5)
        ax1.plot(simX[i,0], simX[i,2], 'k.')
        # put a breakpoint here to plot through the for-loop

        # simulate system
        simX[i+1, :] = acados_integrator.simulate(x=simX[i, :], u=simU[i,:])

    # plot results
    plt.show()

if __name__ == '__main__':
    main()


I tried your code and I think what you reported above is reasonable. In practice, you should always slack some constraints to avoid solver failure, and the penalty coefficients of the slacks should be well tuned.

It’s true that using too large zl, zu makes the problem numerically ill-conditioned (think about the condition number of the approximated Hessian). Generally speaking, we cannot expect the solver to perform well with really large penalty coefficients.
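The exact-penalty intuition behind the linear slack weight can be checked on a toy problem (a 1-D sketch I made up for illustration, not acados code): once the linear weight z exceeds the Lagrange multiplier of the hard constraint, the soft-constrained solution coincides with the hard-constrained one; below that threshold, the constraint is violated.

```python
import numpy as np

# Toy illustration of the L1 exact-penalty property of the linear slack weight z:
# minimize (x - 2)^2  s.t.  x <= 1, softened to
# minimize (x - 2)^2 + z*s + Z*s^2  with slack s = max(0, x - 1).
def soft_min(z, Z):
    xs = np.linspace(-1.0, 3.0, 400001)        # fine grid, step 1e-5
    s = np.maximum(0.0, xs - 1.0)              # constraint violation
    cost = (xs - 2.0)**2 + z * s + Z * s**2    # soft-constrained objective
    return xs[np.argmin(cost)]

# The multiplier of x <= 1 at the hard-constrained optimum x = 1 is 2.
print(soft_min(z=1.0, Z=1.0))  # z < 2: the soft solution violates x <= 1
print(soft_min(z=3.0, Z=1.0))  # z > 2: the soft solution sits exactly at x = 1
```

So there is a finite threshold above which the soft formulation recovers the hard solution exactly; the trouble is only that pushing z or Z far beyond that threshold degrades the conditioning of the QP.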

My experience is: first use relatively small zu and zl and see if the solver gives a reasonable solution. Then check the values of the slack variables and make sure they are all zero when your problem is obviously feasible. Finally, increase zu and zl a little bit (maybe by one order of magnitude) and see what happens.

However, when the current position is close to the circular constraint (depicted by a red dot), the problem becomes infeasible (HPIPM: QP error status 3).

In my experience, when you get QP error status 3, it doesn’t always mean your problem is infeasible. In most cases, it’s just because the problem is ill-conditioned. As far as I know, there could be two cases:

  1. A Hessian that is not positive semidefinite. This usually happens when the cost is not a least-squares cost and the exact Hessian is used. In this case, maybe try to formulate a least-squares cost function and use Gauss-Newton, or add a little regularization.
  2. The Hessian has a large condition number. In this case you should tune the cost weights, including the weights of the slack penalties.
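Point 2 can be made concrete with the weights from the example above: a quadratic slack penalty Zl enters the cost Hessian block of its stage directly, so a huge Zl inflates the condition number (a small numpy sketch, not acados code):

```python
import numpy as np

# weight matrices from the example above
Q = np.diag([1e3, 1e0, 1e3, 1e0])
R = np.diag([1e-2, 1e-2])

def cost_hessian_cond(Zl):
    # A quadratic slack penalty Zl appears as an extra diagonal entry
    # in the cost Hessian block of the corresponding stage.
    H = np.diag(np.concatenate([np.diag(Q), np.diag(R), [Zl]]))
    return np.linalg.cond(H)

print(cost_hessian_cond(1.0))   # dominated by the Q/R spread: about 1e5
print(cost_hessian_cond(1e8))   # huge slack penalty: about 1e10
```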

I wonder why this happens. Does it have to do with the non-convexity of the feasible set, or with the way the circle is linearized?

I think solving this non-convex problem could be hard in any case. I would recommend this paper, since it provides an interesting way to formulate the collision-avoidance problem, possibly in a convex way.

Hope this helps! I’d also be happy if Jonathan @FreyJo could tell me whether my understanding is correct.

Thanks for taking the time to look at my question and for providing an elaborate answer!

I would like to look into the condition number of the Hessian some more. @FreyJo, do you think this could be the cause of this problem? And if so, how could I investigate the condition number in acados?

One thing that is peculiar is that when using the qpOASES solver (and increasing the maximum number of iterations to 300), I get error code 37 at roughly the same position. This error code refers to an initially infeasible QP. The fact that two different solution methods (interior-point and active-set) lead to an error at the same position makes me question the ill-conditioning explanation…


Sorry, I am quite busy these days.

That is a really nice example and I would look into it more when I have more time.
Indeed it is not trivial to guess good values for the slack penalties, but it makes a lot of sense to look at the units and your other costs.

In general, qpOASES really needs a lot more QP iterations compared to IPMs. But in your example, 300 should get you through all possible active sets.

@FenglongSong thanks a lot for replying here. I think what you write makes sense.

I think there is one more common pitfall that can lead to infeasible QPs: although the OCP NLP is feasible, the QP can be infeasible due to the linearization. However, this should be mitigated by the soft constraints; it should just make the cost value very large, which can of course cause some numerical difficulties, but I would have expected qpOASES to cope with it.
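This linearization pitfall is easy to see for the circle constraint in the example: h(p) = (p1-2)^2 + (p2-2)^2 is convex, so its linearization underestimates it, and the linearized constraint h_lin >= 1 can cut off points that are perfectly feasible for the NLP (a small numpy sketch with made-up points):

```python
import numpy as np

center = np.array([2.0, 2.0])

def h(p):
    # keep-out circle from the example: feasible iff h(p) >= 1
    return float(np.sum((np.asarray(p) - center)**2))

def h_lin(p, p0):
    # first-order Taylor expansion of h around p0, as used in the QP
    grad = 2.0 * (np.asarray(p0) - center)
    return h(p0) + grad @ (np.asarray(p) - np.asarray(p0))

p0 = np.array([0.9, 2.0])  # linearization point just outside the circle
p  = np.array([2.0, 0.9])  # also outside the circle, hence NLP-feasible
print(h(p))          # ≈ 1.21, above 1: feasible for the nonlinear constraint
print(h_lin(p, p0))  # ≈ -1.21, below 1: cut off by the linearized constraint
```

With a hard constraint, states near the circle can make the linearized QP infeasible even though the NLP is not; with slacks, the same situation just produces a large cost.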

@Boudewijn can you maybe share the example variant with only soft constraints (and control bounds) for which qpOASES fails?
I would look at it later.
Also, would it be fine to add the example you posted in the acados repository?

Regarding this, I implemented a functionality ocp_solver.dump_last_qp_to_json.
In there, you have the full linearization of the OCP.
I guess you can check the conditioning of the Hessian blocks with that (or call the functions that are used in there).
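Assuming the dumped QP can be parsed into per-stage Hessian blocks (the exact JSON layout depends on the acados version, so the loading step is left out here), checking their conditioning is straightforward with numpy; a sketch with made-up blocks:

```python
import numpy as np

def flag_ill_conditioned(hessian_blocks, tol=1e8):
    # return (stage index, condition number) for every Hessian block
    # whose condition number exceeds tol
    return [(k, np.linalg.cond(np.asarray(H)))
            for k, H in enumerate(hessian_blocks)
            if np.linalg.cond(np.asarray(H)) > tol]

# made-up example: two well-scaled blocks and one badly scaled one
blocks = [np.diag([1e3, 1e0, 1e3, 1e0]),
          np.diag([1e3, 1e0, 1e3, 1e0]),
          np.diag([1e10, 1e-2])]
print(flag_ill_conditioned(blocks))  # only the last block is flagged
```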



Hi Jonathan,

thanks a lot for the nice explanation. I’m wondering if you think it would be reasonable to add preconditioning or scaling in acados to improve numerical stability, since I found that IPOPT (here) and OSQP (part 5 of this paper) both mention scaling or preconditioning for better robustness.

As discussed here, I guess a poorly scaled problem can sometimes also lead to QP solver failure. Although the modeler should try to avoid formulations where some non-zero entries in the gradients are very small or very large, having automatic scaling might be helpful. Do you think this would be valuable to explore? I’d like to look into it a bit if so.
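For reference, the scaling OSQP applies is Ruiz equilibration; a minimal symmetric variant is easy to sketch with numpy (an illustration of the idea only, not an existing acados feature):

```python
import numpy as np

def ruiz_equilibrate(M, iters=10):
    # Symmetric Ruiz equilibration: find a diagonal D such that the rows
    # and columns of D @ M @ D all have roughly unit infinity-norm.
    d = np.ones(M.shape[0])
    S = M.astype(float).copy()
    for _ in range(iters):
        norms = np.max(np.abs(S), axis=1)
        norms[norms == 0.0] = 1.0          # leave all-zero rows untouched
        e = 1.0 / np.sqrt(norms)
        S = e[:, None] * S * e[None, :]    # rescale rows and columns
        d *= e
    return np.diag(d), S

H = np.diag([1e8, 1e0, 1e-4])              # badly scaled Hessian block
D, Hs = ruiz_equilibrate(H)
print(np.linalg.cond(H), np.linalg.cond(Hs))  # conditioning improves drastically
```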


Hi Fenglong,

Surely this could be valuable to explore.
acados is really focused on performance, thus, an additional scaling layer was not a focus (yet).
I am aware of the preconditioning in OSQP, which you can use as a QP solver in acados.

I think it could be valuable to explore, but should be optional to allow using acados without the associated overhead.
Not sure what would be a good way to implement it.

