Gradient term in Linear LS cost function

Hi everyone,

thanks for the amazing work and for providing the very accessible Python interface!

I am working on an MPC problem with a linear LS cost that includes a gradient term, i.e. 0.5*||w - w_ref||^2_W + q.T*(w - w_ref), with w = [x;u]. I need the gradient term because the reference is characterized by active inequality constraints (the linear LS cost approximates an economic cost function). However, this option does not seem to be supported yet. I also did not find a way around this problem by using slacks and the corresponding linear penalty term.
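
Written out, the stage cost I have in mind is

$$
\ell(x, u) = \tfrac{1}{2}\,\| w - w_\mathrm{ref} \|_W^2 + q^\top (w - w_\mathrm{ref}), \qquad w = \begin{bmatrix} x \\ u \end{bmatrix},
$$

with a fixed weight matrix W and a fixed gradient vector q.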

For now I can use the “external cost” module, but this deprives me of the Gauss-Newton Hessian approximation, which is a shame. Am I overlooking some other option?
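
For reference, here is roughly how I set it up via the external cost module at the moment (a minimal sketch; `ocp` is my `AcadosOcp` instance, and the placeholder `W`, `q`, `w_ref` stand in for the values coming from my steady-state problem):

```python
import casadi as ca
import numpy as np

x, u = ocp.model.x, ocp.model.u
w = ca.vertcat(x, u)          # w = [x; u]

nw = w.shape[0]
W = np.eye(nw)                # placeholder: local Hessian of the economic cost
q = np.ones(nw)               # placeholder: economic cost gradient at w_ref
w_ref = np.zeros(nw)          # placeholder: economic steady state

# 0.5*||w - w_ref||^2_W + q'*(w - w_ref) as a generic external cost
dw = w - w_ref
ocp.cost.cost_type = 'EXTERNAL'
ocp.model.cost_expr_ext_cost = 0.5 * ca.mtimes([dw.T, W, dw]) + ca.dot(q, dw)
```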

If not, I would be very grateful if the gradient option could be added to the linear LS cost module!

Best regards,
Jochem

Hi Jochem,

IMO this is a bit of an unusual LS formulation, with the additional gradient term. Is it common?

One way to use the current formulation, if your W is invertible, is to use the shifted reference w_ref_new = w_ref - inv(W)*q. Would that work for you?
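
If I am not messing up the algebra, completing the square shows the two costs only differ by a constant:

$$
\tfrac{1}{2}\|w - w_\mathrm{ref}\|_W^2 + q^\top (w - w_\mathrm{ref})
= \tfrac{1}{2}\|w - (w_\mathrm{ref} - W^{-1} q)\|_W^2 + \mathrm{const},
$$

so the gradient in w, and hence the OCP solution, is unchanged.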

If not, it would maybe be more useful/generic to add a new cost type that simply penalizes w quadratically and linearly, leaving to the user the responsibility of setting up the Hessian and gradient. Would that work for you?

Cheers,

Gianluca

Hi Gianluca,

Thanks for the fast reply!

The gradient term is common when w_ref is the result of a steady-state economic optimization problem. In that case, there are often active constraints at the optimal steady state, so the gradient of the economic cost function is not zero there. A tracking MPC scheme that incorporates this gradient will typically perform better (economically speaking) in the neighborhood of w_ref than one that does not (omitting the gradient makes the constraints only weakly active).
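
To make this concrete: w_ref solves something like

$$
\min_{w} \; f_\mathrm{eco}(w) \quad \text{s.t.} \quad g(w) \le 0,
$$

and at the solution, KKT stationarity gives $\nabla f_\mathrm{eco}(w_\mathrm{ref}) = -\nabla g_{\mathcal{A}}(w_\mathrm{ref})^\top \lambda$ with multipliers $\lambda \ge 0$ of the active constraints $\mathcal{A}$, which is in general nonzero. This gradient is exactly the q I want in the tracking cost.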

Regarding the first option, I agree it could work, but only if W is well-conditioned, which I cannot guarantee in my specific case. Here, W is computed automatically to locally approximate the economic cost function; it is not diagonal and often has a relatively large condition number.
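
As a toy illustration of the concern (made-up numbers):

```python
import numpy as np

# made-up stiff weight: curvature differs by six orders of magnitude
W = np.diag([1e4, 1e-2])
q = np.array([1.0, 1.0])

print(np.linalg.cond(W))      # 1e6
print(np.linalg.solve(W, q))  # [1e-4, 1e2]: the shift inv(W)*q blows up along the weak direction
```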

To avoid numerical problems from the start, I would therefore prefer the second option, if possible.
I thought it could be a special case of the general cost function 0.5*||y - y_ref||^2_W + q.T*(y - y_ref), but I agree it might be confusing to call it “least squares”.

Cheers,
Jochem

Hi Gianluca,

I tested the option of using w_ref_new = w_ref - inv(W)*q on several different examples and did not come close to having any numerical issues. Also, if W were so ill-conditioned that its inverse distorts the gradient, one might not want to use it as a weight anyway…
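
For what it is worth, this is essentially the sanity check I used (sketched with random data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
W = A.T @ A + 1e-3 * np.eye(n)           # random symmetric positive definite weight
q, w_ref, w = rng.standard_normal((3, n))

# gradient of 0.5*||w - w_ref||^2_W + q'*(w - w_ref) ...
g_orig = W @ (w - w_ref) + q

# ... equals the gradient of 0.5*||w - w_ref_new||^2_W with the shifted reference
w_ref_new = w_ref - np.linalg.solve(W, q)
g_shifted = W @ (w - w_ref_new)

print(np.allclose(g_orig, g_shifted))    # True
```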

Thanks for the help!

Cheers,
Jochem


@jdeschut another point: in the case of linear LS, the Gauss-Newton Hessian is not an approximation, but coincides with the exact Hessian (of the cost; the dynamics are not involved here), so you should get the exact same result through the “external cost” module. (The same would apply to the upcoming cost module implementing a quadratically and linearly penalized w.) Or am I overlooking something?
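
To spell it out: with a linear residual r(w) = w - w_ref (Jacobian J = I, zero second derivatives), the Gauss-Newton Hessian of the cost,

$$
J^\top W J = W,
$$

coincides with the exact Hessian of $\tfrac{1}{2}\|r(w)\|_W^2 + q^\top r(w)$, since the linear term contributes no curvature.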

Cheers,

Gianluca

Hi Gianluca,

You are definitely right, but I would expect that for LINEAR_LS, the “Gauss-Newton” option also omits the contribution of the constraints, whereas for the EXTERNAL option it would always be present, no?

Best,
Jochem

Yes, this only applies to the cost term.

Different flags can be used to trigger the exact/approximate Hessian contribution from each of cost, dynamics, and constraints individually, allowing any combination.
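
In the Python interface this looks roughly like the following (a sketch; see `AcadosOcpOptions` for the exact option names):

```python
# use the exact-Hessian SQP, but pick the Hessian contributions individually
ocp.solver_options.hessian_approx = 'EXACT'
ocp.solver_options.exact_hess_cost = True     # exact Hessian of the cost term
ocp.solver_options.exact_hess_dyn = False     # drop the curvature of the dynamics
ocp.solver_options.exact_hess_constr = False  # drop the curvature of the constraints
```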

Follow-up question: is this currently the only way of adding a linear term to an LS formulation? In ACADO it was possible to add a linear term to the LS formulation (see acado: OCP Class Reference).

Is there anything like it in acados? Or is the external_cost module recommended in this case?