Non-Negativity Constraints
What are my options for constraining some of the coefficients to be non-negative?
When using linear models from the scikit-learn library, we can enforce positivity via the positive=True parameter. However, this forces all coefficients to be non-negative.
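For example, here is a minimal sketch with Lasso on synthetic data (the data and the alpha value are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
# The true effect of the second feature is negative.
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

model = Lasso(alpha=0.01, positive=True)
model.fit(X, y)
print(model.coef_)  # every coefficient is >= 0, even the truly negative one
```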
Recently, I discovered a trick to relax this constraint for certain coefficients. Before getting to the trick, let's explore the alternatives.
We can call the lsq_linear function from scipy, which allows us to define arbitrary lower and upper bounds, so it is very flexible in that sense.
However, we lose the regularization component; the objective function and the constraints take the following form:
minimize 0.5 * ||A x - b||**2
subject to lb <= x <= ub
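Here is a minimal sketch of how this looks in practice, reusing the synthetic data from above and leaving the second coefficient unconstrained (the particular bounds are an assumption for illustration):

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
b = 2.0 * A[:, 0] - 1.0 * A[:, 1] + rng.normal(scale=0.1, size=100)

# Coefficients 0 and 2 must be non-negative; coefficient 1 is free.
lb = np.array([0.0, -np.inf, 0.0])
ub = np.array([np.inf, np.inf, np.inf])

res = lsq_linear(A, b, bounds=(lb, ub))
print(res.x)  # coefficient 1 can now come out negative
```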
Of course, we can go even further and use the minimize function to optimize an arbitrary objective. This allows us to add regularization terms, and since the function also accepts bounds, we can achieve our initial goal.
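As a sketch, here is a bounded least-squares objective with an L2 penalty (the alpha value and the choice of bounds are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
b = 2.0 * A[:, 0] - 1.0 * A[:, 1] + rng.normal(scale=0.1, size=100)
alpha = 0.1  # assumed regularization strength

def objective(x):
    residual = A @ x - b
    # Least-squares term plus an L2 (ridge) penalty.
    return 0.5 * residual @ residual + alpha * x @ x

# Non-negativity on coefficients 0 and 2; coefficient 1 left free.
bounds = [(0, None), (None, None), (0, None)]

res = minimize(objective, x0=np.zeros(3), bounds=bounds)
print(res.x)
```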
The Trick
For every coefficient where you want to relax the non-negativity constraint, simply append the negative of the corresponding feature vector as an additional column. Let w1 and w2 be the coefficients of the original feature and its negation, respectively. Since both w1 and w2 are constrained to be non-negative, their difference can take any sign, so the final coefficient we report is w1 - w2.
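Here is a minimal sketch of the trick with scikit-learn's Lasso (the data and alpha are again made up for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

# Relax the constraint on feature 1 by appending its negation as column 3.
X_aug = np.column_stack([X, -X[:, 1]])

model = Lasso(alpha=0.01, positive=True)
model.fit(X_aug, y)

w1, w2 = model.coef_[1], model.coef_[3]
print(w1 - w2)  # the effective coefficient, which can now be negative
```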