Scikit-Learn
named MultiTaskLasso, trained with a mixed L1/L2-norm regulariser, which estimates sparse coefficients for multiple regression problems jointly. Here the response y is a 2D array of shape (n_samples, n_tasks).
The parameters and attributes of MultiTaskLasso are similar to those of Lasso. The only difference is in the alpha parameter: in Lasso, alpha is a constant that multiplies the L1 norm, whereas in MultiTaskLasso it is a constant that multiplies the mixed L1/L2 term. And, in contrast to Lasso, MultiTaskLasso does not have a precompute parameter.
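As a minimal sketch (not part of the book's example), the mixed L1/L2 term can be written in NumPy as the sum of the Euclidean norms of the rows of the coefficient matrix W, which is why a feature is either kept or zeroed out for all tasks at once:

```python
import numpy as np

# Sketch of the mixed L1/L2 ("L21") term: the sum of the Euclidean norms of
# the rows of the coefficient matrix W (shape: n_features x n_tasks).
# W itself is made-up illustration data, not from the example below.
W = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])

l21_norm = np.sum(np.sqrt(np.sum(W ** 2, axis=1)))
print(l21_norm)  # 5.0 + 0.0 + 1.0 = 6.0
```

Penalising this sum drives entire rows of W to zero, so a feature is dropped jointly for every task rather than task by task.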
Implementation Example
The following Python script uses the MultiTaskLasso linear model, which in turn uses coordinate descent as the algorithm to fit the coefficients:
from sklearn import linear_model
MTLReg = linear_model.MultiTaskLasso(alpha=0.5)
MTLReg.fit([[0, 0], [1, 1], [2, 2]], [[0, 0], [1, 1], [2, 2]])
Output
MultiTaskLasso(alpha=0.5, copy_X=True, fit_intercept=True, max_iter=1000,
normalize=False, random_state=None, selection='cyclic', tol=0.0001,
warm_start=False)
Now, once fitted, the model can predict new values as follows:
MTLReg.predict([[0,1]])
Output
array([[0.53033009, 0.53033009]])
For the above example, we can get the weight vector with the help of the following Python script:
MTLReg.coef_
Output
array([[0.46966991, 0. ],
[0.46966991, 0. ]])
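The zero column in coef_ illustrates the point of the mixed penalty: features are selected jointly for all tasks. A small sketch of this (assuming scikit-learn and NumPy are available; the data here is synthetic and not from the book's example):

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

# Synthetic data (made up for illustration): only the first two of five
# features are informative, and they are shared by both tasks.
rng = np.random.RandomState(0)
X = rng.randn(50, 5)
W = np.zeros((5, 2))
W[:2] = rng.randn(2, 2)
Y = X @ W + 0.01 * rng.randn(50, 2)

mtl = MultiTaskLasso(alpha=0.1).fit(X, Y)
# coef_ has shape (n_tasks, n_features); every task (row) shares the same
# set of non-zero features:
print(np.all((mtl.coef_ != 0) == (mtl.coef_[0] != 0)))  # True
```

Fitting a separate Lasso per task would give no such guarantee: each task could select a different subset of features.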
Similarly, we can get the value of the intercept with the help of the following Python script:
MTLReg.intercept_
Output
array([0.53033009, 0.53033009])
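As a sanity check in plain NumPy, the prediction shown earlier can be reconstructed from the printed coef_ and intercept_ values, since predict computes x @ coef_.T + intercept_:

```python
import numpy as np

# coef_ and intercept_ values as printed above
coef = np.array([[0.46966991, 0.0],
                 [0.46966991, 0.0]])          # shape (n_tasks, n_features)
intercept = np.array([0.53033009, 0.53033009])

x = np.array([[0.0, 1.0]])
manual = x @ coef.T + intercept
print(manual)  # [[0.53033009 0.53033009]], matching predict([[0, 1]])
```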
We can get the total number of iterations performed to reach the specified tolerance with the help of the following Python script:
MTLReg.n_iter_
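The n_iter_ attribute pairs with the tol parameter: coordinate descent stops once the updates fall below the tolerance. As a small sketch (assuming scikit-learn is installed; alpha and the data mirror the example above), a tighter tol needs at least as many iterations as a looser one:

```python
from sklearn.linear_model import MultiTaskLasso

X = [[0, 0], [1, 1], [2, 2]]
Y = [[0, 0], [1, 1], [2, 2]]

# Same problem, two tolerances: the tighter one cannot finish earlier.
loose = MultiTaskLasso(alpha=0.5, tol=1e-2).fit(X, Y)
tight = MultiTaskLasso(alpha=0.5, tol=1e-8).fit(X, Y)
print(loose.n_iter_, tight.n_iter_)
```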