lbfgs
: For multiclass problems, it handles
multinomial loss. It supports only the L2
penalty.
saga
: It is a good choice for large datasets.
For multiclass problems, it also handles
multinomial loss. Along with the L1 penalty, it
also supports the ‘elasticnet’ penalty.
sag
: It is also a good choice for large datasets. For
multiclass problems, it also handles
multinomial loss.
max_iter
: int, optional, default = 100
As the name suggests, it represents the maximum
number of iterations taken for the solvers to converge.
multi_class
: str, {‘ovr’, ‘multinomial’, ‘auto’}, optional, default = ‘ovr’
ovr
: For this option, a binary problem is fit
for each label.
multinomial
: For this option, the loss
minimized is the multinomial loss fit across
the entire probability distribution. We can’t
use this option if solver = ‘liblinear’.
auto
: This option will select ‘ovr’ if solver =
‘liblinear’ or data is binary, else it will
choose ‘multinomial’.
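The ‘auto’ behaviour described above can be observed without setting multi_class explicitly, which also sidesteps the deprecation of this parameter in recent scikit-learn releases. A minimal sketch, assuming the iris dataset purely for illustration:

```python
# With multi_class='auto' (the modern default), liblinear falls back to
# one-vs-rest, while lbfgs uses the multinomial loss on multiclass data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

ovr_like = LogisticRegression(solver="liblinear").fit(X, y)       # -> ovr
multinom = LogisticRegression(solver="lbfgs", max_iter=1000).fit(X, y)

# Both expose per-class probabilities, but the ovr ones come from
# independent binary fits normalized afterwards.
print(ovr_like.predict_proba(X[:1]))
print(multinom.predict_proba(X[:1]))
```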
verbose
: int, optional, default = 0
By default, the value of this parameter is 0, but for the
liblinear and lbfgs solvers we should set verbose to
a positive number to get progress messages.
warm_start
: bool, optional, default = False
With this parameter set to True, we can reuse the
solution of the previous call to fit as initialization.
If we choose the default, i.e. False, it will erase the
previous solution.
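The warm_start behaviour can be sketched as below; the dataset and iteration budget are assumptions for illustration.

```python
# warm_start=True reuses the previous fit's coefficients as the
# starting point, so a repeated fit on the same data needs fewer
# solver iterations.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

clf = LogisticRegression(warm_start=True, max_iter=200)
clf.fit(X, y)
first = clf.n_iter_[0]
clf.fit(X, y)            # restarts from the previous solution
second = clf.n_iter_[0]
print(first, second)     # the second fit typically converges sooner
```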
n_jobs
: int or None, optional, default = None
If multi_class = ‘ovr’, this parameter represents
the number of CPU cores used when parallelizing
over classes. It is ignored when solver = ‘liblinear’.
l1_ratio
: float or None, optional, default = None
It is used when penalty = ‘elasticnet’. It is
basically the Elastic-Net mixing parameter, with
0 <= l1_ratio <= 1.
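Putting penalty, solver, and l1_ratio together: elasticnet requires the saga solver, and l1_ratio blends the two penalties (0 is pure L2, 1 is pure L1). A minimal sketch with an assumed synthetic dataset:

```python
# l1_ratio only applies with penalty='elasticnet', which in turn is
# supported only by the saga solver.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=5, random_state=0)

clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, max_iter=5000)
clf.fit(X, y)
# The L1 component can drive some coefficients to exactly zero,
# giving a sparser model than pure L2.
print((clf.coef_ == 0).sum(), "coefficients are exactly zero")
```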