Structured SVM solver using subgradient descent.
Implements a margin-rescaled structural SVM with an l1 slack penalty. By default, a constant learning rate is used. It is also possible to use the adaptive, per-coordinate learning rate computed by AdaGrad.
This class implements online subgradient descent. If n_jobs != 1, small batches of size n_jobs are used to exploit parallel inference. If inference is fast, use n_jobs=1.
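To make the update concrete, here is a minimal sketch of one margin-rescaled subgradient step with the l1 slack penalty, including the optional AdaGrad scaling. The method names on model (joint_feature, loss_augmented_inference, loss) and the adagrad_sq bookkeeping are assumptions chosen for illustration, not necessarily the internals of this class.

    import numpy as np

    def subgradient_step(model, w, x, y, C, learning_rate, adagrad_sq=None):
        # One online step of margin-rescaled subgradient descent (illustrative
        # sketch; the methods assumed on `model` may differ from the real API).

        # Most violating labeling under the current weights (margin rescaling).
        y_hat = model.loss_augmented_inference(x, y, w)

        psi_true = model.joint_feature(x, y)
        psi_hat = model.joint_feature(x, y_hat)

        # l1 slack for this example: loss(y, y_hat) + w.psi(x, y_hat) - w.psi(x, y)
        slack = model.loss(y, y_hat) + np.dot(w, psi_hat) - np.dot(w, psi_true)

        # Subgradient of the regularized objective at this example.
        grad = w.copy()
        if slack > 0:
            grad += C * (psi_hat - psi_true)

        if adagrad_sq is None:
            # Default: constant learning rate.
            return w - learning_rate * grad, None

        # AdaGrad: per-coordinate step size from accumulated squared gradients.
        adagrad_sq = adagrad_sq + grad ** 2
        return w - learning_rate * grad / (np.sqrt(adagrad_sq) + 1e-8), adagrad_sq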
Parameters

model : StructuredModel
Attributes
Methods
fit(X, Y[, constraints, warm_start]) | Learn parameters using subgradient descent.
get_params([deep]) | Get parameters for the estimator.
predict(X) | Predict output on examples in X.
score(X, Y) | Compute score as 1 - loss over whole data set.
set_params(**params) | Set the parameters of the estimator.
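For orientation, the sketch below chains these methods together. The package and class names (pystruct.models.ChainCRF, pystruct.learners.SubgradientSSVM) are an assumption about where this learner lives, and the toy data is purely illustrative.

    import numpy as np
    from pystruct.models import ChainCRF            # assumed StructuredModel implementation
    from pystruct.learners import SubgradientSSVM   # assumed home of this learner

    # Toy chain-labeling data: 20 sequences of length 5, 10 features, 3 states.
    rng = np.random.RandomState(0)
    X = [rng.rand(5, 10) for _ in range(20)]
    Y = [rng.randint(0, 3, size=5) for _ in range(20)]

    ssvm = SubgradientSSVM(model=ChainCRF(), C=1.0, max_iter=50)
    ssvm.fit(X, Y)                      # learn parameters by subgradient descent
    Y_pred = ssvm.predict(X)            # one label array per input sequence
    print("score:", ssvm.score(X, Y))   # 1 - loss over the whole data set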
fit(X, Y, constraints=None, warm_start=False)

Learn parameters using subgradient descent.

Parameters

X : iterable
    Training instances. Contains the structured input objects.
Y : iterable
    Training labels corresponding to the instances in X.
constraints : None
warm_start : boolean, default=False
    If True, continue optimization from the previously learned parameters instead of reinitializing them.
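The warm_start flag can be used to resume training. Continuing from the toy setup above (ssvm, X, Y), a second call would pick up from the current parameters rather than reinitializing them.

    ssvm.fit(X, Y)                   # first pass over the data
    ssvm.set_params(max_iter=100)    # e.g. allow more iterations (hypothetical tweak)
    ssvm.fit(X, Y, warm_start=True)  # resume from the previously learned parameters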
get_params([deep])

Get parameters for the estimator.

Parameters

deep : boolean, optional

Returns

params : mapping of string to any
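Following the scikit-learn convention this estimator appears to mirror, get_params returns the constructor arguments as a plain dict; the key names in the comment below are assumptions, not an exhaustive list.

    params = ssvm.get_params(deep=True)   # mapping of string to any
    print(sorted(params))                 # e.g. ['C', 'max_iter', 'model', ...] (illustrative)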
predict(X)

Predict output on examples in X.

Parameters

X : iterable
    Training instances. Contains the structured input objects.

Returns

Y_pred : list
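Continuing the toy example, a quick check of predict's input/output contract:

    Y_pred = ssvm.predict(X)
    assert len(Y_pred) == len(X)   # one structured prediction per input
    print(Y_pred[0])               # e.g. the per-node labels for the first sequence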
score(X, Y)

Compute score as 1 - loss over whole data set.

Returns the average accuracy (in terms of model.loss) over X and Y.

Parameters

X : iterable
Y : iterable

Returns

score : float
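Since score is defined via model.loss, it can be sketched by hand; the normalization below (a simple mean over examples) is an assumption and may not match the internal averaging exactly.

    Y_pred = ssvm.predict(X)
    losses = [ssvm.model.loss(y, y_hat) for y, y_hat in zip(Y, Y_pred)]
    manual = 1.0 - np.mean(losses)        # rough reconstruction of "1 - loss"
    print(manual, ssvm.score(X, Y))       # the built-in score may normalize differently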
set_params(**params)

Set the parameters of the estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Returns

self
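A short sketch of both forms; the nested parameter name model__inference_method is hypothetical and only illustrates the <component>__<parameter> syntax.

    ssvm.set_params(C=0.1)   # top-level parameter of the estimator itself

    # Nested form: <component>__<parameter> reaches into a sub-object.
    # 'model__inference_method' is a hypothetical nested parameter name.
    ssvm.set_params(model__inference_method="max-product")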