pystruct.learners.OneSlackSSVM

class pystruct.learners.OneSlackSSVM(model, max_iter=100, C=1.0, check_constraints=True, verbose=0, positive_constraint=None, n_jobs=1, break_on_bad=False, show_loss_every=0, tol=1e-05, inference_cache=0, inactive_threshold=1e-10, inactive_window=50, logger=None, cache_tol='auto', switch_to=None)

Structured SVM solver for the 1-slack QP with l1 slack penalty.

Implements margin rescaled structural SVM using the 1-slack formulation and cutting plane method, solved using CVXOPT. The optimization is restarted in each iteration.
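
For reference, a sketch of the 1-slack margin-rescaling QP that the cutting plane method solves (standard formulation following Joachims et al.; pystruct's internal scaling of the objective may differ by constant factors):

\min_{w,\,\xi \ge 0} \ \tfrac{1}{2}\lVert w\rVert^2 + C\,\xi
\quad \text{s.t.} \quad \forall\,(\bar{y}_1,\dots,\bar{y}_n):\;
\frac{1}{n}\, w^\top \sum_{i=1}^{n} \bigl[\psi(x_i, y_i) - \psi(x_i, \bar{y}_i)\bigr]
\;\ge\; \frac{1}{n} \sum_{i=1}^{n} \Delta(y_i, \bar{y}_i) - \xi

Here \psi is the model's joint feature map and \Delta is the task loss implemented by model.loss; a single slack variable \xi couples all training examples.
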
Parameters :

model : StructuredModel

Object containing the model structure. Has to implement loss, inference and loss_augmented_inference.

max_iter : int

Maximum number of passes over the dataset to find constraints.

C : float, default=1.0

Regularization parameter.

check_constraints : bool

Whether to check if the new “most violated constraint” is more violated than previous constraints. Helpful for stopping and debugging, but costly.

verbose : int

Verbosity

positive_constraint : list of ints

Indices of parameters that are constrained to be positive.

break_on_bad : bool, default=False

Whether to break (start debug mode) when inference was approximate.

n_jobs : int, default=1

Number of parallel jobs for inference. -1 means as many jobs as CPUs.

show_loss_every : int, default=0

Controls how often the Hamming loss is computed (for monitoring purposes). Zero means never; otherwise it is computed every show_loss_every'th epoch.

tol : float, default=1e-5

Convergence tolerance. If the dual objective decreases by less than tol, learning is stopped. The default corresponds to ignoring the behavior of the dual objective and stopping only when no more constraints can be found.

inference_cache : int, default=0

How many results of loss_augmented_inference to cache per sample. If > 0, the most violating of the cached results is used to construct a global constraint, and inference is only run again if this constraint is not violated. This parameter poses a memory / computation tradeoff: storing more constraints might exhaust RAM. Using inference_cache > 0 is only advisable if computation time is dominated by inference.

cache_tol : float or 'auto', default='auto'

Tolerance for rejecting a constraint from the cache (and running inference instead). If 'auto', tol is used. Higher values might lead to faster learning.

inactive_threshold : float, default=1e-10

Threshold for dual variable of a constraint to be considered inactive.

inactive_window : int, default=50

Window for measuring inactivity. If a constraint is inactive for inactive_window iterations, it will be pruned from the QP. If set to 0, no constraints will be removed.

switch_to : None or string, default=None

Switch to the given inference method if the previous method does not find any more constraints.
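
A minimal construction sketch, assuming pystruct and its ChainCRF model are available; the hyperparameter values are illustrative only:

>>> from pystruct.models import ChainCRF
>>> from pystruct.learners import OneSlackSSVM
>>> crf = ChainCRF()                      # chain-structured CRF over label sequences
>>> ssvm = OneSlackSSVM(model=crf, C=0.1, tol=0.001, max_iter=200,
...                     inference_cache=50, verbose=1)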


Methods

fit(X, Y[, constraints, warm_start]) Learn parameters using cutting plane method.
get_params([deep]) Get parameters for the estimator
predict(X) Predict output on examples in X.
prune_constraints(constraints, a)
score(X, Y) Compute score as 1 - loss over whole data set.
set_params(**params) Set the parameters of the estimator.
__init__(model, max_iter=100, C=1.0, check_constraints=True, verbose=0, positive_constraint=None, n_jobs=1, break_on_bad=False, show_loss_every=0, tol=1e-05, inference_cache=0, inactive_threshold=1e-10, inactive_window=50, logger=None, cache_tol='auto', switch_to=None)
fit(X, Y, constraints=None, warm_start=False)

Learn parameters using cutting plane method.

Parameters :

X : iterable

Training instances. Contains the structured input objects. No requirement on the particular form of entries of X is made.

Y : iterable

Training labels. Contains the structured labels for inputs in X. Needs to have the same length as X.

constraints : ignored

warm_start : bool, default=False

Whether to warm-start from a previous fit.
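
A hedged sketch of fitting and warm-starting, reusing the ssvm learner constructed above; X_train and y_train are placeholder names for data in whatever format the chosen model expects (for ChainCRF, a list of (n_nodes, n_features) arrays and a list of integer label arrays):

>>> ssvm.fit(X_train, y_train)                     # run the cutting plane loop from scratch
>>> ssvm.fit(X_train, y_train, warm_start=True)    # resume from the constraints found so far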

get_params(deep=True)

Get parameters for the estimator

Parameters :

deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns :

params : mapping of string to any

Parameter names mapped to their values.
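
For illustration, assuming the ssvm instance constructed earlier; get_params returns the constructor arguments, so it can be used to replicate a configuration:

>>> params = ssvm.get_params()                # dict of constructor arguments
>>> another_ssvm = OneSlackSSVM(**params)     # same model and hyperparameters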

predict(X)

Predict output on examples in X.

Parameters :

X : iterable

Training instances. Contains the structured input objects.

Returns :

Y_pred : list

List of inference results for X using the learned parameters.
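
Sketch, assuming a fitted ssvm and held-out inputs X_test (placeholder name) in the same format as the training inputs:

>>> y_pred = ssvm.predict(X_test)     # one inference result per input
>>> len(y_pred) == len(X_test)
True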

score(X, Y)

Compute score as 1 - loss over whole data set.

Returns the average accuracy (in terms of model.loss) over X and Y.

Parameters :

X : iterable

Evaluation data.

Y : iterable

True labels.

Returns :

score : float

Average of 1 - loss over the given examples.
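
Sketch, assuming a fitted ssvm and held-out data X_test, y_test (placeholder names):

>>> acc = ssvm.score(X_test, y_test)  # average of 1 - model.loss over the given examples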

set_params(**params)

Set the parameters of the estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns :

self
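
Sketch, updating hyperparameters on the ssvm instance from the earlier examples before refitting; set_params returns the estimator itself:

>>> ssvm = ssvm.set_params(C=0.01, tol=0.0001)
>>> ssvm.fit(X_train, y_train)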