Formulates a linear multiclass SVM in the Crammer-Singer style within the CRF framework.
Inputs x are simply feature arrays; labels y are integers from 0 to n_classes - 1.
Parameters:
    n_features : int
    n_classes : int, default=2
    class_weight : None or array-like
    rescale_C : bool, default=False
Notes
No bias / intercept is learned. It is recommended to add a constant one-feature to the data.
It is also highly recommended to use n_jobs=1 in the learner when using this model: trying to parallelize the trivial inference slows inference down considerably.
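Since no intercept is learned, the constant-feature trick mentioned above can be applied before training. A minimal sketch (the data here is a hypothetical toy array, not from the library):

```python
import numpy as np

# Hypothetical toy data: 5 samples with 3 features each.
X = np.random.rand(5, 3)

# Append a constant 1 feature to every sample so the model can
# absorb a per-class bias into the regular weight vector.
X_bias = np.hstack([X, np.ones((X.shape[0], 1))])
```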
Methods:
    batch_inference(X, w[, relaxed])
    batch_loss(Y, Y_hat)
    batch_loss_augmented_inference(X, Y, w[, ...])
    batch_psi(X, Y[, Y_true])
    continuous_loss(y, y_hat)
    inference(x, w[, relaxed, return_energy]) : Inference for x using parameters w.
    loss(y, y_hat)
    loss_augmented_inference(x, y, w[, relaxed, ...]) : Loss-augmented inference for x and y using parameters w.
    max_loss(y)
    psi(x, y[, y_true]) : Compute joint feature vector of x and y.
Inference for x using parameters w.
Finds argmax_y np.dot(w, psi(x, y)), i.e. the best possible prediction.
For an unstructured multiclass model such as this one, this can easily be done by enumerating all possible y.
Parameters:
    x : ndarray, shape (n_features,)
    w : ndarray, shape (size_psi,)
    relaxed : ignored
Returns:
    y_pred : int
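The enumeration described above can be sketched as follows. This assumes w is the concatenation of one weight block per class (so that np.dot(w, psi(x, y)) reduces to a per-class dot product); it is an illustrative sketch, not the library's implementation:

```python
import numpy as np

def inference_sketch(x, w, n_classes):
    """Enumerate all classes and return the highest-scoring one.

    Assumes w concatenates one (n_features,) weight block per class,
    so psi(x, y) places x in block y and zeros elsewhere.
    """
    n_features = x.shape[0]
    W = w.reshape(n_classes, n_features)  # one row of weights per class
    scores = W.dot(x)                     # np.dot(w, psi(x, y)) for every y
    return int(np.argmax(scores))
```

For example, with x = [1, 2] and three class-weight rows [1, 0], [0, 1], [-1, -1], the scores are [1, 2, -3] and class 1 is predicted.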
Loss-augmented inference for x and y using parameters w.
Maximizes over y_hat: np.dot(psi(x, y_hat), w) + loss(y, y_hat)
Parameters:
    x : ndarray, shape (n_features,)
    y : int
    w : ndarray, shape (size_psi,)
Returns:
    y_hat : int
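Loss-augmented inference differs from plain inference only in that each candidate's score is inflated by its loss against the true label. A minimal sketch under the same class-blocked layout of w assumed above, using a zero-one loss for illustration:

```python
import numpy as np

def loss_augmented_inference_sketch(x, y, w, n_classes):
    """Return argmax over y_hat of score(y_hat) + zero-one loss(y, y_hat)."""
    W = w.reshape(n_classes, x.shape[0])
    scores = W.dot(x)
    loss = np.ones(n_classes)   # zero-one loss: 1 for every wrong label...
    loss[y] = 0.0               # ...and 0 for the true label
    return int(np.argmax(scores + loss))
```

With scores [2, 1.5, 0] and true label y = 0, plain inference returns class 0, but the augmented scores [2, 2.5, 1] make class 1 the most violated candidate.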
Compute joint feature vector of x and y.
Feature representation psi, such that the energy of the configuration (x, y) under a weight vector w is given by np.dot(w, psi(x, y)).
Parameters:
    x : ndarray, shape (n_features,)
    y : int
    y_true : int
Returns:
    p : ndarray, shape (size_psi,)
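A common construction of this joint feature vector for unstructured multiclass models, sketched here as an assumption rather than the library's exact code, places x in the block corresponding to class y and zeros everywhere else, so size_psi = n_classes * n_features:

```python
import numpy as np

def psi_sketch(x, y, n_classes):
    """Joint feature vector: x in the block for class y, zeros elsewhere."""
    n_features = x.shape[0]
    p = np.zeros(n_classes * n_features)
    p[y * n_features:(y + 1) * n_features] = x
    return p
```

Under this layout, np.dot(w, psi_sketch(x, y, n_classes)) equals the dot product of class y's weight block with x.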