KMeansSMOTE

class imbens.sampler.KMeansSMOTE(*, sampling_strategy='auto', random_state=None, k_neighbors=2, n_jobs=None, kmeans_estimator=None, cluster_balance_threshold='auto', density_exponent='auto')

Apply KMeans clustering before over-sampling with SMOTE.

This is an implementation of the algorithm described in [1].

Read more in the User Guide.

Parameters:
sampling_strategy : float, str, dict or callable, default='auto'

Sampling information to resample the data set (a usage sketch follows the list below).

  • When float, it corresponds to the desired ratio of the number of samples in the minority class over the number of samples in the majority class after resampling. Therefore, the ratio is expressed as \(\alpha_{os} = N_{rm} / N_{M}\) where \(N_{rm}\) is the number of samples in the minority class after resampling and \(N_{M}\) is the number of samples in the majority class.

    Warning

    float is only available for binary classification. An error is raised for multi-class classification.

  • When str, specify the class targeted by the resampling. The number of samples in the different classes will be equalized. Possible choices are:

    'minority': resample only the minority class;

    'not minority': resample all classes but the minority class;

    'not majority': resample all classes but the majority class;

    'all': resample all classes;

    'auto': equivalent to 'not majority'.

  • When dict, the keys correspond to the targeted classes. The values correspond to the desired number of samples for each targeted class.

  • When callable, a function taking y and returning a dict. The keys correspond to the targeted classes. The values correspond to the desired number of samples for each class.
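
The illustrative sketch below (not part of the original documentation) shows the float, dict, and callable forms on a toy binary problem. It assumes well-separated blobs, similar to the Examples section further down, so that KMeansSMOTE can find valid clusters.

>>> from collections import Counter
>>> from sklearn.datasets import make_blobs
>>> from imbens.sampler import KMeansSMOTE
>>> X, y = make_blobs([100, 800, 100], centers=[(-10, 0), (0, 0), (10, 0)],
...                   random_state=0)
>>> y = (y == 1).astype(int)  # class 1: 800 samples (majority), class 0: 200 samples
>>> # float: grow the minority class to 50% of the majority class size
>>> X_res, y_res = KMeansSMOTE(sampling_strategy=0.5, random_state=0).fit_resample(X, y)
>>> # dict: request an explicit total of 400 samples for class 0
>>> X_res, y_res = KMeansSMOTE(sampling_strategy={0: 400}, random_state=0).fit_resample(X, y)
>>> # callable: compute the target counts from y itself
>>> def half_of_majority(y):
...     counts = Counter(y)
...     return {0: counts[1] // 2}
>>> sampler = KMeansSMOTE(sampling_strategy=half_of_majority, random_state=0)
>>> X_res, y_res = sampler.fit_resample(X, y)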

random_state : int, RandomState instance, default=None

Control the randomization of the algorithm.

  • If int, random_state is the seed used by the random number generator;

  • If RandomState instance, random_state is the random number generator;

  • If None, the random number generator is the RandomState instance used by np.random.

k_neighbors : int or object, default=2

If int, the number of nearest neighbours used to construct synthetic samples. If object, an estimator that inherits from KNeighborsMixin that will be used to find the k_neighbors.
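
As an illustration (not taken from the original documentation), k_neighbors can be given either as an int or as a pre-configured nearest-neighbours estimator:

>>> from sklearn.neighbors import NearestNeighbors
>>> from imbens.sampler import KMeansSMOTE
>>> # int: interpolate between each sample and its 5 nearest minority neighbours
>>> sampler = KMeansSMOTE(k_neighbors=5, random_state=0)
>>> # object: a KNeighborsMixin estimator gives full control over the search
>>> sampler = KMeansSMOTE(k_neighbors=NearestNeighbors(n_neighbors=6), random_state=0)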

n_jobs : int, default=None

Number of CPU cores used during the nearest-neighbours search. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.

kmeans_estimator : int or object, default=None

A KMeans instance or the number of clusters to use. By default, a MiniBatchKMeans is used, which tends to scale better with a large number of samples.
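
A hypothetical configuration (illustration only) showing both accepted forms:

>>> from sklearn.cluster import MiniBatchKMeans
>>> from imbens.sampler import KMeansSMOTE
>>> # int: shorthand for the number of clusters
>>> sampler = KMeansSMOTE(kmeans_estimator=10, random_state=0)
>>> # object: a fully configured clustering estimator
>>> sampler = KMeansSMOTE(
...     kmeans_estimator=MiniBatchKMeans(n_clusters=10, random_state=0),
...     random_state=0,
... )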

cluster_balance_threshold : "auto" or float, default="auto"

The threshold at which a cluster is considered balanced, i.e. where samples of the class selected for SMOTE will be oversampled. If "auto", the threshold is derived from the class ratio; it can also be set manually as a float.

density_exponent : "auto" or float, default="auto"

This exponent is used to determine the density of a cluster. Leaving it at "auto" uses an exponent based on the number of features.
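
Both filtering knobs can also be set explicitly. The following sketch is an illustration only: it accepts clusters in which at least 10% of the samples belong to the targeted class, and fixes the density exponent instead of deriving it from the number of features.

>>> from imbens.sampler import KMeansSMOTE
>>> sampler = KMeansSMOTE(
...     kmeans_estimator=10,
...     cluster_balance_threshold=0.1,  # use clusters with >= 10% targeted-class samples
...     density_exponent=2.0,           # fixed exponent instead of "auto"
...     random_state=0,
... )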

Attributes:
kmeans_estimator_ : estimator

The fitted clustering estimator used before applying SMOTE.

nn_k_ : estimator

The fitted k-NN estimator used in SMOTE.

cluster_balance_threshold_ : float

The threshold used during fit for calling a cluster balanced.
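
After a successful fit_resample, the fitted sub-estimators and the resolved threshold can be inspected. The snippet below is an illustrative sketch that reuses the blob setup from the Examples section:

>>> from sklearn.datasets import make_blobs
>>> from imbens.sampler import KMeansSMOTE
>>> X, y = make_blobs([100, 800, 100], centers=[(-10, 0), (0, 0), (10, 0)],
...                   random_state=0)
>>> y = (y == 1).astype(int)
>>> sampler = KMeansSMOTE(random_state=42)
>>> X_res, y_res = sampler.fit_resample(X, y)
>>> kmeans = sampler.kmeans_estimator_              # fitted clustering estimator
>>> knn = sampler.nn_k_                             # fitted k-NN estimator used by SMOTE
>>> threshold = sampler.cluster_balance_threshold_  # threshold resolved during fit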

See also

SMOTE

Over-sample using SMOTE.

SVMSMOTE

Over-sample using SVM-SMOTE variant.

BorderlineSMOTE

Over-sample using Borderline-SMOTE variant.

ADASYN

Over-sample using ADASYN.

Notes

See the original paper [1] for more details.

Supports multi-class resampling. A one-vs.-rest scheme is used.
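
As a hedged illustration of the multi-class behaviour (the data here is an assumption, not taken from the original documentation), each non-majority class is resampled in turn against the rest:

>>> from collections import Counter
>>> from sklearn.datasets import make_blobs
>>> from imbens.sampler import KMeansSMOTE
>>> # Three well-separated classes with sizes 100 / 800 / 100
>>> X, y = make_blobs([100, 800, 100], centers=[(-10, 0), (0, 0), (10, 0)],
...                   random_state=0)
>>> sampler = KMeansSMOTE(random_state=0)
>>> X_res, y_res = sampler.fit_resample(X, y)
>>> # With the default 'auto' strategy, classes 0 and 2 are each oversampled
>>> # (one class at a time) towards the majority class size.
>>> counts = Counter(y_res)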

References

[1]

Felix Last, Georgios Douzas, Fernando Bacao, “Oversampling for Imbalanced Learning Based on K-Means and SMOTE” https://arxiv.org/abs/1711.00837

Examples

>>> import numpy as np
>>> from imbens.sampler import KMeansSMOTE
>>> from sklearn.datasets import make_blobs
>>> blobs = [100, 800, 100]
>>> X, y = make_blobs(blobs, centers=[(-10, 0), (0, 0), (10, 0)])
>>> # Add a single 0 sample in the middle blob
>>> X = np.concatenate([X, [[0, 0]]])
>>> y = np.append(y, 0)
>>> # Make this a binary classification problem
>>> y = y == 1
>>> sm = KMeansSMOTE(random_state=42)
>>> X_res, y_res = sm.fit_resample(X, y)
>>> # Find the number of new samples in the middle blob
>>> n_res_in_middle = ((X_res[:, 0] > -5) & (X_res[:, 0] < 5)).sum()
>>> print("Samples in the middle blob: %s" % n_res_in_middle)
Samples in the middle blob: 801
>>> print("Middle blob unchanged: %s" % (n_res_in_middle == blobs[1] + 1))
Middle blob unchanged: True
>>> print("More 0 samples: %s" % ((y_res == 0).sum() > (y == 0).sum()))
More 0 samples: True

Methods

fit(X, y)

Check inputs and statistics of the sampler.

fit_resample(X, y, *[, sample_weight])

Resample the dataset.

get_params([deep])

Get parameters for this estimator.

set_params(**params)

Set the parameters of this estimator.

fit(X, y)

Check inputs and statistics of the sampler.

You should use fit_resample in all cases.

Parameters:
X : {array-like, dataframe, sparse matrix} of shape (n_samples, n_features)

Data array.

y : array-like of shape (n_samples,)

Target array.

Returns:
self : object

Return the instance itself.

fit_resample(X, y, *, sample_weight=None, **kwargs)

Resample the dataset.

Parameters:
X : {array-like, dataframe, sparse matrix} of shape (n_samples, n_features)

Matrix containing the data which have to be sampled.

y : array-like of shape (n_samples,)

Corresponding label for each sample in X.

sample_weight : array-like of shape (n_samples,), default=None

Corresponding weight for each sample in X.

  • If None, perform normal resampling and return (X_resampled, y_resampled).

  • If array-like, the given sample_weight will be resampled along with X and y, and the resampled sample weights will be added to returns. The function will return (X_resampled, y_resampled, sample_weight_resampled).

Returns:
X_resampled : {array-like, dataframe, sparse matrix} of shape (n_samples_new, n_features)

The array containing the resampled data.

y_resampled : array-like of shape (n_samples_new,)

The corresponding label of X_resampled.

sample_weight_resampled : array-like of shape (n_samples_new,)

The corresponding weight of X_resampled. Only returned if the input sample_weight is not None.
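
A sketch (illustrative only, reusing the blob setup from the Examples section) of the sample_weight round trip:

>>> import numpy as np
>>> from sklearn.datasets import make_blobs
>>> from imbens.sampler import KMeansSMOTE
>>> X, y = make_blobs([100, 800, 100], centers=[(-10, 0), (0, 0), (10, 0)],
...                   random_state=0)
>>> y = (y == 1).astype(int)
>>> sampler = KMeansSMOTE(random_state=42)
>>> # Without weights: two return values
>>> X_res, y_res = sampler.fit_resample(X, y)
>>> # With weights: the resampled weights are returned as a third array
>>> sample_weight = np.ones(y.shape[0])
>>> X_res, y_res, w_res = sampler.fit_resample(X, y, sample_weight=sample_weight)
>>> len(w_res) == len(y_res)
True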

get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.

set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.
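
For instance (an illustrative sketch, not taken from the original documentation), a nested parameter of a kmeans_estimator passed as an object can be updated with the <component>__<parameter> syntax:

>>> from sklearn.cluster import MiniBatchKMeans
>>> from imbens.sampler import KMeansSMOTE
>>> sampler = KMeansSMOTE(kmeans_estimator=MiniBatchKMeans(n_clusters=8), k_neighbors=2)
>>> sampler.get_params()["k_neighbors"]
2
>>> # Update a top-level and a nested parameter in a single call
>>> sampler = sampler.set_params(k_neighbors=5, kmeans_estimator__n_clusters=12)
>>> sampler.get_params()["kmeans_estimator"].n_clusters
12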