Scikit Learn - Anomaly Detection
Here, we will learn what anomaly detection in Sklearn is and how it is used to identify anomalous data points.
Anomaly detection is a technique used to identify data points in a dataset that do not fit well with the rest of the data. It has many applications in business such as fraud detection, intrusion detection, system health monitoring, surveillance, and predictive maintenance. Anomalies, which are also called outliers, can be divided into the following three categories −
Point anomalies − It occurs when an individual data instance is considered anomalous w.r.t. the rest of the data.
Contextual anomalies − Such kind of anomaly is context specific. It occurs if a data instance is anomalous in a specific context.
Collective anomalies − It occurs when a collection of related data instances is anomalous w.r.t. the entire dataset rather than individual values.
Methods
Two methods, namely outlier detection and novelty detection, can be used for anomaly detection. It is necessary to understand the distinction between them.
Outlier detection
The training data contains outliers, i.e. observations that are far from the rest of the data. That is why outlier detection estimators always try to fit the region containing the most concentrated training data while ignoring the deviant observations. It is also known as unsupervised anomaly detection.
Novelty detection
It is concerned with detecting an unobserved pattern in new observations which is not included in the training data. Here, the training data is not polluted by outliers. It is also known as semi-supervised anomaly detection.
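For instance, the LOF estimator discussed later in this chapter can be switched into novelty-detection mode. The following is only a minimal sketch with made-up one-dimensional data: the model is fit on clean training data and is then asked to judge new, unseen observations −
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

X_train = np.array([[0.0], [0.1], [0.2], [0.3]])   # assumed clean training data
novelty_clf = LocalOutlierFactor(n_neighbors=2, novelty=True)
novelty_clf.fit(X_train)

# In novelty-detection mode, predict() labels new observations:
# 1 for inliers and -1 for novelties.
print(novelty_clf.predict([[0.15], [5.0]]))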
Scikit-learn provides a set of ML tools that can be used for both outlier detection as well as novelty detection. These tools first learn from the data in an unsupervised way by using the fit() method as follows −
estimator.fit(X_train)
Now, the new observations would be sorted as inliers (labelled 1) or outliers (labelled -1) by using the predict() method as follows −
estimator.predict(X_test)
The estimator first computes a raw scoring function and the predict method then applies a threshold to that raw scoring function. We can access the raw scoring function with the help of the score_samples method and control the threshold with the contamination parameter.
We can also use the decision_function method, which defines outliers as negative values and inliers as non-negative values.
estimator.decision_function(X_test)
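As a rough illustration of this workflow (the data and the contamination value here are made up), the following sketch uses covariance.EllipticEnvelope, discussed in the next section, to show how score_samples, decision_function and predict relate −
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.RandomState(0)
X_train = rng.normal(size=(100, 2))           # mostly regular points
X_test = np.array([[0.0, 0.0], [6.0, 6.0]])   # one central point, one deviant point

est = EllipticEnvelope(contamination=0.1).fit(X_train)
raw_scores = est.score_samples(X_test)        # raw scoring function
decisions = est.decision_function(X_test)     # raw scores minus offset_
labels = est.predict(X_test)                  # 1 for inliers, -1 for outliers
print(raw_scores, decisions, labels)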
Sklearn algorithms for Outlier Detection
Let us begin by understanding what an elliptic envelope is.
Fitting an elliptic envelope
This algorithm assumes that regular data comes from a known distribution such as a Gaussian distribution. For outlier detection, Scikit-learn provides an object named covariance.EllipticEnvelope.
This object fits a robust covariance estimate to the data, and thus, fits an ellipse to the central data points. It ignores the points outside the central mode.
Parameters
The following table lists the parameters used by the sklearn.covariance.EllipticEnvelope method −
Sr.No | Parameter & Description |
---|---|
1 | store_precision − Boolean, optional, default = True. If True, the estimated precision is stored. |
2 | assume_centered − Boolean, optional, default = False. If set to False, the robust location and covariance are computed directly with the help of the FastMCD algorithm. On the other hand, if set to True, the support of the robust location and covariance estimates is computed. |
3 | support_fraction − float in (0., 1.), optional, default = None. This parameter tells the method what proportion of points is to be included in the support of the raw MCD estimate. |
4 | contamination − float in (0., 1.), optional, default = 0.1. It provides the proportion of the outliers in the data set. |
5 | random_state − int, RandomState instance or None, optional, default = None. This parameter represents the seed of the pseudo-random number generator used while shuffling the data. The options are as follows − int: random_state is the seed used by the random number generator. RandomState instance: random_state is the random number generator. None: the random number generator is the RandomState instance used by np.random. |
Attributes
The following table lists the attributes of the sklearn.covariance.EllipticEnvelope method −
Sr.No | Attributes & Description |
---|---|
1 | support_ − array-like, shape (n_samples,). It represents the mask of the observations used to compute robust estimates of location and shape. |
2 | location_ − array-like, shape (n_features,). It returns the estimated robust location. |
3 | covariance_ − array-like, shape (n_features, n_features). It returns the estimated robust covariance matrix. |
4 | precision_ − array-like, shape (n_features, n_features). It returns the estimated pseudo-inverse matrix. |
5 | offset_ − float. It is used to define the decision function from the raw scores: decision_function = score_samples - offset_. |
Implementation Example
import numpy as np
from sklearn.covariance import EllipticEnvelope

true_cov = np.array([[.5, .6], [.6, .4]])
X = np.random.RandomState(0).multivariate_normal(mean=[0, 0], cov=true_cov, size=500)
cov = EllipticEnvelope(random_state=0).fit(X)

# Now we can use the predict method. It will return 1 for an inlier and -1 for an outlier.
cov.predict([[0, 0], [2, 2]])
Output
array([ 1, -1])
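Continuing with the fitted cov object above, the robust estimates and the scoring methods described earlier can be inspected as follows (the exact values depend on the random sample drawn and are not shown here) −
print(cov.location_)                              # estimated robust location
print(cov.covariance_)                            # estimated robust covariance matrix
print(cov.decision_function([[0, 0], [2, 2]]))    # negative values indicate outliers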
Isolation Forest
In the case of a high-dimensional dataset, one efficient way to perform outlier detection is to use random forests. Scikit-learn provides the ensemble.IsolationForest method, which isolates the observations by randomly selecting a feature. Afterwards, it randomly selects a split value between the maximum and minimum values of the selected feature.
Here, the number of splittings needed to isolate a sample is equivalent to the path length from the root node to the terminating node.
Parameters
The following table lists the parameters used by the sklearn.ensemble.IsolationForest method −
Sr.No | Parameter & Description |
---|---|
1 | n_estimators − int, optional, default = 100. It represents the number of base estimators in the ensemble. |
2 | max_samples − int or float, optional, default = "auto". It represents the number of samples to be drawn from X to train each base estimator. If int, it will draw max_samples samples. If float, it will draw max_samples * X.shape[0] samples. If "auto", it will draw max_samples = min(256, n_samples). |
3 | contamination − "auto" or float, optional, default = "auto". It provides the proportion of the outliers in the data set. If left at the default, i.e. "auto", the threshold is determined as in the original paper. If set to a float, the value must lie in the range [0, 0.5]. |
4 | random_state − int, RandomState instance or None, optional, default = None. This parameter represents the seed of the pseudo-random number generator used while shuffling the data. The options are as follows − int: random_state is the seed used by the random number generator. RandomState instance: random_state is the random number generator. None: the random number generator is the RandomState instance used by np.random. |
5 | max_features − int or float, optional (default = 1.0). It represents the number of features to be drawn from X to train each base estimator. If int, it will draw max_features features. If float, it will draw max_features * X.shape[1] features. |
6 | bootstrap − Boolean, optional (default = False). The default, False, means that sampling is performed without replacement. If set to True, individual trees are fit on a random subset of the training data sampled with replacement. |
7 | n_jobs − int or None, optional (default = None). It represents the number of jobs to be run in parallel for both the fit() and predict() methods. |
8 | verbose − int, optional (default = 0). This parameter controls the verbosity of the tree building process. |
9 | warm_start − Bool, optional (default = False). If warm_start = True, we can reuse the solution of the previous call to fit and add more estimators to the ensemble. If set to False, a whole new forest is fit. |
Attributes
The following table lists the attributes of the sklearn.ensemble.IsolationForest method −
Sr.No | Attributes & Description |
---|---|
1 | estimators_ − list of DecisionTreeClassifier. It provides the collection of all fitted sub-estimators. |
2 | max_samples_ − integer. It provides the actual number of samples used. |
3 | offset_ − float. It is used to define the decision function from the raw scores: decision_function = score_samples - offset_. |
Implementation Example
The Python script below will use the sklearn.ensemble.IsolationForest method to fit 10 trees on the given data −
from sklearn.ensemble import IsolationForest
import numpy as np

X = np.array([[-1, -2], [-3, -3], [-3, -4], [0, 0], [-50, 60]])
OUTDclf = IsolationForest(n_estimators=10)
OUTDclf.fit(X)
Output
IsolationForest(behaviour='old', bootstrap=False, contamination='legacy',
   max_features=1.0, max_samples='auto', n_estimators=10, n_jobs=None,
   random_state=None, verbose=0)
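Continuing with the fitted OUTDclf object above, predict() labels each observation as an inlier (1) or an outlier (-1), and decision_function() returns the shifted anomaly score (negative for outliers); the exact values depend on the randomly built trees −
print(OUTDclf.predict(X))             # 1 for inliers, -1 for outliers
print(OUTDclf.decision_function(X))   # negative scores indicate outliers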
Local Outlier Factor
The Local Outlier Factor (LOF) algorithm is another efficient algorithm for performing outlier detection on high-dimensional data. Scikit-learn provides the neighbors.LocalOutlierFactor method, which computes a score, called the local outlier factor, reflecting the degree of anomaly of the observations. The main logic of this algorithm is to detect samples that have a substantially lower density than their neighbors. That is why it measures the local density deviation of given data points w.r.t. their neighbors.
Parameters
The following table lists the parameters used by the sklearn.neighbors.LocalOutlierFactor method −
Sr.No | Parameter & Description |
---|---|
1 | n_neighbors − int, optional, default = 20. It represents the number of neighbors used by default for the kneighbors query. All samples will be used if n_neighbors is larger than the number of samples provided. |
2 | algorithm − optional. The algorithm to be used for computing the nearest neighbors. If you choose ball_tree, it will use the BallTree algorithm. If you choose kd_tree, it will use the KDTree algorithm. If you choose brute, it will use a brute-force search. If you choose auto, it will decide the most appropriate algorithm on the basis of the values passed to the fit() method. |
3 | leaf_size − int, optional, default = 30. The value of this parameter can affect the speed of construction and query. It also affects the memory required to store the tree. This parameter is passed to the BallTree or KDTree algorithm. |
4 | contamination − "auto" or float, optional, default = "auto". It provides the proportion of the outliers in the data set. If left at the default, i.e. "auto", the threshold is determined as in the original paper. If set to a float, the value must lie in the range [0, 0.5]. |
5 | metric − string or callable, default = "minkowski". It represents the metric used for distance computation. |
6 | p − int, optional (default = 2). It is the parameter for the Minkowski metric. p = 1 is equivalent to using manhattan_distance, i.e. L1, whereas p = 2 is equivalent to using euclidean_distance, i.e. L2. |
7 | novelty − Boolean, (default = False). By default, the LOF algorithm is used for outlier detection, but it can be used for novelty detection if we set novelty = True. |
8 | n_jobs − int or None, optional (default = None). It represents the number of jobs to be run in parallel for both the fit() and predict() methods. |
Attributes
The following table lists the attributes of the sklearn.neighbors.LocalOutlierFactor method −
Sr.No | Attributes & Description |
---|---|
1 | negative_outlier_factor_ − numpy array, shape (n_samples,). It provides the opposite LOF of the training samples. |
2 | n_neighbors_ − integer. It provides the actual number of neighbors used for neighbors queries. |
3 | offset_ − float. It is used to define the binary labels from the raw scores. |
Implementation Example
The Python script given below uses the sklearn.neighbors.NearestNeighbors method to construct a nearest-neighbor search model from an array corresponding to our data set −
from sklearn.neighbors import NearestNeighbors

samples = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5]]
LOFneigh = NearestNeighbors(n_neighbors=1, algorithm="ball_tree", p=1)
LOFneigh.fit(samples)
Output
NearestNeighbors(algorithm='ball_tree', leaf_size=30, metric='minkowski',
   metric_params=None, n_jobs=None, n_neighbors=1, p=1, radius=1.0)
Example
Now, we can ask this constructed model for the point closest to [0.5, 1., 1.5] by using the following Python script −
print(LOFneigh.kneighbors([[.5, 1., 1.5]]))
Output
(array([[1.7]]), array([[1]], dtype = int64))
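The neighbor query above only locates the closest training point. As a minimal sketch of LOF itself (with an extra, clearly deviant point added purely for illustration), neighbors.LocalOutlierFactor can be used in its default outlier-detection mode with fit_predict −
from sklearn.neighbors import LocalOutlierFactor

X = [[0., 0., 0.], [0., .5, 0.], [1., 1., .5], [10., 10., 10.]]
LOFclf = LocalOutlierFactor(n_neighbors=2)
print(LOFclf.fit_predict(X))               # 1 for inliers, -1 for outliers
print(LOFclf.negative_outlier_factor_)     # opposite LOF of the training samples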
One-Class SVM
The One-Class SVM, introduced by Schölkopf et al., is used for unsupervised outlier detection. It is also very efficient with high-dimensional data and estimates the support of a high-dimensional distribution. It is implemented in the Support Vector Machines module as the sklearn.svm.OneClassSVM object. For defining a frontier, it requires a kernel (RBF is the most commonly used) and a scalar parameter.
For a better understanding, let's fit our data with the svm.OneClassSVM object −
Example
from sklearn.svm import OneClassSVM

X = [[0], [0.89], [0.90], [0.91], [1]]
OSVMclf = OneClassSVM(gamma='scale').fit(X)
Now, we can get the score_samples for input data as follows −
OSVMclf.score_samples(X)
Output
array([1.12218594, 1.58645126, 1.58673086, 1.58645127, 1.55713767])
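Continuing with the fitted OSVMclf object above, predict() and decision_function() can be used in the same way as for the other estimators in this chapter −
print(OSVMclf.predict(X))              # 1 for inliers, -1 for outliers
print(OSVMclf.decision_function(X))    # signed distance to the learned frontier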