CNTK - Neural Network Classification
In this chapter, we will study how to perform neural network classification using CNTK.
Introduction
Classification may be defined as the process of predicting categorical output labels or responses for given input data. The predicted output, based on what the model has learned during the training phase, can take a form such as "Black" or "White", or "spam" or "no spam".
Mathematically, classification is the task of approximating a mapping function, say f, from input variables, say X, to output variables, say Y. For the iris problem in this chapter, X is a set of four flower measurements and Y is one of three species.
A classic example of a classification problem is spam detection in e-mails. Here, there can be only two categories of output: "spam" and "no spam".
To implement such classification, we first need to train the classifier, using "spam" and "no spam" emails as the training data. Once the classifier has been trained successfully, it can be used to classify an unknown email.
Here, we are going to create a 4-5-3 NN using the iris flower dataset, having the following −
4 input nodes (one for each predictor value).
5 hidden processing nodes.
3 output nodes (because there are three possible species in the iris dataset).
Loading Dataset
We will be using the iris flower dataset, from which we want to classify species of iris flowers based on the physical properties of sepal width and length, and petal width and length. The dataset describes the following physical properties of different varieties of iris flowers −
Sepal length
Sepal width
Petal length
Petal width
Class, i.e. iris setosa, iris versicolor or iris virginica
We have the iris.csv file which we used in previous chapters as well. It can be loaded with the help of the Pandas library. But before using it or loading it for our classifier, we need to prepare the training and test files, so that it can be used easily with CNTK.
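The tutorial does not show this step, but as a quick sketch, the raw file can be inspected with Pandas before conversion. The file name iris.csv and the assumption that it is comma-separated with no header row are ours, not part of the original −
import pandas as pd
# A minimal sketch, assuming iris.csv is comma-separated with no header row.
col_names = ["sepal_length", "sepal_width", "petal_length", "petal_width", "species"]
df = pd.read_csv("iris.csv", names=col_names)
print(df.head())                       # first few rows
print(df["species"].value_counts())   # the iris dataset has 50 items per species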
Preparing training & test files
The iris dataset is one of the most popular datasets for ML projects. It has 150 data items and the raw data looks as follows −
5.1 3.5 1.4 0.2 setosa
4.9 3.0 1.4 0.2 setosa
…
7.0 3.2 4.7 1.4 versicolor
6.4 3.2 4.5 1.5 versicolor
…
6.3 3.3 6.0 2.5 virginica
5.8 2.7 5.1 1.9 virginica
As mentioned earlier, the first four values on each line describe the physical properties of each flower, i.e. sepal length, sepal width, petal length and petal width, and the last value is the species label.
But we have to convert the data into a format that can be easily used by CNTK, and that format is the .ctf file (we also created one, iris.ctf, in the previous section). It will look as follows −
|attribs 5.1 3.5 1.4 0.2 |species 1 0 0
|attribs 4.9 3.0 1.4 0.2 |species 1 0 0
…
|attribs 7.0 3.2 4.7 1.4 |species 0 1 0
|attribs 6.4 3.2 4.5 1.5 |species 0 1 0
…
|attribs 6.3 3.3 6.0 2.5 |species 0 0 1
|attribs 5.8 2.7 5.1 1.9 |species 0 0 1
In the above data, the |attribs tag marks the start of the feature values and the |species tag marks the class label values. We can also use other tag names of our choice, and we can even add an item ID as well. For example, look at the following data −
|ID 001 |attribs 5.1 3.5 1.4 0.2 |species 1 0 0 |#setosa
|ID 002 |attribs 4.9 3.0 1.4 0.2 |species 1 0 0 |#setosa
…
|ID 051 |attribs 7.0 3.2 4.7 1.4 |species 0 1 0 |#versicolor
|ID 052 |attribs 6.4 3.2 4.5 1.5 |species 0 1 0 |#versicolor
…
There are a total of 150 data items in the iris dataset, and for this example we will be using the 80-20 hold-out rule, i.e. 80% (120 items) of the data items for training and the remaining 20% (30 items) for testing.
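The conversion itself is not shown in this tutorial, but a minimal sketch could look as follows; the file names and the whitespace layout of the raw file are assumptions −
# A sketch of converting the raw iris data into the tagged CTF format above;
# src_path and dst_path are hypothetical file names.
species_to_onehot = {
   "setosa": "1 0 0",
   "versicolor": "0 1 0",
   "virginica": "0 0 1",
}
def raw_to_ctf(src_path, dst_path):
   with open(src_path) as src, open(dst_path, "w") as dst:
      for line in src:
         parts = line.split()   # four measurements followed by the species name
         if len(parts) != 5:
            continue            # skip blank or malformed lines
         attribs = " ".join(parts[:4])
         onehot = species_to_onehot[parts[4]]
         dst.write("|attribs %s |species %s\n" % (attribs, onehot))
After conversion, the 150 lines would be split 120/30 into the training and test files, following the 80-20 rule described above.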
Constructing Classification model
First, we need to read the data files in CNTK format, and for that we are going to use the helper function named create_reader as follows −
def create_reader(path, input_dim, output_dim, rnd_order, sweeps):
   x_strm = C.io.StreamDef(field='attribs', shape=input_dim, is_sparse=False)
   y_strm = C.io.StreamDef(field='species', shape=output_dim, is_sparse=False)
   streams = C.io.StreamDefs(x_src=x_strm, y_src=y_strm)
   deserial = C.io.CTFDeserializer(path, streams)
   mb_src = C.io.MinibatchSource(deserial, randomize=rnd_order, max_sweeps=sweeps)
   return mb_src
Now, we need to set the architecture arguments for our NN and also provide the location of the data files. It can be done with the help of the following Python code −
def main():
   print("Using CNTK version = " + str(C.__version__) + "\n")
   input_dim = 4
   hidden_dim = 5
   output_dim = 3
   train_file = ".\\...\\"   # provide the name of the training file (120 data items)
   test_file = ".\\...\\"    # provide the name of the test file (30 data items)
Now, with the help of the following lines of code, our program will create the untrained NN −
X = C.ops.input_variable(input_dim, np.float32)
Y = C.ops.input_variable(output_dim, np.float32)
with C.layers.default_options(init=C.initializer.uniform(scale=0.01, seed=1)):
   hLayer = C.layers.Dense(hidden_dim, activation=C.ops.tanh, name='hidLayer')(X)
   oLayer = C.layers.Dense(output_dim, activation=None, name='outLayer')(hLayer)
nnet = oLayer
model = C.ops.softmax(nnet)
Now, once we have created the dual untrained model (nnet for training and model for prediction), we need to set up a Learner algorithm object and afterwards use it to create a Trainer object. We are going to use the SGD learner and the cross_entropy_with_softmax loss function −
tr_loss = C.cross_entropy_with_softmax(nnet, Y)
tr_clas = C.classification_error(nnet, Y)
max_iter = 2000
batch_size = 10
learn_rate = 0.01
learner = C.sgd(nnet.parameters, learn_rate)
trainer = C.Trainer(nnet, (tr_loss, tr_clas), [learner])
Alternatively, we can replace the plain SGD learner with the fsadagrad learner, using learning-rate and momentum schedules, as follows −
max_iter = 2000
batch_size = 10
lr_schedule = C.learning_parameter_schedule_per_sample([(1000, 0.05), (1, 0.01)])
mom_sch = C.momentum_schedule([(100, 0.99), (0, 0.95)], batch_size)
learner = C.fsadagrad(nnet.parameters, lr=lr_schedule, momentum=mom_sch)
trainer = C.Trainer(nnet, (tr_loss, tr_clas), [learner])
Now, once we have finished with the Trainer object, we need to create a reader function to read the training data −
rdr = create_reader(train_file, input_dim, output_dim,
   rnd_order=True, sweeps=C.io.INFINITELY_REPEAT)
iris_input_map = {
   X : rdr.streams.x_src,
   Y : rdr.streams.y_src
}
Now it’s time to train our NN model −
for i in range(0, max_iter):
   curr_batch = rdr.next_minibatch(batch_size, input_map=iris_input_map)
   trainer.train_minibatch(curr_batch)
   if i % 500 == 0:
      mcee = trainer.previous_minibatch_loss_average
      macc = (1.0 - trainer.previous_minibatch_evaluation_average) * 100
      print("batch %4d: mean loss = %0.4f, accuracy = %0.2f%%" % (i, mcee, macc))
Once we are done with training, let’s evaluate the model using the test data items −
print(" Evaluating test data ") rdr = create_reader(test_file, input_dim, output_dim, rnd_order=False, sweeps=1) iris_input_map = { X : rdr.streams.x_src, Y : rdr.streams.y_src } num_test = 30 all_test = rdr.next_minibatch(num_test, input_map=iris_input_map) acc = (1.0 - trainer.test_minibatch(all_test)) * 100 print("Classification accuracy = %0.2f%%" % acc)
After evaluating the accuracy of our trained NN model, we will use it to make a prediction on unseen data −
np.set_printoptions(precision=1, suppress=True)
unknown = np.array([[6.4, 3.2, 4.5, 1.5]], dtype=np.float32)
print("\nPredicting Iris species for input features:")
print(unknown[0])
pred_prob = model.eval(unknown)
np.set_printoptions(precision=4, suppress=True)
print("Prediction probabilities are:")
print(pred_prob[0])
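To turn these probabilities into a readable label, we can take the index of the largest one; the label order below follows the one-hot encoding used in the CTF files −
# A short sketch: map the highest softmax probability back to a species name.
species = ["setosa", "versicolor", "virginica"]
predicted = species[np.argmax(pred_prob[0])]
print("Predicted species: %s" % predicted)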
Complete Classification Model
import numpy as np
import cntk as C

def create_reader(path, input_dim, output_dim, rnd_order, sweeps):
   x_strm = C.io.StreamDef(field='attribs', shape=input_dim, is_sparse=False)
   y_strm = C.io.StreamDef(field='species', shape=output_dim, is_sparse=False)
   streams = C.io.StreamDefs(x_src=x_strm, y_src=y_strm)
   deserial = C.io.CTFDeserializer(path, streams)
   mb_src = C.io.MinibatchSource(deserial, randomize=rnd_order, max_sweeps=sweeps)
   return mb_src

def main():
   print("Using CNTK version = " + str(C.__version__) + "\n")
   input_dim = 4
   hidden_dim = 5
   output_dim = 3
   train_file = ".\\...\\"   # provide the name of the training file (120 data items)
   test_file = ".\\...\\"    # provide the name of the test file (30 data items)
   X = C.ops.input_variable(input_dim, np.float32)
   Y = C.ops.input_variable(output_dim, np.float32)
   with C.layers.default_options(init=C.initializer.uniform(scale=0.01, seed=1)):
      hLayer = C.layers.Dense(hidden_dim, activation=C.ops.tanh, name='hidLayer')(X)
      oLayer = C.layers.Dense(output_dim, activation=None, name='outLayer')(hLayer)
   nnet = oLayer
   model = C.ops.softmax(nnet)
   tr_loss = C.cross_entropy_with_softmax(nnet, Y)
   tr_clas = C.classification_error(nnet, Y)
   max_iter = 2000
   batch_size = 10
   lr_schedule = C.learning_parameter_schedule_per_sample([(1000, 0.05), (1, 0.01)])
   mom_sch = C.momentum_schedule([(100, 0.99), (0, 0.95)], batch_size)
   learner = C.fsadagrad(nnet.parameters, lr=lr_schedule, momentum=mom_sch)
   trainer = C.Trainer(nnet, (tr_loss, tr_clas), [learner])
   rdr = create_reader(train_file, input_dim, output_dim,
      rnd_order=True, sweeps=C.io.INFINITELY_REPEAT)
   iris_input_map = {
      X : rdr.streams.x_src,
      Y : rdr.streams.y_src
   }
   for i in range(0, max_iter):
      curr_batch = rdr.next_minibatch(batch_size, input_map=iris_input_map)
      trainer.train_minibatch(curr_batch)
      if i % 500 == 0:
         mcee = trainer.previous_minibatch_loss_average
         macc = (1.0 - trainer.previous_minibatch_evaluation_average) * 100
         print("batch %4d: mean loss = %0.4f, accuracy = %0.2f%%" % (i, mcee, macc))
   print("\nEvaluating test data\n")
   rdr = create_reader(test_file, input_dim, output_dim, rnd_order=False, sweeps=1)
   iris_input_map = {
      X : rdr.streams.x_src,
      Y : rdr.streams.y_src
   }
   num_test = 30
   all_test = rdr.next_minibatch(num_test, input_map=iris_input_map)
   acc = (1.0 - trainer.test_minibatch(all_test)) * 100
   print("Classification accuracy = %0.2f%%" % acc)
   np.set_printoptions(precision=1, suppress=True)
   unknown = np.array([[7.0, 3.2, 4.7, 1.4]], dtype=np.float32)
   print("\nPredicting species for input features:")
   print(unknown[0])
   pred_prob = model.eval(unknown)
   np.set_printoptions(precision=4, suppress=True)
   print("Prediction probabilities:")
   print(pred_prob[0])

if __name__ == "__main__":
   main()
Output
Using CNTK version = 2.7
batch    0: mean loss = 1.0986, mean accuracy = 40.00%
batch  500: mean loss = 0.6677, mean accuracy = 80.00%
batch 1000: mean loss = 0.5332, mean accuracy = 70.00%
batch 1500: mean loss = 0.2408, mean accuracy = 100.00%
Evaluating test data
Classification accuracy = 94.58%
Predicting species for input features:
[7.0 3.2 4.7 1.4]
Prediction probabilities:
[0.0847 0.736 0.113]
Saving the trained model
This iris dataset has only 150 data items, hence it would take only a few seconds to train the NN classifier model, but training on a large dataset having hundreds or thousands of data items can take hours or even days.
We can save our model so that we won’t have to retrain it from scratch. With the help of the following Python code, we can save our trained NN −
nn_classifier = ".\\neuralclassifier.model"   # provide the name of the file
model.save(nn_classifier, format=C.ModelFormat.CNTKv2)
Following are the arguments of the save() function used above −
The file name is the first argument of save(). It can also be written along with the path of the file.
The second parameter is the format parameter, which has a default value of C.ModelFormat.CNTKv2.
Loading the trained model
Once you have saved the trained model, it’s very easy to load it. We only need to use the load() function. Let’s check this in the following example −
import numpy as np
import cntk as C
model = C.ops.functions.Function.load(".\\neuralclassifier.model")
np.set_printoptions(precision=1, suppress=True)
unknown = np.array([[7.0, 3.2, 4.7, 1.4]], dtype=np.float32)
print("\nPredicting species for input features:")
print(unknown[0])
pred_prob = model.eval(unknown)
np.set_printoptions(precision=4, suppress=True)
print("Prediction probabilities:")
print(pred_prob[0])
The benefit of a saved model is that, once you load it, it can be used exactly as if the model had just been trained.
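As a quick sanity check (a sketch, assuming the training script above has just run so that model and unknown are still in scope), the reloaded model should produce the same probabilities as the in-memory model −
# Sketch of a round-trip check; assumes `model` and `unknown` from the
# training script are still in scope.
probs_before = model.eval(unknown)
model.save(".\\neuralclassifier.model", format=C.ModelFormat.CNTKv2)
reloaded = C.ops.functions.Function.load(".\\neuralclassifier.model")
probs_after = reloaded.eval(unknown)
print(np.allclose(probs_before, probs_after))   # expected: True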