PyBrain - Layers
In PyBrain, a layer is a module that applies a transfer function to the neurons at a given level of a network, most commonly its hidden layers.
We will go through the following details about layers in this chapter −
Understanding layers
Creating a Layer in Pybrain
Understanding layers
We have seen examples earlier where we have used the following layers; a short standalone sketch follows this list −
TanhLayer
SoftmaxLayer
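Each of these layer classes is an ordinary PyBrain module, so it can also be instantiated and activated on its own. Below is a minimal sketch (not from the original tutorial), assuming the standard pybrain.structure import path −

from pybrain.structure import TanhLayer

# Hypothetical standalone check: a TanhLayer with three neurons
# applies tanh element-wise to the buffer it is activated on.
layer = TanhLayer(3)
print(layer.activate([1, 0.5, -2]))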
Example using TanhLayer
Below is one example where we have used TanhLayer for building a network −
testnetwork.py
from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure import TanhLayer
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

# Create a network with two inputs, three hidden, and one output
nn = buildNetwork(2, 3, 1, bias=True, hiddenclass=TanhLayer)

# Create a dataset that matches network input and output sizes:
norgate = SupervisedDataSet(2, 1)

# Create a dataset to be used for testing.
nortrain = SupervisedDataSet(2, 1)

# Add input and target values to dataset
# Values for NOR truth table
norgate.addSample((0, 0), (1,))
norgate.addSample((0, 1), (0,))
norgate.addSample((1, 0), (0,))
norgate.addSample((1, 1), (0,))

# Add input and target values to dataset
# Values for NOR truth table
nortrain.addSample((0, 0), (1,))
nortrain.addSample((0, 1), (0,))
nortrain.addSample((1, 0), (0,))
nortrain.addSample((1, 1), (0,))

# Train the network with dataset norgate.
trainer = BackpropTrainer(nn, norgate)

# Run the training loop 1000 times.
for epoch in range(1000):
   trainer.train()

# Test the trained network on the testing dataset.
trainer.testOnData(dataset=nortrain, verbose=True)
Output
The output for the above code is as follows −
C:\pybrain\pybrain\src>python testnetwork.py
Testing on data:
('out:    ', '[0.887 ]')
('correct:', '[1     ]')
error:  0.00637334
('out:    ', '[0.149 ]')
('correct:', '[0     ]')
error:  0.01110338
('out:    ', '[0.102 ]')
('correct:', '[0     ]')
error:  0.00522736
('out:    ', '[-0.163]')
('correct:', '[0     ]')
error:  0.01328650
('All errors:', [0.006373344564625953, 0.01110338071737218, 0.005227359234093431, 0.01328649974219942])
('Average error:', 0.008997646064572746)
('Max error:', 0.01328649974219942, 'Median error:', 0.01110338071737218)
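As a quick follow-up (not part of the original script), the trained network can also be queried directly for each NOR input using activate() −

# Hedged sketch: print the trained network's prediction for every NOR input.
for inp in [(0, 0), (0, 1), (1, 0), (1, 1)]:
   print(inp, '->', nn.activate(inp))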
Example using SoftmaxLayer
Below is one example where we have used SoftmaxLayer for building a network −
from pybrain.tools.shortcuts import buildNetwork
from pybrain.structure.modules import SoftmaxLayer
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

# Create a network with two inputs, three hidden, and one output
nn = buildNetwork(2, 3, 1, bias=True, hiddenclass=SoftmaxLayer)

# Create a dataset that matches network input and output sizes:
norgate = SupervisedDataSet(2, 1)

# Create a dataset to be used for testing.
nortrain = SupervisedDataSet(2, 1)

# Add input and target values to dataset
# Values for NOR truth table
norgate.addSample((0, 0), (1,))
norgate.addSample((0, 1), (0,))
norgate.addSample((1, 0), (0,))
norgate.addSample((1, 1), (0,))

# Add input and target values to dataset
# Values for NOR truth table
nortrain.addSample((0, 0), (1,))
nortrain.addSample((0, 1), (0,))
nortrain.addSample((1, 0), (0,))
nortrain.addSample((1, 1), (0,))

# Train the network with dataset norgate.
trainer = BackpropTrainer(nn, norgate)

# Run the training loop 1000 times.
for epoch in range(1000):
   trainer.train()

# Test the trained network on the testing dataset.
trainer.testOnData(dataset=nortrain, verbose=True)
Output
The output is as follows −
C:\pybrain\pybrain\src>python example16.py
Testing on data:
('out:    ', '[0.918 ]')
('correct:', '[1     ]')
error:  0.00333524
('out:    ', '[0.082 ]')
('correct:', '[0     ]')
error:  0.00333484
('out:    ', '[0.078 ]')
('correct:', '[0     ]')
error:  0.00303433
('out:    ', '[-0.082]')
('correct:', '[0     ]')
error:  0.00340005
('All errors:', [0.0033352368788838365, 0.003334842961037291, 0.003034328685718761, 0.0034000458892589056])
('Average error:', 0.0032761136037246985)
('Max error:', 0.0034000458892589056, 'Median error:', 0.0033352368788838365)
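Unlike TanhLayer, SoftmaxLayer normalizes its outputs so that they are positive and sum to 1, which is why it is commonly used for classification outputs. A minimal standalone sketch (not from the original tutorial) illustrating this property −

from pybrain.structure.modules import SoftmaxLayer

# Hypothetical check: softmax outputs form a probability distribution.
layer = SoftmaxLayer(3)
out = layer.activate([1.0, 2.0, 3.0])
print(out, sum(out))   # the values sum to (approximately) 1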
Creating a Layer in Pybrain
In Pybrain, you can create your own layer as follows −
To create a layer, you need to use the NeuronLayer class as the base class; all types of custom layers derive from it.
Example
from pybrain.structure.modules.neuronlayer import NeuronLayer

class LinearLayer(NeuronLayer):
   def _forwardImplementation(self, inbuf, outbuf):
      # The identity function: copy the input buffer to the output buffer.
      outbuf[:] = inbuf
   def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
      # The derivative of the identity is 1, so the error passes through unchanged.
      inerr[:] = outerr
To create a layer, we need to implement two methods: _forwardImplementation() and _backwardImplementation().
The _forwardImplementation() method takes two arguments, inbuf and outbuf, both SciPy arrays whose sizes depend on the layer's input and output dimensions.
The _backwardImplementation() method propagates the error backwards: given the output error, it computes the input error using the derivative of the output with respect to the input.
So to implement a layer in Pybrain, this is the skeleton of the layer class −
from pybrain.structure.modules.neuronlayer import NeuronLayer

class NewLayer(NeuronLayer):
   def _forwardImplementation(self, inbuf, outbuf):
      pass
   def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
      pass
For instance, if you want to implement a quadratic polynomial function as a layer, you can do so as follows −
Suppose we have the polynomial function −
f(x) = 3x²
Its derivative is −
f'(x) = 6x
The final layer class for the above polynomial function is given below. Note that the backward pass applies the chain rule: the input error is the derivative 6x, evaluated at the input, multiplied by the output error.
testlayer.py
from pybrain.structure.modules.neuronlayer import NeuronLayer

class PolynomialLayer(NeuronLayer):
   def _forwardImplementation(self, inbuf, outbuf):
      # Forward pass: f(x) = 3x^2, applied element-wise.
      outbuf[:] = 3*inbuf**2
   def _backwardImplementation(self, outerr, inerr, outbuf, inbuf):
      # Backward pass (chain rule): f'(x) = 6x times the output error.
      inerr[:] = 6*inbuf*outerr
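Before wiring the layer into a network, we can sanity-check the forward pass by activating the layer directly. A small sketch (not from the original tutorial), assuming testlayer.py is on the import path −

from testlayer import PolynomialLayer

# Hypothetical check: f(x) = 3x^2, so inputs [1.0, 2.0] should give [3.0, 12.0].
layer = PolynomialLayer(2)
print(layer.activate([1.0, 2.0]))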
Now let us make use of the layer we created, as shown below −
testlayer1.py
from testlayer import PolynomialLayer
from pybrain.tools.shortcuts import buildNetwork
from pybrain.tests.helpers import gradientCheck

# Build a network that uses the new layer as its hidden layer.
n = buildNetwork(2, 3, 1, hiddenclass=PolynomialLayer)
n.randomize()

# Numerically verify the layer's gradients.
gradientCheck(n)
gradientCheck() tests whether the layer works correctly: we pass it the network in which the layer is used, as in gradientCheck(n), and it prints "Perfect gradient" if the layer's gradients check out.
Output
C:\pybrain\pybrain\src>python testlayer1.py
Perfect gradient