Keras - Layers
As learned earlier, Keras layers are the primary building blocks of Keras models. Each layer receives input information, does some computation and finally outputs the transformed information. The output of one layer flows into the next layer as its input. Let us learn complete details about layers in this chapter.
Introduction
A Keras layer requires the shape of the input (input_shape) to understand the structure of the input data, an initializer to set the weight for each input and finally activators to transform the output to make it non-linear. In between, constraints restrict and specify the range in which the weights of the input data are to be generated, and a regularizer tries to optimize the layer (and the model) by dynamically applying penalties on the weights during the optimization process.

To summarise, a Keras layer requires the below minimum details to create a complete layer.
Shape of the input data
Number of neurons / units in the layer
Initializers
Regularizers
Constraints
Activations
Let us understand the basic concepts in the next section. Before that, let us create a simple Keras layer using the Sequential model API to get an idea of how Keras models and layers work.
```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers
from keras import regularizers
from keras import constraints

model = Sequential()

model.add(Dense(32, input_shape = (16,), kernel_initializer = 'he_uniform', kernel_regularizer = None, kernel_constraint = 'MaxNorm', activation = 'relu'))
model.add(Dense(16, activation = 'relu'))
model.add(Dense(8))
```
where,
Lines 1-5 import the necessary modules.

Line 7 creates a new model using the Sequential API.

Line 9 creates a new Dense layer and adds it into the model. Dense is an entry-level layer provided by Keras, which accepts the number of neurons or units (32) as its required parameter. If the layer is the first layer, then we need to provide the input shape, (16,), as well. Otherwise, the output of the previous layer will be used as the input of the next layer. All other parameters are optional.

The first parameter represents the number of units (neurons).

input_shape represents the shape of the input data.

kernel_initializer represents the initializer to be used. The he_uniform function is set as its value.

kernel_regularizer represents the regularizer to be used. None is set as its value.

kernel_constraint represents the constraint to be used. The MaxNorm function is set as its value.

activation represents the activation function to be used. The relu function is set as its value.

Line 10 creates a second Dense layer with 16 units and sets relu as the activation function.

Line 11 creates the final Dense layer with 8 units.
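Continuing from the snippet above, a quick way to confirm the resulting layer stack is model.summary(), a standard Keras utility that prints each layer's output shape and parameter count:

```python
# Continuing from the model built above: print the layer stack.
model.summary()
# Expected output shapes per layer: (None, 32), (None, 16), (None, 8);
# None is the (unspecified) batch dimension.
```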
Basic Concept of Layers
Let us understand the basic concept of layers as well as how Keras supports each concept.
Input shape
In machine learning, all types of input data like text, images or videos will be first converted into an array of numbers and then fed into the algorithm. The input numbers may be a single-dimensional array, a two-dimensional array (matrix) or a multi-dimensional array. We can specify the dimensional information using shape, a tuple of integers. For example, (4,2) represents a matrix with four rows and two columns.
```
>>> import numpy as np
>>> shape = (4, 2)
>>> input = np.zeros(shape)
>>> print(input)
[[0. 0.]
 [0. 0.]
 [0. 0.]
 [0. 0.]]
>>>
```
Similarly, (3,4,2) represents a three-dimensional matrix having three collections of 4x2 matrices (four rows and two columns).
```
>>> import numpy as np
>>> shape = (3, 4, 2)
>>> input = np.zeros(shape)
>>> print(input)
[[[0. 0.]
  [0. 0.]
  [0. 0.]
  [0. 0.]]

 [[0. 0.]
  [0. 0.]
  [0. 0.]
  [0. 0.]]

 [[0. 0.]
  [0. 0.]
  [0. 0.]
  [0. 0.]]]
>>>
```
To create the first layer of the model (or the input layer of the model), the shape of the input data should be specified.
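Note that input_shape excludes the batch dimension; Keras adds it automatically. A minimal sketch using the standard input_shape and output_shape model attributes:

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32, input_shape = (16,)))   # each sample is a vector of 16 numbers

# Keras leaves the batch size unspecified (None), so any batch size is accepted.
print(model.input_shape)    # (None, 16)
print(model.output_shape)   # (None, 32)
```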
Initializers

In machine learning, a weight will be assigned to all input data. The Initializers module provides different functions to set these initial weights. Some of the Keras Initializer functions are as follows −
Zeros
Generates 0 as the initial weight for all input data.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.Zeros()
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```

Where, kernel_initializer represents the initializer for the kernel of the model.
Ones
Generates 1 as the initial weight for all input data.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.Ones()
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```
Constant
Generates a constant value specified by the user as the initial weight for all input data.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.Constant(value = 0)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```

where, value represents the constant value
RandomNormal
Generates initial weights using a normal distribution.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.RandomNormal(mean = 0.0, stddev = 0.05, seed = None)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```

where,

mean represents the mean of the random values to generate

stddev represents the standard deviation of the random values to generate

seed represents the seed value used to generate random numbers
RandomUniform
Generates initial weights using a uniform distribution.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.RandomUniform(minval = -0.05, maxval = 0.05, seed = None)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```

where,

minval represents the lower bound of the random values to generate

maxval represents the upper bound of the random values to generate
TruncatedNormal
Generates initial weights using a truncated normal distribution.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.TruncatedNormal(mean = 0.0, stddev = 0.05, seed = None)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```
VarianceScaling
Generates values based on the input shape and output shape of the layer along with the specified scale.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.VarianceScaling(scale = 1.0, mode = 'fan_in', distribution = 'normal', seed = None)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```
where,
scale represents the scaling factor

mode represents any one of the fan_in, fan_out and fan_avg values

distribution represents either normal or uniform
VarianceScaling finds the stddev value for the normal distribution using the below formula and then finds the weights using the normal distribution,
stddev = sqrt(scale / n)
where n represents,

number of input units for mode = fan_in

number of output units for mode = fan_out

average number of input and output units for mode = fan_avg

Similarly, it finds the limit for the uniform distribution using the below formula and then finds the weights using the uniform distribution,

limit = sqrt(3 * scale / n)
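As an illustration, here is a plain-Python sketch of these two formulas (assuming scale = 1.0 and mode = fan_in with 784 input units, matching the Dense layer above):

```python
import math

scale = 1.0
n = 784   # fan_in: number of input units

stddev = math.sqrt(scale / n)       # ~0.0357, used when distribution = 'normal'
limit = math.sqrt(3 * scale / n)    # ~0.0619, used when distribution = 'uniform'
print(stddev, limit)
```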
lecun_normal
Generates initial weights using a LeCun normal distribution.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.lecun_normal(seed = None)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```

It finds the stddev using the below formula and then applies the normal distribution

stddev = sqrt(1 / fan_in)

where, fan_in represents the number of input units.
lecun_uniform
Generates initial weights using a LeCun uniform distribution.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.lecun_uniform(seed = None)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```

It finds the limit using the below formula and then applies the uniform distribution

limit = sqrt(3 / fan_in)

where, fan_in represents the number of input units.
glorot_normal
Generates initial weights using a Glorot normal distribution.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.glorot_normal(seed = None)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```

It finds the stddev using the below formula and then applies the normal distribution
stddev = sqrt(2 / (fan_in + fan_out))
where,
fan_in represents the number of input units
fan_out represents the number of output units
glorot_uniform
Generates initial weights using a Glorot uniform distribution.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.glorot_uniform(seed = None)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```

It finds the limit using the below formula and then applies the uniform distribution

limit = sqrt(6 / (fan_in + fan_out))

where,

fan_in represents the number of input units

fan_out represents the number of output units
he_normal
Generates initial weights using a He normal distribution.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.he_normal(seed = None)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```

It finds the stddev using the below formula and then applies the normal distribution.

stddev = sqrt(2 / fan_in)

where, fan_in represents the number of input units.
he_uniform
Generates initial weights using a He uniform distribution.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.he_uniform(seed = None)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```

It finds the limit using the below formula and then applies the uniform distribution.

limit = sqrt(6 / fan_in)

where, fan_in represents the number of input units.
Orthogonal
Generates a random orthogonal matrix.
```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.Orthogonal(gain = 1.0, seed = None)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```

where, gain represents the multiplication factor of the matrix.
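To see the orthogonality property directly, here is a small sketch (assuming the TensorFlow backend, where keras.backend.eval converts the generated tensor into a NumPy array):

```python
import numpy as np
from keras import backend as K
from keras import initializers

# Generate a 4x4 weight matrix and check that W^T.W is the identity
# (scaled by gain^2; gain = 1.0 here).
my_init = initializers.Orthogonal(gain = 1.0, seed = 42)
W = K.eval(my_init((4, 4)))
print(np.allclose(W.T.dot(W), np.eye(4), atol = 1e-5))   # True
```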
Identity
Generates the identity matrix.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import initializers

my_init = initializers.Identity(gain = 1.0)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_initializer = my_init))
```
Constraints
In machine learning, a constraint will be set on the parameters (weights) during the optimization phase. The Constraints module provides different functions to set the constraint on the layer. Some of the constraint functions are as follows.
NonNeg
Constrains weights to be non-negative.
```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import constraints

my_constrain = constraints.NonNeg()
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_constraint = my_constrain))
```

where, kernel_constraint represents the constraint to be used in the layer.
UnitNorm
Constrains weights to be unit norm.
```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import constraints

my_constrain = constraints.UnitNorm(axis = 0)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_constraint = my_constrain))
```
MaxNorm
Constrains the weight norm to be less than or equal to the given value.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import constraints

my_constrain = constraints.MaxNorm(max_value = 2, axis = 0)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_constraint = my_constrain))
```
where,
max_value represents the upper bound

axis represents the dimension in which the constraint is to be applied. e.g. in shape (2,3,4), axis 0 denotes the first dimension, 1 denotes the second dimension and 2 denotes the third dimension
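Constraint objects are callable on a weight tensor, so their effect can be checked in isolation. A small sketch (assuming the TensorFlow backend for keras.backend.variable and keras.backend.eval):

```python
import numpy as np
from keras import backend as K
from keras import constraints

# One column with norm 5 (= sqrt(3^2 + 4^2)); MaxNorm rescales it to norm 2.
w = K.variable(np.array([[3.0], [4.0]]))
w_clipped = constraints.MaxNorm(max_value = 2, axis = 0)(w)
print(K.eval(w_clipped))   # approximately [[1.2], [1.6]]
```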
MinMaxNorm
Constrains weights to have a norm between the specified minimum and maximum values.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import constraints

my_constrain = constraints.MinMaxNorm(min_value = 0.0, max_value = 1.0, rate = 1.0, axis = 0)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_constraint = my_constrain))
```

where, rate represents the rate at which the weight constraint is applied.
Regularizers
In machine learning, regularizers are used in the optimization phase. They apply penalties on the layer parameters during optimization. The Keras regularization module provides the below functions to set penalties on the layer. Regularization applies on a per-layer basis only.
L1 Regularizer
It provides L1 based regularization.
```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import regularizers

my_regularizer = regularizers.l1(0.)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_regularizer = my_regularizer))
```

where, kernel_regularizer represents the regularizer to be applied on the kernel weights of the layer.
L2 Regularizer
It provides L2 based regularization.
```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import regularizers

my_regularizer = regularizers.l2(0.)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_regularizer = my_regularizer))
```
L1 and L2 Regularizer
It provides both L1 and L2 based regularization.
```python
from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import regularizers

my_regularizer = regularizers.l1_l2(l1 = 0.01, l2 = 0.01)
model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,), kernel_regularizer = my_regularizer))
```
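Because regularization works on a per-layer basis, each regularized layer contributes its own penalty term, which Keras collects into the model's loss. A quick sketch using the standard model.losses attribute:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras import regularizers

model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,),
   kernel_regularizer = regularizers.l2(0.01)))
model.add(Dense(256, activation = 'relu'))   # no regularizer on this layer

# One penalty term per regularized layer; it is added to the training
# loss when the model is compiled.
print(len(model.losses))   # 1
```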
Activations
In machine learning, an activation function is a special function used to find whether a specific neuron is activated or not. Basically, the activation function does a nonlinear transformation of the input data and thus enables the neurons to learn better. The output of a neuron depends on the activation function.

As you recall the concept of a single perceptron, the output of a perceptron (neuron) is simply the result of the activation function, which accepts the summation of all inputs multiplied by their corresponding weights plus the overall bias, if any is available.
result = Activation(SUMOF(input * weight) + bias)
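For instance, here is a minimal NumPy sketch of this formula for a single neuron with the relu activation (the input and weight values are illustrative, not taken from any model above):

```python
import numpy as np

def relu(x):
   return np.maximum(0, x)

inputs = np.array([0.5, -1.0, 2.0])    # one sample with three features
weights = np.array([0.4, 0.3, -0.2])   # one weight per input
bias = 0.1

# result = Activation(SUMOF(input * weight) + bias)
result = relu(np.sum(inputs * weights) + bias)
print(result)   # 0.0, since the weighted sum -0.4 is clipped by relu
```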
So, the activation function plays an important role in the successful learning of the model. Keras provides a lot of activation functions in the activations module. Let us learn all the activations available in the module.
linear

Applies the linear (identity) function; the input passes through unchanged.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense

model = Sequential()
model.add(Dense(512, activation = 'linear', input_shape = (784,)))
```

Where, activation refers to the activation function of the layer. It can be specified simply by the name of the function and the layer will use the corresponding activator.

elu

Applies the Exponential Linear Unit (ELU).

```python
from keras.models import Sequential
from keras.layers import Activation, Dense

model = Sequential()
model.add(Dense(512, activation = 'elu', input_shape = (784,)))
```

selu

Applies the Scaled Exponential Linear Unit (SELU).

```python
from keras.models import Sequential
from keras.layers import Activation, Dense

model = Sequential()
model.add(Dense(512, activation = 'selu', input_shape = (784,)))
```

relu

Applies the Rectified Linear Unit (ReLU).

```python
from keras.models import Sequential
from keras.layers import Activation, Dense

model = Sequential()
model.add(Dense(512, activation = 'relu', input_shape = (784,)))
```

softmax

Applies the Softmax function.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense

model = Sequential()
model.add(Dense(512, activation = 'softmax', input_shape = (784,)))
```

softplus

Applies the Softplus function.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense

model = Sequential()
model.add(Dense(512, activation = 'softplus', input_shape = (784,)))
```

softsign

Applies the Softsign function.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense

model = Sequential()
model.add(Dense(512, activation = 'softsign', input_shape = (784,)))
```

tanh

Applies the Hyperbolic tangent function.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense

model = Sequential()
model.add(Dense(512, activation = 'tanh', input_shape = (784,)))
```

sigmoid

Applies the Sigmoid function.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense

model = Sequential()
model.add(Dense(512, activation = 'sigmoid', input_shape = (784,)))
```

hard_sigmoid

Applies the Hard Sigmoid function.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense

model = Sequential()
model.add(Dense(512, activation = 'hard_sigmoid', input_shape = (784,)))
```

exponential

Applies the exponential function.

```python
from keras.models import Sequential
from keras.layers import Activation, Dense

model = Sequential()
model.add(Dense(512, activation = 'exponential', input_shape = (784,)))
```
Some of the important Keras layers are specified below −

| Sr.No | Layers & Description |
| --- | --- |
| 1 | Dense Layer − Dense layer is the regular deeply connected neural network layer. |
| 2 | Dropout Layers − Dropout is one of the important concepts in machine learning. |
| 3 | Flatten Layers − Flatten is used to flatten the input. |
| 4 | Reshape Layers − Reshape is used to change the shape of the input. |
| 5 | Permute Layers − Permute is also used to change the shape of the input using a pattern. |
| 6 | RepeatVector Layers − RepeatVector is used to repeat the input for a set number, n, of times. |
| 7 | Lambda Layers − Lambda is used to transform the input data using an expression or function. |
| 8 | Convolution Layers − Keras contains a lot of layers for creating Convolution based ANN, popularly called Convolution Neural Networks (CNN). |
| 9 | Pooling Layer − It is used to perform max pooling operations on temporal data. |
| 10 | Locally Connected Layer − Locally connected layers are similar to the Conv1D layer, but the difference is that Conv1D layer weights are shared whereas here the weights are unshared. |
| 11 | Merge Layer − It is used to merge a list of inputs. |
| 12 | Embedding Layer − It performs embedding operations in the input layer. |