Creating Neural Networks

Building neural networks with a drag-and-drop GUI is much easier and quicker than writing them by hand. The framework is designed to simplify defining layers and to hide their underlying complexity.

Complex architectures can be rapidly created by using the ‘drag and drop’ feature. This allows networks to be built from scratch or existing networks to be refined or repurposed with a few clicks.

Layers

Choose the layer you want to add, then define its specifications in the box on the right side.

The list of layers in network creation

Data Layer

The Data Layer defines what goes into the neural network. You can feed in either images directly or a numeric data set.

Images:

If the data set consists of images, choose the dimensions of the input image and the other augmentation parameters.

Parameters for Data Layer - Image Type

Parameters:

  • Name
  • Image Width & Height
  • Image Type - Grayscale or Color
  • Zero Center - Yes or No
  • Normalization - Yes or No
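
As a rough illustration, an image Data Layer corresponds to declaring the network's input tensor plus simple preprocessing. Below is a minimal Keras sketch (the tool does not necessarily generate Keras code; the shape, names, and preprocessing choices are illustrative assumptions):

    import tensorflow as tf

    # Image input: 64x64 color (3 channels); grayscale would use 1 channel.
    inputs = tf.keras.Input(shape=(64, 64, 3), name="data")
    # Normalization: scale pixel values from [0, 255] into [0, 1].
    x = tf.keras.layers.Rescaling(1.0 / 255.0)(inputs)
    # Zero Center: subtract the per-image mean (one simple interpretation).
    x = tf.keras.layers.Lambda(
        lambda t: t - tf.reduce_mean(t, axis=[1, 2, 3], keepdims=True))(x)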

Number:

If the data set is CSV input, define the parameters of the input data.

Parameters for Data Layer - Number Type

Parameters:

  • Name
  • Number Type - Float, Integer or Binary
  • Dimension of the Vector
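
A numeric Data Layer amounts to a plain vector input. A minimal Keras sketch (the dimension, dtype, and name are illustrative assumptions):

    import tensorflow as tf

    # Number input: a 10-dimensional float vector per example.
    inputs = tf.keras.Input(shape=(10,), dtype="float32", name="data")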

Convolution Layer

Convolutional networks are similar to ordinary ANNs, with neurons holding learnable weights and biases. Each neuron receives several inputs, takes a weighted sum over them, passes it through an activation function, and responds with an output.

Reference: http://cs231n.github.io/convolutional-networks/#overview

Parameters for Convolution Layer

Parameters:

  • No. of Filters
    A filter is represented by a vector of weights that is convolved with the input.
  • Kernel Size (X,Y)
    Kernel size defines the spatial extent (width and height) of each filter used in the convolution operation; the output depth corresponds to the number of filters.
  • Stride (X,Y)
    Stride is the number of pixels by which we slide the filter matrix over the input matrix. A larger stride produces smaller feature maps.
  • Pad - Same or Valid
    ‘Same’ zero-pads the input so the feature map keeps the input's spatial size, while ‘Valid’ applies no padding. A nice feature of zero padding is that it lets us control the size of the feature maps; convolution with zero padding is also called wide convolution, and without it, narrow convolution.
  • Activation - Linear, Sigmoid, ReLU, Tanh, Softmax, None
    Choose one of several activation functions (Linear, Sigmoid, ReLU, Tanh, Softmax), or None to leave the output unactivated.
  • Name
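
For concreteness, the parameters above map onto a standard 2-D convolution layer. A minimal Keras sketch (values and names are illustrative, not the tool's generated code):

    import tensorflow as tf

    conv = tf.keras.layers.Conv2D(
        filters=32,           # No. of Filters
        kernel_size=(3, 3),   # Kernel Size (X, Y)
        strides=(1, 1),       # Stride (X, Y)
        padding="same",       # Pad - Same or Valid
        activation="relu",    # Activation (None leaves the layer linear)
        name="conv1",         # Name
    )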

MaxPooling Layer

Max pooling is done to reduce the number of parameters and prevent over-fitting. It also makes the results more robust to small changes in scale or orientation.

Parameters for MaxPooling Layer

Parameters:

  • Kernel Size (X,Y)
    Kernel size defines the spatial extent (width and height) of the pooling window.
  • Stride Size (X,Y)
    Stride is the number of pixels by which we slide the pooling window over the input matrix. A larger stride produces smaller feature maps.
  • Pad - Same or Valid
    ‘Same’ zero-pads the input so the output keeps the input's spatial size, while ‘Valid’ applies no padding. Zero padding lets us control the size of the output feature maps.
  • Name
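
These parameters correspond to a standard 2-D max-pooling layer; a minimal Keras sketch (values and names are illustrative):

    import tensorflow as tf

    pool = tf.keras.layers.MaxPooling2D(
        pool_size=(2, 2),   # Kernel Size (X, Y)
        strides=(2, 2),     # Stride Size (X, Y)
        padding="valid",    # Pad - Same or Valid
        name="pool1",       # Name
    )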

Dense Layer

A dense (fully connected) layer applies a matrix-vector multiplication followed by a bias, i.e. an affine transform. It is typically used to change the dimensionality of your vector; geometrically, it can rotate, scale, and translate it.

Parameters for Dense Layer

Parameters:

  • Number of Units
  • Activation - Linear, Sigmoid, ReLU, Tanh, Softmax, None
    Choose one of several activation functions (Linear, Sigmoid, ReLU, Tanh, Softmax), or None to leave the output unactivated.
  • Name
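
A minimal Keras sketch of the dense layer configured above (values and names are illustrative):

    import tensorflow as tf

    dense = tf.keras.layers.Dense(
        units=128,             # Number of Units
        activation="sigmoid",  # Activation (None leaves it linear)
        name="dense1",         # Name
    )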

Dropout

A dropout layer is used for regularization: each dimension of the input vector is kept with probability keep_prob and set to zero otherwise.

Parameters for Dropout Layer

Parameters:

  • Keep Prob
  • Name
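
Note that some libraries parameterize dropout by the drop rate rather than the keep probability; in Keras, for example, rate = 1 - keep_prob. A minimal sketch (values are illustrative):

    import tensorflow as tf

    keep_prob = 0.8
    # Keras's Dropout takes the probability of *dropping* a unit,
    # i.e. the complement of Keep Prob.
    drop = tf.keras.layers.Dropout(rate=1.0 - keep_prob, name="drop1")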

Merge Layer

A merge layer merges two or more dimensionally compatible inputs along the specified axis. You can also specify the merge type.

Parameters for Merge Layer

Parameters:

  • Activation - Linear, Sigmoid, ReLU, Tanh, Softmax, None
    Choose one of several activation functions (Linear, Sigmoid, ReLU, Tanh, Softmax), or None to leave the output unactivated.
  • Merge Type - Concat, Sum, Average
  • Merge Axis
  • Name
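
The three merge types correspond to standard combining layers; a minimal Keras sketch with two compatible inputs (shapes and names are illustrative, and an activation layer could follow the merge):

    import tensorflow as tf

    a = tf.keras.Input(shape=(16, 8))
    b = tf.keras.Input(shape=(16, 8))

    merged = tf.keras.layers.Concatenate(axis=-1, name="merge1")([a, b])  # Concat
    summed = tf.keras.layers.Add(name="merge2")([a, b])                   # Sum
    avg    = tf.keras.layers.Average(name="merge3")([a, b])               # Average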

Flatten Layer

Flattening converts an input with two or more dimensions into a single dimension, so it can be consumed by the fully connected part of the network.

Parameters for Flatten Layer

Parameters:

  • Name
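
A minimal Keras sketch (the name is illustrative):

    import tensorflow as tf

    # Collapses all non-batch dimensions, e.g. (7, 7, 64) -> (3136,).
    flat = tf.keras.layers.Flatten(name="flatten1")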

RNN Layer

A recurrent neural network (RNN) layer.

Parameters for RNN Layer

Parameters:

  • Number of Units
  • Number of Cells
  • Return Cell Out - True, False
  • Cell Type
    • LSTM
      LSTMs are explicitly designed to avoid the long-term dependency problem: they can remember information for long periods of time. An LSTM has the form of a chain of repeating modules of neural network.
  • Number of Layers
  • Name
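
A rough Keras analogue of an LSTM-based RNN layer (values and names are illustrative, and mapping ‘Return Cell Out’ to return_sequences is an assumption):

    import tensorflow as tf

    # One LSTM layer; stacking several such layers would correspond
    # to "Number of Layers" greater than one.
    rnn = tf.keras.layers.LSTM(
        units=128,              # Number of Units
        return_sequences=True,  # assumed analogue of "Return Cell Out"
        name="lstm1",           # Name
    )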

Reshape Layer

The Reshape layer can be used to change the dimensions of its input, without changing its data. Just like the Flatten layer, only the dimensions are changed; no data is copied in the process.

Output dimensions are specified by the ReshapeParam proto. Positive numbers are used directly, setting the corresponding dimension of the output blob. In addition, two special values are accepted for any of the target dimension values:

  • 0 means “copy the respective dimension of the bottom layer”.
  • -1 stands for “infer this from the other dimensions”.

As another example, specifying reshape_param { shape { dim: 0 dim: -1 } } makes the layer behave in exactly the same way as the Flatten layer.

You can add additional dimensions using ‘Add Dimension’.

Reference - http://caffe.berkeleyvision.org/tutorial/layers/reshape.html

Parameters for Reshape Layer

Parameters:

  • Dimension_1
  • Add Dimension
  • Name
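
For comparison, Keras's Reshape layer supports -1 for an inferred dimension (though not Caffe's dim: 0 "copy" convention); a minimal sketch (the name is illustrative):

    import tensorflow as tf

    # Reshape any input to a single inferred dimension, e.g. (28, 28) -> (784,);
    # the batch dimension is left untouched.
    reshape = tf.keras.layers.Reshape(target_shape=(-1,), name="reshape1")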

Loss Layer

A loss layer compares the network's output with the target values and computes the objective that training minimizes.

Parameters for Loss Layer

Parameters:

Output Layer

This is a non-training layer. It can be used to receive output from any layer during API creation. Just drop this layer on the canvas and connect your desired layer to it.

Output Layer - No Parameters

Parameters: No Parameters
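
Conceptually, attaching an Output Layer is like building a second model that exposes an intermediate layer's activations. A minimal Keras sketch (the model and layer names are illustrative):

    import tensorflow as tf

    inputs = tf.keras.Input(shape=(64, 64, 3), name="data")
    x = tf.keras.layers.Conv2D(8, (3, 3), activation="relu", name="conv1")(inputs)
    outputs = tf.keras.layers.Flatten(name="flatten1")(x)
    model = tf.keras.Model(inputs, outputs)

    # "Connecting" an output to conv1 amounts to requesting that layer's output.
    tap = tf.keras.Model(inputs=model.input,
                         outputs=model.get_layer("conv1").output)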