[Figure: a basic artificial neural network with an input layer, a hidden layer, and an output layer]

Artificial Neural Networks, an Introduction

Artificial neural networks help solve problems in data science. They model neural networks of the brain. 

Neurons function as organic switches that change their output depending upon the strength of the electrical or chemical inputs. A neural network within a person’s brain is an incredibly interconnected network of such neurons.

Repeatedly activating particular neural connections over others reinforces those connections, and that reinforcement is how learning happens. When a given input produces the desired outcome, feedback strengthens the connections involved, making the network more likely to generate that output the next time it receives that input.

Artificial Neural Networks Mimic Systems of Brain Neurons

Artificial neural networks provide a functional and mathematical model for imitating the behavior of a system of neurons. If the models are simple enough, they can even be worked out on paper. To be genuinely useful, however, artificial neural networks need far greater complexity, and that requires high-speed computation.

These techniques and models have been around for decades. 

However, it wasn’t until the recent advances in computing speed and power that complex artificial neural network models have become viable options for problem-solving.

Artificial neural networks (ANNs) can be trained through a supervised or unsupervised process. In supervised training, the network is given matched pairs of input and output samples, with the goal of teaching the ANN to deliver the desired output for a given input.

One typical example is an email spam filter. Deciding whether or not an email is spam requires training the ANN. The input training data consists of counts of the words and symbols in the subject line and body of each email, and the output training data labels each email as spam or not spam.

After numerous example emails pass through the neural network, the network learns which input data makes it probable that an email is spam. This learning process adjusts the weights of the ANN's connections.
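To make that concrete, here is a minimal sketch of the idea, not the actual filter described above: a single sigmoid node (the sigmoid function is introduced in the next section) whose weights are adjusted by simple gradient descent. The word and symbol counts, labels, and learning rate are all made up for illustration.

```python
import numpy as np

# Hypothetical training data: counts of a few spammy words/symbols per email
# (e.g. "free", "$", "!!!") and a label (1 = spam, 0 = not spam).
X = np.array([[3.0, 2.0, 4.0],   # a spammy-looking email
              [0.0, 0.0, 1.0],   # a normal email
              [4.0, 1.0, 3.0],
              [1.0, 0.0, 0.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A single sigmoid node: weighted sum of the counts, then the activation.
weights = np.zeros(X.shape[1])
bias = 0.0
learning_rate = 0.1

for _ in range(1000):
    predictions = sigmoid(X @ weights + bias)
    error = predictions - y                             # how far each output is from its label
    weights -= learning_rate * (X.T @ error) / len(y)   # adjust the connection weights
    bias -= learning_rate * error.mean()

print(np.round(sigmoid(X @ weights + bias), 2))  # outputs move toward the labels [1, 0, 1, 0]
```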

Activation

ANNs like these employ an activation function to simulate the switching behavior of a neuron. An activation function has a value above which it “switches on.”

For example, once the input to the activation function rises above a certain value, the output changes state: from 0 to 1, from -1 to 1, or from 0 to some positive value. The sigmoid function is often used as the activation function.

The training algorithm also uses the derivative of the sigmoid function.

The sigmoid function is f(x) = 1 / (1 + e^(-x)).

[Graph of the sigmoid function, which is used as the activation function for artificial neural networks]
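As a quick illustration (assuming NumPy is available), here is the sigmoid and its derivative in code; the sample input values are arbitrary.

```python
import numpy as np

def sigmoid(x):
    # f(x) = 1 / (1 + e^(-x)): squashes any input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    # f'(x) = f(x) * (1 - f(x)), the form the training algorithm relies on
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid(0.0))             # 0.5, the midpoint of the "switch"
print(sigmoid(6.0))             # ~0.9975, effectively switched on
print(sigmoid(-6.0))            # ~0.0025, effectively switched off
print(sigmoid_derivative(0.0))  # 0.25, the slope is steepest at the midpoint
```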

In networks of artificial neurons, the outputs of one neuron become the inputs of others. These artificial neurons are called nodes.

Each node takes multiple weighted inputs and applies the activation function to the sum of those weighted inputs to generate an output. The weights change the slope of the sigmoid function.
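A single node can therefore be sketched in a few lines; the input values, weights, and bias below are hypothetical and simply show the weighted-sum-then-activation pattern.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def node_output(inputs, weights, bias=0.0):
    # Weighted sum of the inputs, then the activation function.
    return sigmoid(np.dot(inputs, weights) + bias)

inputs = np.array([0.5, 0.3, 0.9])     # hypothetical values arriving from upstream nodes
weights = np.array([1.0, 2.0, -1.5])   # one weight per input connection
print(node_output(inputs, weights))    # a single output value between 0 and 1
```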

[Graph of the sigmoid function with different weights applied, as used in the nodes of artificial neural networks]

Applying this function to the weighted sum of the inputs models the effect of multiple weights on a node. A weight alone, however, only changes the steepness of the curve; the node still switches on around an input strength of zero.

When you want the node to switch on only after the input strength exceeds some other value, such as 1, you have to use a bias.

A bias, whether positive or negative, shifts the activation curve to the left or right by a certain amount.
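The sketch below (again assuming NumPy, with arbitrary sample inputs) shows both effects: a larger weight makes the switch sharper, and a bias moves the switch point.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # a few sample input strengths

# A larger weight makes the curve switch from 0 to 1 more sharply.
print(sigmoid(1.0 * xs))   # gentle slope around zero
print(sigmoid(5.0 * xs))   # steep, almost step-like, but still centred on zero

# A bias shifts the switch point along the x-axis (here to around x = 1.6).
print(sigmoid(5.0 * xs - 8.0))
```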

[Graph of the sigmoid function with a weight of 5 and a bias applied, as used in the nodes of artificial neural networks]

If this node were part of an artificial neural network, its output would become an input for other nodes.
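For example, a tiny network like the one in the figure at the top of this post, with an input layer, one hidden layer, and an output, can be sketched as follows; the weights here are made up, and in practice they would be learned during training.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Made-up weights for a tiny network: 3 inputs -> 2 hidden nodes -> 1 output node.
w_hidden = np.array([[0.2, -0.4, 0.7],
                     [0.5, 0.1, -0.3]])
w_output = np.array([1.5, -2.0])

inputs = np.array([0.9, 0.1, 0.4])

hidden = sigmoid(w_hidden @ inputs)   # the hidden nodes' outputs...
output = sigmoid(w_output @ hidden)   # ...become the inputs to the output node
print(hidden, output)
```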

I will further explain the workings of ANNs in future posts.

Dr. Andy Thomas wrote a great introduction to artificial neural networks for his website, Adventures in Machine Learning; it forms Part A of his forthcoming book, 0 to Tensorflow. His introduction goes into much more depth than my brief coverage here.

Reference

Thomas, A. “An introduction to neural networks for beginners,” (n.d.). Available for free download from adventuresinmachinelearning.com.
