Neural Networks I
This page explores the basics of neural networks, and shows how to start building neural networks from scratch using TensorFlow. To get the most out of this page, you should be familiar with Tensors in TensorFlow, and with the mathematical concept of Gradients.
Neural Network Anatomy
As the name suggests, a neural network is a network, or graph, composed of interconnected units called neurons (sometimes these neurons are simply called units). Each neuron in the network receives one or more inputs and produces an output, which may be the final output of the neural network, or the input to another neuron.
The simplest neural network imaginable is composed of a single neuron, which receives a single input, and produces a single output.
Generally, each connection to a neuron (that is, each “edge” in the graph) has an associated weight. The output of any neuron is the weighted sum of its inputs.
So, for the single neuron above, the output is computed using the following equation, where \( w \) is the weight of the connection between the input and the neuron:

\( \text{output} = w \cdot \text{input} \)
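As a minimal sketch, this single-neuron computation can be expressed directly with TensorFlow tensors (the values of `x` and `w` here are arbitrary examples):

```python
import tensorflow as tf

# A single neuron with a single input: output = w * x
x = tf.constant(3.0)  # the input value
w = tf.constant(2.0)  # the weight on the input connection
output = w * x
print(output.numpy())  # 6.0
```

In a real network the weight would be learned during training rather than fixed by hand.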
When a neuron has multiple inputs, each input is multiplied by its corresponding weight, and the products are summed to produce the output:

\( \text{output} = \sum_i w_i \cdot \text{input}_i \)
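The multi-input case can be sketched the same way, using an element-wise product followed by a sum (the specific input and weight values are arbitrary examples):

```python
import tensorflow as tf

# A single neuron with three inputs: output = sum_i(w_i * x_i)
x = tf.constant([1.0, 2.0, 3.0])   # one value per input
w = tf.constant([0.5, -1.0, 2.0])  # one weight per input connection
output = tf.reduce_sum(w * x)      # weighted sum of the inputs
print(output.numpy())  # 0.5 - 2.0 + 6.0 = 4.5
```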
To describe neural networks, we often refer to the different layers of the network. When the output of one neuron is used as the input to another, we say that the two neurons are in separate layers. The diagram below shows a neural network with two layers:
Fully Connected Layers
When building neural networks with TensorFlow, we construct them layer by layer, and the most basic type of layer is the fully connected layer, or dense layer. In a fully connected layer, each neuron is connected to each of the layer's inputs.
The two layers in the neural network shown earlier are each examples of fully connected layers; likewise, the illustration below shows a fully connected layer with two neurons and two inputs:
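In TensorFlow, a fully connected layer is provided by `tf.keras.layers.Dense`. As a sketch of the layer described above, with two neurons and two inputs (note that `Dense` also adds a learnable bias term by default; passing `use_bias=False` matches the pure weighted-sum description used so far):

```python
import tensorflow as tf

# A fully connected (dense) layer with 2 neurons and 2 inputs.
layer = tf.keras.layers.Dense(units=2, use_bias=False)

# A batch containing one example with two input values.
inputs = tf.constant([[1.0, 2.0]])
outputs = layer(inputs)  # one output per neuron

print(layer.kernel.shape)  # (2, 2): one weight per input-neuron pair
print(outputs.shape)       # (1, 2)
```

Because each of the two neurons receives both inputs, the layer holds a 2×2 weight matrix: one weight for every input-neuron connection.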