Using Neural Networks to Trade

BUILDING BRAINS

July/August 1997

“Designing an effective neural network for financial market analysis is more an art than a science,” says Lou Mendelsohn, a well-known developer of neural networks and CEO of Florida-based Market Technologies.

A network typically consists of three layers – an input layer, an output layer and a hidden layer.

The input layer simply presents data to the network. The data is not raw but has been preprocessed, or ‘massaged’.

Outputs can take many forms, including real numbers such as the next day’s high or closing price, or predictions of moving averages.
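To make this concrete, here is a minimal sketch in Python of preprocessed inputs paired with a next-day-high target. The scaling scheme, the sample prices and the choice of columns are illustrative assumptions, not a specification from the article.

```python
# Illustrative only: raw prices are 'massaged' before being presented to
# the input layer, and the target output is the next day's high.
import numpy as np

highs  = np.array([101.0, 102.5, 101.8, 103.2, 104.0])   # stand-in data
closes = np.array([100.5, 102.0, 101.0, 102.9, 103.5])

# One simple (assumed) form of preprocessing: scale prices into [0, 1]
def minmax(x):
    return (x - x.min()) / (x.max() - x.min())

inputs  = np.column_stack([minmax(highs[:-1]), minmax(closes[:-1])])
targets = highs[1:]            # output: the next day's high
```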

The hidden layer – composed of neurons connected to neurons in the input and output layers – is where the network recodes input data to capture hidden relationships, thus allowing the network to make generalisations about market behaviour.

Too few neurons in the hidden layer prevent the network from correctly mapping inputs to outputs. Too many impede generalisation, letting the network ‘memorise’ patterns rather than discover underlying relationships.
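The three-layer structure described above can be sketched briefly in code. This is not Mendelsohn’s software; the layer sizes, weight initialisation and activation function are all hypothetical choices for illustration.

```python
# A minimal sketch of an input layer, a hidden layer and an output layer,
# connected by weights, using NumPy. Sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_inputs  = 5    # preprocessed price/indicator values (assumed)
n_hidden  = 8    # too few hampers the mapping, too many invites memorisation
n_outputs = 1    # e.g. a forecast of the next day's high (assumed)

# Connection weights between input->hidden and hidden->output layers
W1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_outputs))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Pass one preprocessed input vector through the network."""
    hidden = sigmoid(x @ W1)   # hidden layer recodes the inputs
    output = hidden @ W2       # output layer produces the forecast
    return hidden, output
```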

Once the structure is operative, training of the system can begin. During training, the network forecasts its outputs, errors are computed and ‘connection weights’ between neurons are adjusted prior to the next training iteration.

“Connection weights are altered by an algorithm known as the ‘learning law’ to minimise subsequent output errors. Perhaps the most widely used learning law is the generalised delta rule, or backpropagation method,” says Mendelsohn.
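A minimal sketch of this training loop is shown below: the network forecasts its outputs, the errors are computed, and the connection weights are adjusted by backpropagation before the next iteration. The training data, layer sizes and learning rate are stand-ins, not values from the article.

```python
# Illustrative backpropagation (generalised delta rule) for a network with
# one hidden layer; squared-error loss, sigmoid hidden units.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: preprocessed inputs and target outputs
X = rng.normal(size=(200, 5))       # 200 training patterns, 5 inputs each
y = rng.normal(size=(200, 1))       # e.g. next day's high (stand-in data)

W1 = rng.normal(scale=0.1, size=(5, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 1))   # hidden -> output weights
lr = 0.01                                 # learning rate (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    # Forward pass: the network forecasts its outputs
    hidden   = sigmoid(X @ W1)
    forecast = hidden @ W2

    # Error between the forecast and the actual values
    error = forecast - y

    # Backward pass: gradients of the squared error w.r.t. each weight
    grad_W2     = hidden.T @ error
    grad_hidden = (error @ W2.T) * hidden * (1.0 - hidden)
    grad_W1     = X.T @ grad_hidden

    # Adjust connection weights before the next training iteration
    W2 -= lr * grad_W2 / len(X)
    W1 -= lr * grad_W1 / len(X)
```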

When training a system, it may be necessary to modify network parameters, network architecture, the learning law, input data and preprocessing. The output itself may even need to be redefined.

A trained network can be used to make forecasts with real-time data. Daily updates are performed, feeding the network each day’s inputs just as during training, except that no weight adjustments are made while the network makes its predictions.
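The forecasting phase can be sketched as a forward pass with the weights held fixed. The weights and the daily input vector here are stand-in values; the point is simply that the learning law is no longer applied.

```python
# Illustrative daily update: pass the day's preprocessed inputs forward
# through the trained network; no weight adjustments are made.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def daily_forecast(todays_inputs, W1, W2):
    """Forward pass only -- no backpropagation during prediction."""
    hidden = sigmoid(todays_inputs @ W1)
    return hidden @ W2

# Stand-in trained weights and one day's preprocessed inputs
rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(5, 8)), rng.normal(size=(8, 1))
print(daily_forecast(rng.normal(size=5), W1, W2))
```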