By Irene Aldridge
Once you add neural networks to your existing analytical toolset, you can benefit from detecting patterns in data that cannot be captured otherwise. This article shows, with IBM as an example, how neural networks unlock statistical insight you could not have achieved without them.
Technical analysis is a discipline that has managed to survive it all for more than 100 years. From the 1920s, and possibly even earlier, technical analysts persevered in distilling meaning from patterns in the data. Numerous generations of quants have declared technical analysis dead, only to resurrect it in various Auto-Regressive Moving Average (ARMA) specifications.
Today, neural networks are the new iteration of technical analysis, delivering nimble and highly profitable pattern recognition via machine learning. Figure 1, for example, shows cumulative profitability of a five-layer neural network trained on daily IBM data.
Figure 1. Cumulative performance of daily reinvested strategy for IBM stock based on a five-layer neural network.
The latest variation of technical analysis comes to us as neural networks. Established technical methods, such as the “head and shoulders” pattern or even the more sophisticated ARIMA-based methodologies, are designed to look for specific patterns. A “head-and-shoulders” formation, for example, denotes three consecutive peaks in the price data which, when identified, prompt analysts to make trading decisions. Regression-based methods look for persistent relationships between financial returns and a set of indexes, drawing on prior return data and other factors; such models often assume a linear or other specific functional dependency between future returns and past data.
In contrast, neural networks make no assumptions about the functional form of the relationship between future returns and past data. Give a neural network a bag of data, tell it which variable to predict, and set it to work! The network will then iterate through a myriad of superimposed linear and nonlinear functions to find the set that fits your data best. An epitome of machine learning, neural networks do what the field’s name suggests: they let machines learn how the inputs and outputs fit together and suggest the most plausible out-of-sample outcome.
As we show in our new book, “Big Data Science in Finance” (with Marco Avellaneda, Wiley, 2021), neural networks are staged sequences of linear and nonlinear transformations. Each stage, known as a neural network layer, is assigned one specific functional: it takes the data from the previous layer, converts it using that layer’s functional, and feeds the result to the next layer.
The neural network behind the IBM results in Figure 1 contained an input layer, a linear transformation layer, a layer applying a tanh function, another linear layer, and a final output layer. Typically, successive layers have a diminishing number of inputs and outputs. Once specified, a neural network repeatedly cycles the given data through the pre-specified sequence of layers until the data prediction error, known as the loss, is minimized. The output is a nonlinear model that can be used to predict future values of, say, financial returns based on past data.
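As a rough illustration, here is a minimal PyTorch sketch of an architecture like the one just described: a linear transformation, a tanh layer, and another linear transformation, with the input and output layers implicit in the tensor shapes, trained by cycling the data through the layers to minimize the loss. The layer widths, lookback window, optimizer, and randomly generated data are illustrative assumptions, not the exact specification behind Figure 1.

```python
import torch
import torch.nn as nn

# Illustrative five-layer structure: input -> linear -> tanh -> linear -> output.
# Layer widths shrink toward the output; all sizes here are assumptions.
model = nn.Sequential(
    nn.Linear(20, 10),   # linear layer: 20 lagged daily returns in (assumed lookback)
    nn.Tanh(),           # nonlinear tanh layer
    nn.Linear(10, 1),    # linear layer feeding the single-value output
)

loss_fn = nn.MSELoss()                                    # prediction error ("loss")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy data stands in for a real daily return series.
X = torch.randn(500, 20)   # 500 samples, each with 20 past returns
y = torch.randn(500, 1)    # next-day returns to predict

# Repeatedly cycle the data through the layers until the loss is minimized.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```

After training, calling `model` on a fresh window of past returns produces the out-of-sample forecast the article describes.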
Of course, some work on the part of the analyst or researcher is still required. First, the researcher needs to select the input data. As the old saying goes, “garbage in, garbage out”: carelessly selected data may produce spurious and irrelevant results, wasting the researcher’s time.
Second, a competent researcher should have a feel for what activation functions to select. As we show in Chapter 2 of the “Big Data Science in Finance” book, the activation functions should ideally reflect the distribution of the dependent variable, although various discrete implementations are also possible.
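To make the choice concrete, in a framework such as PyTorch the activation function is a one-line change in the model definition. The pairings below between activation shape and target distribution are a rough heuristic sketch, not a prescription from the book.

```python
import torch.nn as nn

# Swapping activation functions is a one-line change; the choice should
# reflect the distribution of the variable being predicted (heuristic pairings).
activation = nn.Tanh()       # bounded in (-1, 1): suits demeaned, scaled returns
# activation = nn.ReLU()     # zero below, unbounded above: suits non-negative targets
# activation = nn.Sigmoid()  # bounded in (0, 1): suits probabilities

model = nn.Sequential(nn.Linear(20, 10), activation, nn.Linear(10, 1))
```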
Last but not least, a qualified researcher will carefully balance the depth of the neural network, that is, the number of layers, against the activation functions assigned to each layer, making sure the network is not deep merely for the sake of added complexity but has a meaningful implementation instead.
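One way to exercise that discipline is to compare candidate depths on held-out data rather than defaulting to the deepest network. The sketch below, with a hypothetical helper `make_net` and dummy data in place of a real return series, shows one such comparison.

```python
import torch
import torch.nn as nn

def make_net(depth: int, width: int = 10, n_inputs: int = 20) -> nn.Sequential:
    """Build a network with `depth` hidden tanh layers (sizes are illustrative)."""
    layers, n_in = [], n_inputs
    for _ in range(depth):
        layers += [nn.Linear(n_in, width), nn.Tanh()]
        n_in = width
    layers.append(nn.Linear(n_in, 1))
    return nn.Sequential(*layers)

# Dummy train/validation split standing in for real return data.
X_train, y_train = torch.randn(400, 20), torch.randn(400, 1)
X_val, y_val = torch.randn(100, 20), torch.randn(100, 1)

# Deeper is not automatically better: pick the depth that holds up out of sample.
for depth in (1, 2, 3, 4):
    net = make_net(depth)
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(X_train), y_train)
        loss.backward()
        opt.step()
    with torch.no_grad():
        val_loss = nn.functional.mse_loss(net(X_val), y_val)
    print(f"depth={depth}: validation loss {val_loss.item():.4f}")
```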
In the end, a successful neural network requires knowledgeable implementation. Like a great technical analyst, a good neural network researcher will develop a “feel” for the data and for which variables and functionals work, which do not, and in which settings. Different input and output data will require different treatments and approaches. For a step-by-step video covering how to build a return-predicting neural network in Python using PyTorch, please see this video.
Irene Aldridge is a co-author of “Big Data Science in Finance” (with Marco Avellaneda, Wiley, 2021). She is a Managing Director of AbleMarkets, a Big Data platform for finance, a portfolio manager, and an Adjunct Professor in the Cornell Financial Engineering program in Manhattan.