Last weekend I fielded several calls from friends asking my opinion of what was going on in the stock market after the “Flash Crash.”
Some had heard there was a big order mistakenly entered (not true), and others had heard that there was some kind of computer program glitch (also not true).
After the first couple of calls, I settled on using the analogy of the cyber-existential movie series The Matrix. As if they were reading my mind, the Turner Classic Movies cable channel featured the first in that series as a “classic” over the weekend.
Speaking as one who was at the cutting edge of computer technology in the 70’s, a Wall Street “rocket scientist” in the 80’s, early adopter for cell phones and personal multi-tasking in the 90’s, and born-again Luddite today, I have to say that progress is not necessarily defined by giving more and more of our decision making over to algorithms.
Does this mean I reject using computers to help make investment decisions? Hardly. Do I believe the computers are always right? Absolutely not.
Remember, I was one of the people who programmed and used them, so I know they did what I told them to do, not what I wanted them to do.
If I lined up all the times I or programmers I supervised made a mistake or didn’t think of every possible combination of inputs, the list would extend to Chicago, I imagine.
But let’s start at the beginning, because stories make more sense that way.
When Charles Dow put together his eponymous theory more than a hundred years ago, he was giving a way to chart the interplay of human emotions and the economy in buying and selling stocks.
Ralph Elliott was doing the same with his Wave Theory, published in the 1930’s and 1940’s, and updated in the 1970’s by Robert Prechter and a number of other practitioners.
Benoit Mandelbrot’s self-replicating fractal patterns from nature burst into the popular imagination in the 1980’s, and in recent years he has published several books applying those mathematical patterns to finance.
A skeptic might say that having many different mapping methods used to describe and predict the same processes might make you discount all of them, but that would miss one of the basic wonderful truths about mathematics.
In math, the description of a function (like, for example, the graph of the price of a stock) might be in terms of polynomial functions, or might be in terms of sine wave functions, or even in terms of exponential functions. We describe these alternate descriptions of the same (and all) real continuous functions as using different functional basis sets to span the space.
For the engineer, physicist or mathematician choosing which set of basis functions to use, the set of basis functions needing the fewest terms (number of separate basis functions) to approximate the target curve (that price graph) might be the best to use. Before computers, there can be little doubt that the quickest, shortest expansion was the best choice.
Think of the Taylor expansion (polynomials) versus the Fourier series that uses sine waves of various frequencies. The set of functions that needed the fewest terms to get within the observation error of the real world system being observed or predicted was “best”, because it took the fewest hand calculations.
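The trade-off can be seen in a toy sketch (the sample function and all numbers here are mine, purely for illustration): a signal built from two sine waves is its own two-term Fourier series, while a Taylor polynomial needs many more terms to match it over a full cycle.

```python
import math

def f(x):
    # A toy "price-like" periodic signal: exactly two sine components,
    # so its Fourier series needs only two basis functions.
    return math.sin(x) + 0.5 * math.sin(3 * x)

def taylor_approx(x, terms):
    # Taylor series about 0: sin(a*x) = sum of (-1)^k (a*x)^(2k+1) / (2k+1)!
    total = 0.0
    for a, w in ((1.0, 1.0), (3.0, 0.5)):
        for k in range(terms):
            total += w * (-1) ** k * (a * x) ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total

def max_error(approx, terms, n=200):
    # Worst-case error of the approximation over one full cycle [-pi, pi].
    grid = (-math.pi + 2 * math.pi * i / n for i in range(n + 1))
    return max(abs(f(x) - approx(x, terms)) for x in grid)

# The polynomial basis needs many terms to approach what the sine
# basis gets with two; the error shrinks only as terms pile up.
for terms in (5, 10, 15):
    print(terms, max_error(taylor_approx, terms))
```

With 5 polynomial terms the worst-case error is still huge at the edges of the interval; even 10 terms leave a visible gap, while 15 finally get close. The “best” basis set depends entirely on the shape being described.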
That was the world of Wall Street quantitative research when I entered the field in the early 1980’s. We were looking for groups of basis functions that could describe the various components that led to mortgage prepayments, and looking at our “models” in terms of the error size over the data we had — the history of a hundred million mortgages over a fifty year period.
We had plenty of potential “causes” for prepayments, including house prices, wages, inflation, mortgage rates, unemployment rates, and even weather and seasonality (like the school year, income tax refunds or vacation season for second homes).
We were notorious for using up every computing cycle our companies could afford to buy when we tested our models. The process of model-building was more than brute force computing, though.
It took a certain amount of understanding of the economic forces and emotional stimuli to get the models right. For example, nothing caused a surge of refinancing like having mortgage rates drop a lot, and then go back up.
As people realized they had missed the absolute bottom in rates with their refinancing, they became desperate to lock in the savings by paying off their higher-rate mortgages. We could reliably predict prepayments running twice as fast three months after the absolute bottom in rates as at the very bottom, so we modeled it.
We even had a surge of prepayments built into our models if the rates were the lowest in 3, 4, 5 or 10 years. We called it the “USA Today effect” because the headline was sure to run in that paper and on TV news if the rate in Freddie Mac’s Thursday announcement was the lowest in X years.
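A minimal sketch of how those two effects might enter a model — every coefficient below is invented for illustration, and this is not the model we actually ran:

```python
def prepayment_speed(loan_rate, market_rate, months_since_low, lowest_in_years):
    """Toy annual prepayment rate; all coefficients are made up.
    Rates are in percent (e.g. 8.0 for an 8% loan)."""
    base = 0.06                                    # baseline turnover: moves, defaults
    incentive = max(0.0, loan_rate - market_rate)  # refi incentive, in rate points
    refi = 0.04 * incentive                        # more incentive, faster prepayment
    if 0 < months_since_low <= 6:                  # the "regret" surge: speeds roughly
        refi *= 2.0                                # double a few months past the bottom
    if lowest_in_years >= 3:                       # "USA Today effect": headline-driven
        refi += 0.02                               # bump at an X-year low in rates
    return base + refi

at_bottom = prepayment_speed(8.0, 6.0, 0, 0)     # 0.06 + 0.08 = 0.14
after_regret = prepayment_speed(8.0, 6.0, 3, 0)  # 0.06 + 0.16 = 0.22
```

The point of the sketch is the shape, not the numbers: the refinancing component peaks after the rate bottom, not at it, and a headline-making X-year low adds its own surge on top.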
Emotions are crucial in important financial decisions. That’s why price and yield levels in stocks and bonds have easily identified “support” and “resistance”. Basically, regret establishes those levels.
Resistance tops are high prices that owners of the stock remember and wish they had sold at, after suffering through later declines, sometimes for years. When the stock finally gets near that old high a second, third, or fourth time, some owners are determined to take the profits they wish they had taken in the past. The stock really can’t get past that price until all those potential sellers have sold. Hence the term resistance.
Support bottoms are low prices where buyers wish they had bought before. Support and resistance are both places where lots of people bought and sold before, which drives sellers at support to sell before their losses deepen, and buyers at resistance to worry that the stock might get away from them (especially if they are short sellers).
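One simple way to make this mechanical — a toy sketch of my own, not any standard charting package’s method — is to find the swing highs and lows in a price series and count how often prices cluster at the same level:

```python
from collections import Counter

def swing_points(prices, window=2):
    # Local highs and lows: a price that is the extreme of its neighborhood.
    highs, lows = [], []
    for i in range(window, len(prices) - window):
        seg = prices[i - window:i + window + 1]
        if prices[i] == max(seg):
            highs.append(prices[i])
        if prices[i] == min(seg):
            lows.append(prices[i])
    return highs, lows

def key_levels(prices, window=2, bucket=1.0):
    # Bucket nearby swing points; levels touched repeatedly are the
    # resistance (from highs) and support (from lows) described above.
    highs, lows = swing_points(prices, window)
    resistance = Counter(round(h / bucket) * bucket for h in highs)
    support = Counter(round(l / bucket) * bucket for l in lows)
    return resistance, support

# A price path that stalls twice near 50 and bounces twice near 40:
path = [40, 45, 50, 46, 42, 40, 44, 50, 47, 41, 40, 45, 50, 48]
resistance, support = key_levels(path)
```

The levels with the most touches are exactly the prices the most traders remember, which is what gives them their gravitational pull.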
All technical analysis systems are an attempt to model, with algorithms and formulae, the behavior of the participants in the market and their emotions. If enough people use the models, they become self-reinforcing.
At the next level are the behavioral models large investors develop to help them make their sizable moves in and out of positions.
I was lucky enough to see this first-hand at a firm that ran a reasonably large stock portfolio. When I wanted to buy a substantial position in an illiquid stock (say, half or more of the average daily volume), we used algorithmic trading programs to submit our orders to any and all exchanges. The actual trades were in 100 to 500 share lots, but the algorithm would place the orders somewhat randomly, backing off if the price started to go away from us. It would even occasionally sell stock if the price rose too much.
The programs could literally be set to buy a specified amount of stock over a specified period, and the portfolio manager could monitor the algorithm’s progress and stop it if the price were too high. Before the buy program was even started, the traders on the equity desk would tell me what sort of premium (how many cents) I might expect to pay versus how the stock would trade without my orders.
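The core slicing logic can be sketched in a few lines — this is a toy of my own construction, not any vendor’s actual algorithm, and the lot sizes and premium limit are invented:

```python
import random

def sliced_buy(target_shares, start_price, price_path, max_premium=0.50, seed=7):
    """Toy sketch of the slicing logic described above: work a large
    buy order in small random lots, backing off whenever the price
    runs too far above where it started."""
    rng = random.Random(seed)
    bought, fills = 0, []
    for price in price_path:
        if bought >= target_shares:
            break
        if price > start_price + max_premium:
            continue                      # back off: price is running away
        lot = min(rng.choice([100, 200, 300, 400, 500]),
                  target_shares - bought)
        fills.append((price, lot))
        bought += lot
    return fills, bought

# Work 2,000 shares against a drifting price; ticks above $20.50 are skipped.
path = [20.00, 20.10, 20.60, 20.30, 20.70, 20.20, 20.40, 20.10, 20.35, 20.25]
fills, bought = sliced_buy(2000, 20.00, path)
```

A real implementation would also pace the buying against a schedule and, as described above, occasionally sell into a runaway price, but the randomized small lots and the back-off rule are the essence.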
Now comes The Matrix. If I could use a program to buy my stocks so Joe Retail or the market makers would not take (as much) advantage of me, another programmer could use a program to recognize my programs.
Then they could rent space 300 feet from the floor of the Exchange and get their buy and sell orders in a few thousandths of a second faster than me, a hundredth of a penny per share away from the price my program would be paying or asking. A millionth of a second later they could match their trade, taking out that hundredth of a penny, when my order finally arrived with the electronic delay of being submitted two states away, through several switches and more than a hundred miles of copper.
Their programs (High Frequency Trading) would become a “tax” on me and everyone else putting in orders the “old fashioned” electronic way.
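The physics behind that edge is just propagation arithmetic. A rough back-of-envelope sketch, assuming signals travel at about two-thirds the speed of light in copper or fiber and using illustrative round-number distances:

```python
# Back-of-envelope propagation arithmetic behind the co-location edge.
C = 299_792_458        # speed of light in vacuum, m/s
V = (2 / 3) * C        # assumed propagation speed in cable, m/s

colocated_m = 100      # ~300 feet from the exchange's matching engine
remote_m = 160_000     # ~100 miles of cable, two states away

def one_way_latency(meters):
    return meters / V  # seconds

edge = one_way_latency(remote_m) - one_way_latency(colocated_m)
print(f"co-location edge: about {edge * 1000:.1f} milliseconds per message")
```

A hundred miles of cable costs the remote trader nearly a full millisecond each way, while the co-located machine is under a microsecond from the matching engine. In a market where orders are matched in microseconds, that is an eternity.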
But wait, there’s more. Since all the technical trading models run off the same historical data, on any large directional move in the markets, the subtle differences that came from choosing Taylor or Fourier expansions (figuratively speaking) would simply turn into a massive agreement among all the models that the trading was going to be one way only.
Without the chemical analog computers (people) slowing everything down by getting involved in the buying and selling, the digital computers owned by the hedge funds, mutual funds and pension funds could merrily put in their orders to sell proxies for the S&P 500 index (baskets of a few dozen stocks that programs showed closely modeled all 500, if weighted properly).
Of course some stocks, like Accenture, had heavier weighting in the baskets than they had in the index; it went from $40 to a penny. Even monsters like Procter & Gamble were free to trade down far more than the index or the market was trading down ($60 to $40 in minutes). With close to 80 different places to trade, rather than just the Big Board that used to operate with human beings filling the orders, the computers could simply keep going as all the support levels crashed in front of their relentless selling, feeding on itself.
Then the HFT algorithms got their electronic synapses into the game, and they managed to get the orders executed even faster than the mutual funds by sending them a few hundred feet away over fiber optics.
The program Smith was taking over the program Oracle (and all the other programs).
All we dumb humans could do was watch and wonder, as the electronic ticker told us the Dow Jones 30 industrials were gapping down by 50 to 100 points in the amount of time it took to refresh the LED displays.
More later on some of the species within the Bad Bot genus.