AI Won't Erase Stock Pickers Anytime Soon
Musings on the fundamental hurdle that AI must leap in order to beat the humans at a very, very human game.
Thanks for reading The Market Beat! Today we will be doing a deep-ish dive on the topic of AI beating the market, and why I think it’s unlikely to take hold anytime soon (at least in the sense that the popular imagination envisions it).
Is AI Coming For Your Portfolio?
It was only a matter of time, guys. Even the most shortsighted of market participants has surely seen all the buzz surrounding AI over the past two years and thought to themselves, “I wonder how long it will take until it picks stocks.”
Well, the day has come (and passed) as a recent Bloomberg article outlines. Like most things AI-related, however, the news comes with considerable hype that should be carefully parsed out before jumping head-first into the shallow end of the AI pool.
The most obvious application of AI to the markets is for good old fashioned stock picking. In some ways, it’s the next logical step of quant trading. The only problem is that the technology doesn’t seem to be much better than regular humans. Consider:
The irony of investors’ piling into AI is that the technology has for years struggled to crack the actual business of investing. Machines get bamboozled by noisy markets and can be caught off guard by fickle trends, and finance—surprisingly—sometimes lacks the oceans of data that underpin the technology in other domains.
A Eurekahedge index of 12 funds using AI has trailed its broader hedge fund index by about 14 percentage points over the past five years. According to Plexus Investments, an asset manager that tracks the returns of boutique AI funds, only 45% outperform the benchmarks they measure themselves against.
AI is struggling, in other words, because the market is made up of human (and, by extension, computer) participants with myriad goals and objectives, all while being super-dosed with herd mentality and irrational tendencies.
Of course, this isn’t to say that AI couldn’t eventually do a great job picking stocks and radically outperform humans. It’s just that doing so would require a level of calculation and ability approaching the singularity, and at that point (unless its makers had fashioned an AI after Gordon Gekko), the bot would probably see more utility in enslaving or eradicating humanity than in generating premium returns for LPs.
In essence, the philosophical point is that:
Humans are unable to understand all inputs in the markets at all times,
Humans would need to construct an AI able to understand all market inputs at all times in order to consistently beat the market,
It seems unlikely that an AI capable of teaching itself to the extent required1 wouldn’t just turn on its creators and decide that it’s much better to take power for itself than make dollars for these biologic skinbags.
OK, so granted, that’s a little bombastic, but things don’t even have to progress to that point. It’s not at all clear that regulators wouldn’t step in at some point, declare that an AI making enormous leaps forward creates a fundamentally unfair market, and ban its use in certain instances altogether.2
Also, imagine for a moment a world where AI funds dominate the investor landscape like Terminator robots crushing skulls beneath their metallic feet. What kind of market could there be? If people pile money into AI funds that all pick the same stocks, isn’t that just an invitation to take the other side of the bet when it inevitably melts down?
A far more realistic picture of how AI would work in a portfolio management role would be to generate just a tad extra alpha. As the Bloomberg article correctly points out, even the slightest edge in finance can be worth a whooole lot of money over time. So, the argument goes, you don’t need to create an AI so powerful that it puts humanity at risk, you just need to make an AI slightly smarter than an MIT mathematics graduate.
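That compounding claim is easy to check with a little arithmetic. The sketch below uses purely hypothetical numbers (a 7% baseline market return, a single percentage point of extra alpha, and $1 million of starting capital) to show how small the edge can be while still producing an outsized dollar difference over time:

```python
# Back-of-the-envelope compounding: what a small annual edge is worth.
# All inputs are hypothetical illustrations, not market data.

def grow(principal: float, annual_return: float, years: int) -> float:
    """Compound `principal` at `annual_return` for `years` years."""
    return principal * (1 + annual_return) ** years

capital = 1_000_000   # starting capital, dollars (assumed)
market = 0.07         # assumed baseline annual market return
edge = 0.01           # one percentage point of extra alpha (assumed)

baseline = grow(capital, market, 30)
with_edge = grow(capital, market + edge, 30)

print(f"Baseline after 30y: ${baseline:,.0f}")
print(f"With a 1pt edge:    ${with_edge:,.0f}")
print(f"Extra dollars:      ${with_edge - baseline:,.0f}")
```

With these assumed inputs, one extra point of annual return compounds to roughly $2.45 million of additional value on a $1 million stake over 30 years, which is the whole pitch for an AI merely “slightly smarter” than the humans.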
The only problem is that this presents essentially the same challenge as building the singularity AI: you have to train the thing to do something that you don’t know how to do.
This is where the AI proponents get a little feisty. After all, AI has produced some tremendous breakthroughs on problems that baffled humans for a long time. AI, they say, has helped us make quantum leaps in other fields. Google’s DeepMind made waves when it successfully predicted the 3D shapes proteins will take from their amino acid sequences. With this breakthrough, what would take humans years of effort can now be accomplished in hours. It is, in the purest sense of the word, an incredible breakthrough.
And yet, this breakthrough, as amazing as it is, is fundamentally different from beating the markets. In the case of protein folding, scientists already had an understanding of the process, an understanding which was needed to train the AI. It was the complexity of the problem that posed the challenge.
In the case of beating the market, we don’t even have the first part of the equation. Nobody understands how to consistently beat the markets. Even money managers with stellar returns have batting averages below .600, and how often have we all read about money gurus who underperform the market year in and year out but still argue for the rightness of their philosophy because ‘the market has gone crazy,’ or some similar sentiment.3
Further, understanding even the most basic drivers of large market moves would require a prescience bordering on omniscience, even to gain anything more than a slight edge. What AI could know not only the outcome of each Fed meeting but also the reactions of traders to the words of the Chairman? What AI could divine that a particular stock would be added to or booted from a particular index? At heart, the stock market, much like the weather, is a non-linear system of staggering complexity.
It would be easy to accuse me of being a luddite here, of taking the intellectually cheap way out by saying that just because something is hard, it won’t happen. But I don’t think that’s the case.
Sure, there may come a day when this writing (should anyone care to read it) will seem hopelessly dated—that’s true of almost any writing. I, in fact, hope that one day this piece seems dated, because it would mean humanity has advanced beyond a place that I can recognize today as possible, which is, I think, fundamentally good. That day, however, is a long way off in my view.
The problem posed still stands. If AI is to beat the market consistently in any meaningful way, more heavy lifting will be needed from the system itself than from any input its human trainers can give it. And that, I think, is a fundamental block that must be overcome for AI to have a meaningful role picking stocks in the market.
1. Which would doubtlessly be required given the sheer amount of data, computations, criteria selection & ranking requirements, and so on. Whether or not you agree that the AI in this instance would turn on humanity, it seems undeniable that it would be capable of understanding far beyond the capacity of humans.
2. Would this be a nightmarish headache for regulators who can’t even wrap their heads around whether crypto is a security or not? Yes. Undoubtedly.
3. Classic value investors in the ZIRP environment—looking at you. What’s more, it’s clear that some value strategies work, but could an AI bot establish ‘rightness’ of portfolio positioning without corresponding returns on investment for long periods of time? Could an AI system insist it was correct in its allocations while losing investor money?