July 31, 2023
Compounding Through the Hype
Global equity markets flourished in the first half of 2023, with the MSCI World Index returning +15% in U.S. dollars (USD) and now up over a quarter from the third quarter 2022 trough. In a mirror image of the derating in 2022, the rise has been driven by a rerating on fairly flat earnings, with the MSCI World’s forward multiple expanding from 13.7x to 17.0x.1
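As a rough back-of-the-envelope check (reading the 13.7x as the forward multiple around the trough, which the text implies but does not state exactly), the rerating alone broadly accounts for the move: with flat earnings, the price change is approximately the change in the multiple. A minimal sketch:

```python
# Back-of-the-envelope check: with broadly flat earnings, the index price
# change is approximately the change in the forward multiple.
trough_multiple = 13.7   # assumed MSCI World forward P/E around the Q3 2022 trough
current_multiple = 17.0  # forward P/E cited for mid-2023

# Price = forward earnings x forward multiple, so with flat earnings:
implied_price_change = current_multiple / trough_multiple - 1
print(f"Implied price change from rerating alone: {implied_price_change:.1%}")
# Prints roughly +24%, consistent with the index being "up over a quarter" from the trough.
```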
The sector picture has also been a reversal of 2022, as the market has been led by the growthier sectors that suffered last year: consumer discretionary, communication services and, in particular, information technology, with the outpouring of euphoria around the promise of generative artificial intelligence (AI) offering a new lease on life for tech mega-caps after a tough 2022. June saw a more general cyclical recovery, but up until the end of May, the “magnificent seven” or “MANAMAT”2, around a quarter of the S&P 500 Index by weight, had effectively delivered all the U.S. index returns, with the other 493 constituents being slightly down overall.3
The ChatGPT phenomenon
It is true that AI has entered its next chapter, with algorithmic and processing power advancements, in addition to the explosion of data in recent years, ushering in a new era of generative AI. These clever large language models (LLMs), powered by advanced machine learning (ML) algorithms and built on an enormous number of parameters, analyse and learn from the vast amounts of data they are fed to generate original, human-like content at warp speed. Since the debut of chatbot ChatGPT by OpenAI late last year, the market has been preoccupied with how to understand, implement and price the accessibility advancements offered by generative AI.
What is unusual about the AI frenzy is that this isn’t a eureka moment. While the use of generative AI has surged since ChatGPT’s launch, narrower AI technologies like ML and natural language processing (NLP) have already been in use for a number of years. Facial recognition, for example, uses ML algorithms to unlock your smartphone, and digital voice assistants such as Siri and Alexa use AI, NLP and ML to understand commands and carry out a range of tasks. AI algorithms are used in e-commerce to make personalised shopping recommendations, in clinical trials to improve drug discovery and efficiency, and elsewhere across an array of industries to automate a host of back-office tasks. The incremental improvement of models through learned behaviour, the arrival of big data and advances in computer processing power have all played their part in the emergence of generative AI.
Nonetheless, there have been two major surprises this year. The first is the speed of consumer adoption. In 2006, it took Twitter nearly two years to reach one million users; in 2010, it took Instagram two and a half months; for ChatGPT, it took just five days, with the service going on to reach 100 million users in two months – at the time, the fastest adoption of any technology in history.[1] The second, and arguably more significant, surprise is the lack of barriers to entry to run AI code. The general assumption up until now, which we shared, was that the large incumbents developing AI models would dominate given their economic moats: cloud expertise, computing power and massive stores of proprietary data – not to mention the enormous amounts of capital they have invested to refine their AI capabilities. However, this doesn’t appear to be the case. Large-scale, open-source models are now publicly available through readily accessible application programming interfaces (APIs); anyone with a good level of coding knowledge can adapt and redistribute the model architecture to their own specifications, without the large computational power and storage space normally needed to run such models. While this has advantages from a consumer perspective (including access to customisable AI models at far lower cost), for corporates the barrier to entry for trialling code has fallen to one person with a laptop. The moat seems not to be the AI technology itself, but rather other elements of the business model – for instance, access to proprietary data, customer base or the ability to provide services at scale.
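To make the “one person with a laptop” point concrete, the sketch below shows roughly what trialling an openly available language model looks like today. It is a minimal illustration only, assuming the open-source Hugging Face transformers library and using a small model (distilgpt2) purely as a stand-in; it is not a description of any tool used by the companies discussed here.

```python
# Minimal sketch: running a small, openly available language model locally.
# Assumes the open-source 'transformers' library (pip install transformers torch);
# 'distilgpt2' is an illustrative stand-in for larger open models.
from transformers import pipeline

# Download the model weights and build a text-generation pipeline.
generator = pipeline("text-generation", model="distilgpt2")

# Generate a short continuation of a prompt on an ordinary laptop CPU.
prompt = "Generative AI matters for investors because"
result = generator(prompt, max_new_tokens=40, do_sample=True, num_return_sequences=1)
print(result[0]["generated_text"])
```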
The shovelers
As in previous tech cycles, the early winners of the “AI gold rush” have been the pick and shovel sellers, notably the semiconductor providers. A California-based chip designer, the clear leader in the graphics processing units (GPUs) that power AI applications, gained nearly $200 billion of market capitalisation in one day on powerful forward guidance. The other obvious shovels are the “hyperscalers” – cloud computing service providers – who supply the infrastructure necessary for generative AI deployment, notably vast amounts of storage capacity and processing power. The global technology and software company we own commented that it expects AI-related products to boost Azure growth by one percentage point from next quarter, on a current revenue run-rate of circa $600 million. A further category of shovels comprises those offering AI services to customers: for instance, the multinational technology conglomerate we own is incorporating its conversational AI tool Bard into its search engine, though this is raising some cost concerns, while the global technology and software company we hold is offering Copilot within its 365 product family, creating a first draft for users to edit within Word or enabling faster clearance of emails within Outlook.
Identifying the opportunities…
While the full impact of AI remains uncertain, here are our early thoughts through the lens of our high quality investment approach and the stocks we own. Away from the technology-based shovelers, many of our companies already have a healthy degree of AI exposure, with further opportunities particularly in terms of cost reduction and value creation.
With an eye on the threats
In addition to where the opportunities lie, we are also focused on how change might adversely affect the companies we own. As ever, we worry about potential downsides more than we get excited by potential upsides.
Compounding through the hype
At the end of 2021, we were worried about both multiples and earnings. Following the derating in 2022, our multiple anxieties faded, just leaving us worried about earnings. The last three quarters have put both concerns back on the table, with the MSCI World Index’s forward earnings multiple back up to 17.0x, a level never reached between 2003 and 2019, while the multiple of the information technology sector at 27.4x is now worryingly close to its COVID-era highs.[2]
This elevated multiple is not on depressed earnings: expected margins are still close to all-time peaks, and consensus earnings are expected to be flat this year before rising 10% in 2024, despite all the worries about a potential recession. It is true that the U.S. economy has proved more robust than expected, but the downside is that labour markets remain very tight, meaning that a continued monetary squeeze is required to get inflation down. Our view is that any resultant downturn is not in today’s earnings expectations… nor in the current multiple. We maintain that the world is an asymmetric place, with earnings downsides in bad times far larger than the upsides in good times. Our bet, as ever, is that pricing power and recurring revenue, two of the key criteria for inclusion in our portfolios, will once again show their worth in any downturn, and the market will once again come to favour companies that have resilient earnings in tough times.
Ultimately, it’s still early days for generative AI, and its full impact remains unclear. Which industries and companies will thrive, and whose business models will be made redundant? What do employment, education, health care, finance, consumption and politics look like in an AI world? What does copyright mean in a machine-generated world? Will regulation be fast and sensible enough to put guardrails in place without hindering progress? Whose advice can we trust? Whose image is real? Does the world get smaller, faster and even more personalised? Does it become more unequal? And what becomes of the artisanal craft of the bottom-up fundamental portfolio manager?
Our team has been exploring new datasets and developing automation tools for some time, keeping an eye out for any valuable signals from sentiment or NLP analysis of earnings calls. We remain front-footed about innovation and yet steeped in a tradition of research excellence and management evaluation. We have yet to abandon deep research on the sustainability of return on operating capital, team-based debate, absolute risk management, or human judgement. We trust in our team's experience and we shall continue to create solutions and build relationships, endeavouring to do that better and better. We have no intention of replacing human intelligence with bots when managing portfolios or servicing clients any time soon – the outcomes and our clients are too important for that.
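As a hedged illustration of the kind of signal extraction mentioned above (not a description of our actual tooling), the sketch below scores the tone of a few earnings-call-style sentences with an off-the-shelf, open-source sentiment model; the snippets are invented for the example.

```python
# Illustrative sketch only: scoring the tone of earnings-call-style snippets with an
# off-the-shelf sentiment model. Assumes the open-source 'transformers' library;
# the snippets below are invented for the example, not real transcript excerpts.
from transformers import pipeline

# A general-purpose sentiment pipeline (downloads a default pretrained model).
sentiment = pipeline("sentiment-analysis")

snippets = [
    "We are seeing strong demand for our AI-related products and services.",
    "Margins came under pressure from higher input and labour costs this quarter.",
    "We are cautiously optimistic about the second half of the year.",
]

for text, score in zip(snippets, sentiment(snippets)):
    # Each result is a dict with a 'label' (POSITIVE/NEGATIVE) and a confidence score.
    print(f"{score['label']:>8} ({score['score']:.2f})  {text}")
```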
Managing Director
International Equity Team
Emma Broderick
Analyst
International Equity Team