The next systemic risk? What AI means for investors

Julian Thakar, Investment Consultant

Artificial Intelligence (AI) is one of the defining issues of our time. With the potential to transform how we work, communicate and make decisions, AI presents both exciting opportunities and notable risks for investors.

This article focuses on the risks; the opportunities should, of course, also be considered.

As investment consultants at LCP, a key part of our role is to help our clients identify and manage investment risks. When it comes to AI, we believe it is important to distinguish between three categories of investment risk:

  • Direct, company-specific risks – arising from an individual company’s use or development of AI.
  • Market-wide (or systematic) risks – broader risks that affect many or all companies in the market.
  • Systemic risks – the kind that could cause instability (and potentially even the collapse) of the financial system.

Whilst the first can often be mitigated through diversification and engagement, and the second through sensible asset allocation, systemic risks are much harder to manage, and potentially far more disruptive. In this article, I focus on AI as a systemic risk, and steps that can be taken to address it.

How could AI be a systemic risk?

The list below is by no means exhaustive, but outlines some of the key talking points.

Opacity and control challenges

Many AI models operate as “black boxes”. They can make decisions without offering clear explanations, which creates governance and accountability challenges. This lack of transparency makes it difficult to understand or audit how decisions are being made.

For example, imagine a bank using an AI system to assess loan applications. If this system denied you a loan but could not explain why, would you consider that fair? This could undermine customer trust and expose the bank to regulatory or reputational risk.
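
As a simple illustration of the problem (a hypothetical sketch on synthetic data, not any real bank's system), the model below reaches a loan decision but attaches no reason to it; post-hoc tools such as permutation importance can only partially reopen the box.

```python
# Hypothetical sketch: an opaque model decides, but cannot explain itself.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Synthetic applicants: income, debt ratio, years employed, credit events
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

applicant = rng.normal(size=(1, 4))
decision = model.predict(applicant)[0]  # 1 = approve, 0 = deny
print("Decision:", "approve" if decision else "deny")  # no reason attached

# Post-hoc explainability recovers only aggregate, approximate insight:
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("Mean feature importances:", imp.importances_mean.round(3))
```

The specific model is beside the point: even this toy exposes the gap between producing a decision and accounting for it, which is precisely what customers and regulators care about.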

The risk becomes even greater as AI systems are given more autonomy. As models become more complex, there is a growing concern about loss of human control. If systems behave inexplicably or in ways misaligned with human intent, the impact could be felt across entire markets or sectors.

Some experts also warn of more extreme “loss of control” scenarios in the future, where highly capable AI systems could pose existential risks to humanity. While these scenarios are speculative, they are receiving growing attention from global experts. This theme has of course long been explored in science fiction, including one of my favourite books: Prey by Michael Crichton.

Herd behaviour

AI has the potential to significantly transform the financial sector, with many financial institutions already using it to optimise internal processes. However, widespread reliance on similar AI models can create problems.

One example of how this could manifest is in market trading. In times of stress, if AI systems are trained on similar data and designed to optimise in similar ways, they may respond identically, for instance by triggering mass sell-offs. This herd-like behaviour could amplify market volatility and threaten broader financial stability.

We have seen the dangers of this kind of dynamic before, in early automated trading. These strategies reacted so quickly to market movements that they sometimes triggered sharp, sudden price drops, known as “flash crashes”. In response, exchanges such as the London Stock Exchange introduced circuit breakers (temporary halts in trading) to curb the impact of automated herd responses. The growing use of AI models raises the possibility of similar systemic behaviours on a larger, more complex scale.
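
To illustrate the mechanism (a deliberately stylised toy, not a model of any real market), the sketch below gives many agents near-identical stop-loss rules; a single adverse shock tips them into selling together, the selling itself moves the price, and a simple circuit breaker halts trading once the fall exceeds a threshold.

```python
# Stylised toy: correlated stop-loss rules turn one shock into a cascade.
import numpy as np

rng = np.random.default_rng(42)
n_agents = 100
open_price = price = 100.0
stop_loss = rng.normal(0.02, 0.002, n_agents)  # near-identical thresholds
holding = np.ones(n_agents, dtype=bool)

for step in range(50):
    shock = rng.normal(0, 0.005)       # background noise
    if step == 10:
        shock -= 0.02                  # one adverse shock
    price *= 1 + shock
    drawdown = 1 - price / open_price
    sellers = holding & (drawdown > stop_loss)  # the herd reacts together
    price *= 1 - 0.003 * sellers.sum()          # selling pressure feeds back
    holding[sellers] = False
    if price < open_price * 0.93:               # circuit breaker trips
        print(f"Step {step}: trading halted at {price:.2f} "
              f"({1 - price / open_price:.0%} down)")
        break
```

Circuit breakers address the symptom; the deeper issue is the correlation of the decision rules themselves, which is why diversity of models and training data matters at a system level.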

Cyber risk

Cybersecurity is already a well-established concern for many of our clients. However, the adoption of AI introduces new vulnerabilities that could significantly amplify existing cyber risks. The Darktrace survey extract below highlights that AI-related cyber risk is currently a major focus for cybersecurity professionals within businesses.

Source: Darktrace survey of 1,500 cybersecurity professionals

AI systems can be exploited through attacks that manipulate their outputs or compromise the integrity of their data. Furthermore, AI can be used maliciously, enabling more sophisticated phishing, deepfakes and automated hacking tools.

As AI becomes more embedded in critical infrastructure and financial systems, the potential impact of cyber-attacks will grow accordingly.

Exacerbation of climate risk?

Climate risk is a key systemic issue we have done significant work on at LCP. AI’s relationship with climate is two-sided, presenting both opportunities and challenges.

On the plus side, AI could enhance energy efficiency in companies and infrastructure. AI can also aid in predicting and managing both physical and transition climate risks. AI is even being used to accelerate breakthroughs in clean energy, such as nuclear fusion, by modelling and optimising complex systems.

By contrast, data centres, which house the millions of processors powering AI, consume substantial energy and water. For example, it is estimated that a ChatGPT query uses around 10 times the energy of a traditional Google search.

Source: EPRI, 2024 White Paper, “Powering Intelligence”
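
For a sense of scale, here is a back-of-the-envelope calculation. The per-query figures (roughly 0.3 Wh for a traditional search and 2.9 Wh for a ChatGPT query) are the estimates commonly cited alongside the EPRI paper above and should be treated as indicative only; the query volume is purely hypothetical.

```python
# Back-of-the-envelope only; per-query figures are indicative estimates.
google_wh = 0.3    # approx. energy per traditional search (Wh)
chatgpt_wh = 2.9   # approx. energy per ChatGPT query (Wh)

print(f"Ratio: ~{chatgpt_wh / google_wh:.0f}x")  # ~10x, as cited above

queries_per_day = 100e6  # hypothetical daily volume
extra_wh = queries_per_day * (chatgpt_wh - google_wh)
print(f"Extra demand: ~{extra_wh / 1e6:.0f} MWh/day")  # ~260 MWh/day
```

Even rough numbers like these show why data-centre demand now features prominently in electricity grid planning.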

Given the rapid adoption of AI, it is unlikely that the huge increase in energy demand will be met from renewable sources only, and so fossil fuels are likely to fill the gap, at least initially.

The challenge for the coming years will be ensuring that AI’s climate (and other) benefits outweigh its environmental costs. Investors, policymakers and technology leaders will need to collaborate in order to align AI development with climate goals.

Social disruption

AI will reshape labour markets and displace jobs, which has long-term implications for social cohesion. The UN has warned that AI could affect 40% of jobs worldwide and exacerbate global inequality over the coming decade by reducing the competitive advantage of low-cost labour in developing countries. Analysis by the International Monetary Fund (IMF) supports this claim, as shown in the chart below.

Source: International Labour Organization (ILO) and IMF staff calculations. “Complementarity” refers to the extent that AI is likely to complement (rather than replace) human work.

Regarding climate change, there is significant commentary on how the transition to net zero should be a “just transition”, benefiting workers and communities as well as the environment. I argue that we should also be striving for a “just transition” to AI, ie one that shares its benefits broadly and mitigates its social harms. 

My recommendations to address systemic AI risks

At LCP we believe that an important way to address systemic risks to investments is through systemic stewardship, ie where investors use their influence at a systems level. But I am conscious that our clients are already very busy, and most have limited bandwidth to focus on this area. With that in mind, I suggest five straightforward actions that asset owners could take to begin addressing AI risks:

  1. Add AI to your risk register 
    Though a small step, including AI as an emerging systemic risk on your risk register will focus attention and lay the groundwork for future conversations and actions.
  2. Engage with your investment managers
    Ask managers what steps they are taking to assess and manage AI-related risks, both at the company level and at the systemic level. Also ask how AI is being used within their investment process.
  3. Add AI as a stewardship priority
    Consider adding AI to your list of stewardship priorities. This will ensure it gets ongoing attention in manager reviews, documentation and engagement strategies.
  4. Stay informed 
    Ask your advisors for training on the implications of AI and to keep you up to date on key developments, so that if and when further action is required, you can respond in a timely and informed manner.
  5. Collaborate with others (where resources allow)
    As with climate change, addressing systemic AI risk will require collective investor influence. Participating in collaborative initiatives can help shape the development of global AI governance standards. 

Conclusion

AI represents a systemic shift with far-reaching implications for financial markets, society and the planet. The scale and speed of AI adoption mean that investors should not wait on the sidelines. Taking the small, proactive steps I have suggested above would be a good starting point for managing AI-related risks.
