AllianceBernstein - AI Ethics and Regulation: How Investors Can Navigate the Maze


For financial advisers only

Not to be distributed to, or relied on by, retail clients


From potentially brand-damaging ethical risks to regulatory uncertainty, AI poses challenges for investors. But there is a path forward.

 

Artificial intelligence (AI) poses many ethical issues that can translate into risks for consumers, companies and investors. And AI regulation, which is developing unevenly across multiple jurisdictions, adds to the uncertainty. The key for investors, in our view, is to focus on transparency and explainability.

 

The ethical issues and risks of AI begin with the developers who create the technology. From there, they flow to the developers’ clients—companies that integrate AI into their businesses—and on to consumers and society more broadly. Through their holdings in AI developers and companies that use AI, investors are exposed to both ends of the risk chain.

 

AI is developing quickly, far ahead of most people's understanding of it. Among those trying to catch up are global regulators and lawmakers. At first glance, their activity appears extensive: many countries have released AI strategies in the last few years, and others are close to introducing them (Display).

 


In reality, the progress has been uneven and is far from complete. There is no uniform approach to AI regulation across jurisdictions, and some countries introduced their regulations before ChatGPT launched in late 2022. As AI proliferates, many regulators will need to update and possibly expand the work they’ve already done.

 

For investors, the regulatory uncertainty compounds AI's other risks. To assess and manage these risks, it helps to have an overview of the AI business, ethical and regulatory landscape.

 

Dive Deep to Understand AI Regulations


The AI regulatory environment is evolving in different ways and at different speeds across jurisdictions. The most recent developments include the European Union (EU)'s Artificial Intelligence Act, which is expected to come into force around mid-2024, and the UK government's response to the consultation process triggered by the launch of its AI regulation white paper last year.

 

Both efforts illustrate how AI regulatory approaches can differ. The UK is adopting a principles-based framework that existing regulators can apply to AI issues within their respective domains. In contrast, the EU act introduces a comprehensive legal framework with risk-graded compliance obligations for developers, companies, and importers and distributors of AI systems.

 

Investors, in our view, should do more than drill down into the specifics of each jurisdiction’s AI regulations. They should also familiarize themselves with how jurisdictions are managing AI issues using laws that predate and stand outside AI-specific regulations—for example, copyright law to address data infringements and employment legislation in cases where AI has an impact on labor markets.

 

Fundamental Analysis and Engagement Are Key


A good rule of thumb for investors trying to assess AI risk is that companies that proactively make full disclosures about their AI strategies and policies are likely to be well prepared for new regulations. More generally, fundamental analysis and issuer engagement—the basics of responsible investment—are crucial to this area of research.

 

Fundamental analysis should delve not only into AI risk factors at the company level but also along the business chain and across the regulatory environment, testing insights against core responsible-AI principles (Display).

 

 

Engagement conversations can be structured to cover AI issues not only as they affect business operations, but from environmental, social and governance perspectives, too. Questions for investors to ask boards and management include the following:

 

  • AI integration: How has the company integrated AI into its overall business strategy? What are some specific examples of AI applications within the company? 
  • Board oversight and expertise: How does the board ensure it has sufficient expertise to effectively oversee the company’s AI strategy and implementation? Are there any specific training programs or initiatives in place? 
  • Public commitment to responsible AI: Has the company published a formal policy or framework on responsible AI? How does this policy align with industry standards, ethical AI considerations, and AI regulation? 
  • Proactive transparency: Does the company have proactive transparency measures in place to prepare for future regulatory requirements? 
  • Risk management and accountability: What risk management processes does the company have in place to identify and mitigate AI-related risks? Is there delegated responsibility for overseeing these risks? 
  • Data challenges in LLMs: How does the company address privacy and copyright challenges associated with the input data used to train large language models? What measures are in place to ensure input data is compliant with privacy regulations and copyright laws, and how does the company handle restrictions or requirements related to input data?
  • Bias and fairness challenges in generative AI systems: What steps does the company take to prevent or mitigate biased or unfair outcomes from its AI systems? How does the company ensure that the outputs of any generative AI systems it uses are fair and unbiased, and do not perpetuate discrimination or harm to any individual or group? 
  • Incident tracking and reporting: How does the company track and report on incidents related to its development or use of AI, and what mechanisms are in place for addressing and learning from these incidents? 
  • Metrics and reporting: What metrics does the company use to measure the performance and impact of its AI systems, and how are these metrics reported to external stakeholders? How does the company maintain due diligence in monitoring the regulatory compliance of its AI applications? 

 

Ultimately, the best way for investors to find their way through the maze is to stay grounded and skeptical. AI is a complex and fast-moving technology. Investors should insist on clear answers and not be unduly impressed by elaborate or complicated explanations.

 

The authors would like to thank Roxanne Low, ESG Analyst with AB’s Responsible Investing team, for her research contributions.

 

Important Information:

 

The views expressed herein do not constitute research, investment advice or trade recommendations and do not necessarily represent the views of all AB portfolio-management teams. Views are subject to change over time.

 

October 2024

Please note that these are the views of Saskia Kort-Chick and Jonathan Berkow of AllianceBernstein and should not be interpreted as the views of RL360.

Authors

Saskia Kort-Chick

Director of Social Research and Engagement—Responsibility


Jonathan Berkow

Director of Quantitative Research and Data Science—Equities



360 fund links

A range of AllianceBernstein funds can be accessed through our guided architecture products Regular Savings Plan, Regular Savings Plan Malaysia, Oracle, Paragon, Quantum, Quantum Malaysia, LifePlan, LifePlan Lebanon, Protected Lifestyle and Protected Lifestyle Lebanon, and also through our PIMS portfolio bond.