
Deloitte-Arkley Institute Report Sheds Light on Risk of AI Among S&P 500

The fourth annual report on risk factor disclosures highlighted the risks AI runs for companies in terms of cybersecurity, innovation, and reputation.

10.16.24
[Image: The Deloitte-Arkley Institute report explored the implications of AI for business. (iStock Photo)]

In partnership with Deloitte, the USC Marshall Peter Arkley Institute for Risk Management released a report on public companies’ annual risk factor disclosures, including those specifically related to AI. The findings were clear: the uncertainty and unpredictability surrounding AI heighten a variety of risks, and the disclosures yielded several surprising results.

The fourth annual Deloitte-Arkley report found that 273 of the 434 companies reviewed (over 60%) mention AI in their risk factors. Arkley and Deloitte found wide variance across sectors. Over 90% of companies in communication services mentioned AI in their disclosures. Nearly as concerned were the information technology and financials sectors (both over 80%). Meanwhile, just 40% of companies in the energy sector discussed AI.

Although most companies disclosed AI-related risks, the types of risks they cited varied widely. The most common risk factor was cybersecurity, followed by, in order, failure to innovate, competition, legal and regulatory risks, reputational damage, and ethical risks. Just 12 of the 434 companies reviewed cited attracting talent with AI capabilities as a risk.

Marshall News sat down with Kristen Jaconi, Executive Director of the Arkley Institute, to discuss the report, its implications, and the most surprising responses.

Interviewer: What did you find most surprising about the report’s findings?

Kristen Jaconi: Deloitte and the Arkley Institute have focused in our 2022 and 2023 reports on two of the most challenging risks public companies face, cybersecurity and climate. However, this year’s report on AI-related risk disclosures may have elicited the most surprising finding we have made in our four years of researching risk factors: Companies across sectors are disclosing risks relating to responsible AI.

● Approximately 25% of the 434 S&P 500 companies reviewed disclosed that their use of AI posed reputational risks.

● Nearly 15% of companies were concerned that their use of AI posed ethical risks.

● 20% of companies reported their AI models or related outputs could be flawed, biased, or defective, or could cause social harm.

Many of our largest public companies, not just technology companies, are recognizing the challenges of the alignment problem — how to make AI models align with our values.

Why is that surprising?

KJ: The Deloitte-Arkley report analysis evidenced a much broader concern about AI safety among our public companies generally, not just within the academic sector and the tech industry, which started focusing more intently on AI safety approximately a decade ago.

That’s why the intersection of technology and ethics is a major focus at USC Marshall. Marshall’s Neely Center for Ethical Leadership and Decision Making, which aims to align emerging technologies with ethical, human-centered values, is taking action to deepen understanding of the rapidly evolving technology. The Neely Center introduced its AI index, the first longitudinal public tracking survey of individual usage of and experience with AI to help inform future decision-making with the technology. The Neely Center has also co-hosted deliberative events, such as the July 2024 America in One Room: The Youth Vote event, which gathered nearly 500 first-time voters to gauge their views on a variety of issues, including the ethical implications of AI.

How does AI create a unique multiplicity of risks?

KJ: AI heightens many risks already disclosed in S&P 500 risk factors, such as cybersecurity, innovation, and legal risks. What may be unique about AI is its tentacles are reaching into many, if not most, of the risk categories companies use for internal reporting, such as strategic, technology, legal, financial, human capital, and reputational risks. Nearly 40% of S&P 500 companies discussed AI-related risks in at least two risk factors. That number will likely rise as companies better understand this emerging technology’s impact.

Companies most frequently mentioned AI-related risks in cybersecurity risk factors. Why is that?

KJ: Cybersecurity remains the most daunting risk facing our public companies. AI amplifies cybersecurity risk as remote work and geopolitics have, something we discussed in last year’s Deloitte-Arkley Institute report. Neither our largest public companies nor our small and medium-sized enterprises can fully protect against a malicious actor, such as a nation-state with limitless resources. Now, add sophisticated AI tools to the malice. That scenario is likely driving the frequency of these AI-cybersecurity-related disclosures.

Is concern over the “failure to innovate” a characteristic of the tech sector alone or does it extend to all markets?

KJ: The Deloitte-Arkley Institute report shows this fear of not being able to innovate with AI extends to most sectors, not just the information technology sector. Over 30% of companies noted in their risk factor disclosures their failure to innovate and incorporate AI technologies into their products and services would harm their competitive position, financial results, reputation, and/or customer demand. Nine of eleven sectors (all but the energy and utilities sectors) disclosed this risk. Our banks, our manufacturers, our healthcare companies, etc., fear they might not adequately capture AI’s benefits.

If companies feel failure to innovate with AI is a major risk, why did only 12 companies disclose the risk of not attracting AI talent, given that talent may drive innovation?

KJ: Most companies disclose talent recruitment and retention risks with little specificity about the skill sets of that talent. This lack of precision reflects the often-generalized disclosures we continue to see in these risk factors, something the Securities and Exchange Commission’s 2020 risk factor disclosure reforms aimed to change.

Industry surveys tell a different story. For instance, in Deloitte’s first quarter 2024 CFO Signals report, 60% of the chief financial officers surveyed noted “bringing in talent with GenAI skills over the next two years is either extremely important or very important.”

Why does AI pose a unique legal challenge for companies that operate globally?

KJ: In a globalized economy with operations typically in dozens of countries, our largest public companies are having to grapple with complying with an ever-expanding set of laws and regulations. This holds true with respect to AI. In the United States, although there is no specific federal law governing AI, several members of Congress have introduced legislation. Many states have proposed or enacted AI-related legislation. The European Union has already issued rules regulating AI and many countries have proposed AI-related governance frameworks. These laws and proposals impact data privacy, intellectual property, and responsible AI, among other things.

What makes emerging technologies, like AI, a unique legal challenge is that often policymakers and regulators are trying to harness technologies that have already galloped full speed out of the barn and elude capture … at least for a bit of time. When and where regulations will ultimately land is a complicated guessing game for companies operating globally.

The Deloitte-Arkley Institute report does not mention AI and climate-related risks, the latter the focus of the 2022 report. Was there any connection made between the two risks?

KJ: Our research identified only one company making a connection between AI and climate-related risk in risk factor disclosures. An information technology company disclosed its concern of not being able to achieve its energy efficiency goals for its AI-related semiconductor products. Given the significant amount of energy needed to power AI, more and more companies will likely begin to speak of AI as amplifying the transition risk of not meeting their sustainability goals. Some of our largest tech firms are already trying to mitigate this risk in a somewhat surprising way: making deals for a carbon-free source — nuclear power — to satiate their AI-related energy demands.