Experts Discuss Ethical Dimensions of AI

The Neely Center and the Initiative on Digital Competition gathered leading minds from industry and academia to discuss what’s now and what’s next in responsible AI.

Industry practitioners and leading academics collaborate and share insights on the challenges and opportunities of AI technologies.
[USC Photo]

Artificial Intelligence (AI) is ubiquitous and, for the most part, becoming easier to use. It can drive your car, answer a chatbot question, and create a graphic. It can even name that tune or write this article (it didn’t). Amazing opportunities abound, and so too do the challenges, including data leaks, privacy breaches, and copyright infringement.

While few want to put the genie back in the bottle, shaping principled guardrails was the purpose of an important gathering on January 29 in the USC Vineyard Room. Marshall’s NEELY CENTER FOR ETHICAL LEADERSHIP AND DECISION MAKING and the INITIATIVE ON DIGITAL COMPETITION (IDC) joined forces to host the “Responsible AI in Business: Ethical Challenges and Opportunities” panel conference. Industry leaders and academic experts shared insights from the field and from research focused on guiding the societal and economic changes produced by the rapidly evolving technology.

In his opening remarks, NATHANAEL FAST, director of the Neely Center and the Jorge Paulo and Susanna Lemann Chair in Entrepreneurship, welcomed the audience, thanked the guest panelists, and set the tone for the three-panel conference.

“Rather than slow down the technology, we’re trying to speed up and ramp up society’s capacity to handle it,” he said. “And, we do that through data collection tools, through building networks, and through building spaces like this one — one person, one group, one society at a time trying to navigate these changes that we’re facing as a society.”

In addition to Fast, the half-day conference was co-organized by D. DANIEL SOKOL, the Carolyn Craig Franklin Chair in Law for USC’s Gould School of Law and an affiliated professor of marketing at Marshall. As a core faculty member of IDC who focuses on the transformation of the digital landscape, Sokol commented on the significance of why the two Marshall entities planned this event.

“If we think about what a platform does, it connects people or things. So we are here trying to connect different units on campus with students, faculty, and the broader community who have a vested interest in this innovation process,” Sokol told the audience. “We decided to have what you can call a ‘joint venture’ across two different centers with complementary skills and interests. This is increasingly the kind of thing that we’re seeing at USC more broadly. What we want is dialogue, real discussion, and real learning from each other.”

The three panels featured viewpoints and expertise from various sectors on the ethical dimensions of AI in the business world. The discussions covered a wide cross-section of interconnected topics: fairness and transparency; practitioner perspectives; and responsibility and governance.

The panels featured research presented by Professors NAN JIA (Management and Organization) and ANGELA ZHOU (Data Sciences and Operations), along with Hengchen Dai (UCLA); legal perspectives from Bobby Ghajar (Cooley LLP) and safety awareness from Arushi Saxena (DynamoFL); and business implications of AI use from industry experts Genevieve Bartlett (USC Information Sciences Institute), Guy Ben-Ishai (Google), Maarten Bos (Snap, Inc.), and Dominic Peralla (Character.AI).

The goal was not to present concrete solutions, but to host an honest exchange of ideas to better understand the ethics of AI, its legal implications, and its future potential across business domains.

We are about to change humanity as we know it with this technology.

— Guy Ben-Ishai

Head of Economic Policy Research, Google

Panelists discussed how AI-developing companies could articulate clear policies, set governance plans, and enact auditing frameworks to minimize the risks of data leaks or unauthorized use of copyrighted work. Speakers also focused on how practical uses of AI could maximize performance, productivity, and engagement, while looking forward to its ability to push the frontiers of scientific discovery.

Although the technology is moving faster than humans are adapting, the challenges of AI are not insurmountable if the broader community heeds the strategies and recommendations discussed in the Vineyard Room. The potential for innovation is boundless. As Guy Ben-Ishai, head of economic policy research at Google, remarked during his panel, “We are about to change humanity as we know it with this technology.”