Faculty Perspectives: New Rules for Effective and Responsible AI Use in Academic Research

Research from Assistant Professor Nofar Duani and her co-authors explores effective, ethical uses of generative AI in surveys and experiments.

09.25.25

Generative artificial intelligence tools based on large language models (LLMs) are quickly reshaping how researchers conduct surveys and experiments. From reviewing the literature and designing instruments to administering studies, coding data, and interpreting results, these tools offer substantial opportunities to improve research productivity and advance methodology.

Yet with this potential comes a critical challenge: researchers often use these systems without fully understanding how they work. In a new article in the Journal of Marketing, Nofar Duani, assistant professor of marketing at USC Marshall, and collaborators Simon Blanchard (University of Toronto), Aaron Garvey (University of Kentucky), Oded Netzer (Columbia University), and Travis Tae Oh (Yeshiva University) offer a practical and accessible framework for responsibly using generative AI in survey and experimental research.

The paper, titled "New Tools, New Rules: A Practical Guide to Effective and Responsible GenAI Use for Surveys and Experiments Research," explores ways in which researchers can capitalize on the strengths of generative AI while navigating its unique risks.

“We wanted to create a hands-on guide that empowers researchers to be both efficient and rigorous in working with these tools,” Duani said.

The authors caution that the effectiveness of AI tools depends on how, and how well, they are used. Poorly written prompts, incorrect assumptions about the underlying technology and its accompanying tools, and failure to validate outputs can all compromise research validity.

“We begin by explaining how GenAI systems operate, highlighting the gap between their intuitive interfaces and the underlying model architectures,” Duani explained.

The researchers examined use cases throughout the research process, describing both the opportunities and the associated risks at each stage. For each, they offered flexible tips for best practice along with firm, non-negotiable rules for effective and responsible AI use, particularly in areas pertaining to ensuring the validity of generative AI-coded responses.

The researchers highlighted several best practices for using AI tools effectively and responsibly. Where possible, they recommended that researchers provide their own sources, such as uploading documents or using retrieval-augmented generation (RAG), to improve the accuracy of the model's outputs. They also emphasized that researchers should always verify any factual information provided by the models, as these tools can sometimes generate responses that sound convincing but are factually incorrect.
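To make the idea of "providing your own sources" concrete, the sketch below shows one minimal retrieval-augmented approach: researcher-supplied passages are ranked by TF-IDF similarity to a question, and the best match is placed into the prompt so the model answers from the supplied text rather than from memory. The passages, the question, and the prompt wording are illustrative assumptions, not material from the paper, and a production RAG pipeline would use a proper vector store and an actual model call.

    # Minimal retrieval-augmented prompting sketch (illustrative assumptions only).
    # Requires scikit-learn; passages and question are hypothetical examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Researcher-provided source passages (e.g., excerpts from uploaded documents).
    passages = [
        "Participants in Study 1 rated brand trust on a 7-point Likert scale.",
        "The manipulation check asked whether the ad was perceived as AI-generated.",
        "Attention checks followed the recommendations of prior survey research.",
    ]
    question = "What response scale was used for the brand trust measure?"

    # Rank the supplied passages by TF-IDF cosine similarity to the question.
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(passages + [question])
    similarities = cosine_similarity(matrix[len(passages)], matrix[:len(passages)]).ravel()
    best_passage = passages[int(similarities.argmax())]

    # Ground the model in the retrieved source instead of relying on its memory.
    prompt = (
        "Answer using ONLY the source passage below. If the passage does not "
        "contain the answer, say that it is not in the source.\n\n"
        "Source: " + best_passage + "\n\nQuestion: " + question
    )
    print(prompt)  # In practice, this grounded prompt would be sent to the GenAI model.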

When developing experimental measures and manipulations, they stressed the importance of clearly defining the underlying constructs, since models can easily confound related but conceptually distinct concepts. Finally, when using AI to code unstructured data, researchers should choose a conceptually appropriate validation strategy for the results, such as comparing them to human coders, self-reported measures, or behavioral outcomes, and commit to that strategy in advance to limit researcher degrees of freedom.
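As a rough sketch of what one such pre-committed validation strategy might look like, rather than the authors' own procedure, the example below compares hypothetical GenAI-coded labels against independent human coding using Cohen's kappa, with an agreement threshold fixed before coding begins. The labels and the 0.80 threshold are illustrative assumptions.

    # Sketch of validating GenAI-coded labels against human coders (illustrative data).
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical codes for 10 open-ended responses (1 = mentions price, 0 = does not).
    human_codes = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
    genai_codes = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

    # Chance-corrected agreement between the two sets of codes.
    kappa = cohen_kappa_score(human_codes, genai_codes)
    print(f"Cohen's kappa between human and GenAI coding: {kappa:.2f}")

    # The acceptance threshold is fixed in advance, so the validation standard
    # is not adjusted after seeing the results.
    THRESHOLD = 0.80
    if kappa >= THRESHOLD:
        print("Meets the pre-committed agreement threshold.")
    else:
        print("Falls short; revisit the prompts or rely on human coding.")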