Yeshiva University News

Marketing Professor Offers Behavioral Researchers Roadmap for GenAI Use

Dr. Travis Oh's paper, co-authored with colleagues from Columbia Business School and other institutions, provides both a caution and a guide: while AI can dramatically speed up research, it also raises new risks for transparency, reproducibility and ethics.

By Dave DeFusco

When Dr. Travis Oh walked into the Association for Consumer Research (ACR) Conference in Washington, D.C., this fall, he knew the topic on everyone's mind would be generative artificial intelligence (GenAI). What he didn't expect was how much curiosity (and confusion) surrounded it.

"Everybody's excited for GenAI," said Professor Oh, an assistant professor of marketing at the Sy Syms School of Business at Yeshiva University. "But to be honest, many behavioral researchers still don't know exactly what's happening under the hood or what the best practices are. There are no established rules yet. That's what we're trying to help create."

Professor Oh co-led one of only two workshops selected for presentation at the ACR conference, a major recognition from one of the field's top gatherings. His session, based on his Journal of Marketing paper, offered a hands-on introduction to how marketing academics can responsibly integrate AI tools into their work.

The paper, co-authored with colleagues from Columbia Business School and other institutions, offers a roadmap for researchers eager to use AI to design surveys, run experiments and analyze open-ended data. It provides both a caution and a guide: while AI can dramatically speed up research, it also raises new risks for transparency, reproducibility and ethics.

Generative AI systems, like ChatGPT, Claude and Gemini, have quickly become fixtures in everyday life. They can write, summarize, analyze and even simulate conversations. For researchers, this opens new doors: designing experiments, coding data or generating realistic chatbot interactions for study participants. But, warns Oh, ease of use can be deceptive.

"The issue is that what you see in the chat box isn't the whole picture," said Professor Oh. "Underneath, there can be many different prompts or models running, and you don't always know exactly what's happening. For behavioral scientists, what's most important is transparency and reproducibility."

In other words, researchers who rely on AI without understanding how it works may unknowingly introduce bias or lose control of their methods. That's why Professor Oh and his co-authors strongly recommend using API-based access, which allows researchers to control the model's parameters and document every step, instead of general web interfaces.
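To make that distinction concrete, here is a minimal sketch of what API-based access looks like, using the OpenAI Python client as an illustration; the model snapshot, temperature and seed values are assumptions for the example, not a setup prescribed by the paper. The point is that every choice a public chat box hides is written down explicitly.

```python
# Minimal sketch of API-based access (assumption: the OpenAI Python client;
# other providers follow the same pattern). Every setting a chat box hides
# is pinned explicitly so it can be reported alongside the results.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # pin an exact snapshot, not a moving alias
    temperature=0,              # minimize sampling randomness
    seed=42,                    # best-effort reproducibility across runs
    messages=[
        {"role": "system", "content": "You are a research assistant."},
        {"role": "user", "content": "Summarize this open-ended response: ..."},
    ],
)

print(response.choices[0].message.content)
```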

"It's easy to use," he said of the public chat tools, "but probably not the wisest to use for research."

According to Professor Oh, the most common mistake researchers make is uploading confidential data into public AI tools, which can violate both privacy laws and university research ethics (IRB) guidelines. But there's also a subtler danger: the temptation to use AI's flexibility to overfit results.

"Because AI is so fast and cheap, you can run your data through it a hundred different ways until you get the result that supports your hypothesis," he said. "Researchers might tell themselves, 'Oh, maybe my prompt wasn't good; let me just tweak it again.' But at that point, you're fooling yourself."

His team's paper provides practical "rules of engagement" for avoiding these pitfalls. Chief among them is to document everything.

"You should always record what you're doing: what model, what parameters, everything," said Professor Oh. "Models change over time. For example, GPT-5 may not even have some of the settings we use today. But if you're transparent about your process, others can reproduce or verify your results later."
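In the same spirit, a small logging helper can make that documentation automatic. The sketch below is hypothetical: the function name, file name and record fields are illustrative, not taken from the paper or its companion site.

```python
import datetime
import json

def log_run(model: str, params: dict, prompt: str, output: str,
            path: str = "genai_runs.jsonl") -> None:
    """Append one fully documented model call to an audit log (JSON Lines)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,    # exact model snapshot used
        "params": params,  # temperature, seed, max_tokens, ...
        "prompt": prompt,  # the full instruction, verbatim
        "output": output,  # the model's reply, verbatim
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

With every call appended verbatim, the exact prompts and settings behind a reported result can be re-run or audited later, even after the model itself has changed.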

To make their recommendations as accessible as possible, Professor Oh and his co-authors created a companion website that hosts free templates, reproducible code and example workflows in R and SPSS, both software tools used for statistical analysis and data management. The playfully named site ("questionable research" being an inside joke about transparency) invites feedback from researchers experimenting with GenAI in their own work. A sister site provides additional examples and tools.

At his ACR workshop, Professor Oh demonstrated how to integrate interactive chatbots into marketing and behavioral studies. Instead of using pre-written scripts, researchers can now design experiments where participants engage with AI characters in real time.

"This is a new tool that lets us answer new questions," he said. "For example, now that many companies use AI for customer service, we can test whether it's better for a chatbot to sound formal or conversational and how that affects customer satisfaction."
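A sketch of how such a manipulation might be wired up, with the tone wordings and participant ID invented for illustration: randomly assign each participant a condition, then use the matching system prompt to drive the live conversation.

```python
import random

# Hypothetical tone conditions for a customer-service chatbot study.
TONE_PROMPTS = {
    "formal": "You are a customer-service agent. Use formal, professional language.",
    "conversational": "You are a customer-service agent. Use a casual, friendly tone.",
}

def assign_tone(participant_id: str) -> str:
    """Deterministically randomize the condition per participant (reproducible)."""
    rng = random.Random(participant_id)
    return rng.choice(sorted(TONE_PROMPTS))

condition = assign_tone("P-0042")
system_prompt = TONE_PROMPTS[condition]
# system_prompt becomes the "system" message of every API call in the chat,
# and `condition` is stored alongside the participant's satisfaction ratings.
```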

The workshop also explored coding unstructured data, such as open-ended survey responses, using GPT APIs. The biggest "aha" moment for attendees, he said, was realizing how detailed their instructions to AI needed to be.

"To get good results, you have to treat AI the same way you would train a graduate student," said Professor Oh. "You wouldn't just say, 'Go code this.' You'd define exactly what you mean, what to look for and how to interpret it. The same applies to AI. It's not as smart as people think unless you guide it carefully."
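What that guidance might look like in code, sketched under the same assumptions as the earlier example; the categories and their definitions are invented for illustration, not the workshop's actual codebook.

```python
from openai import OpenAI

client = OpenAI()

# A codebook-style instruction: define each category and the exact output
# format, as one would for a graduate-student coder.
CODEBOOK = """You are coding open-ended survey responses about a retail app.
Assign exactly one label to the response:
- PRICE: mentions cost, discounts, or value for money
- USABILITY: mentions navigation, speed, or ease of use
- SERVICE: mentions staff, support, or delivery
- OTHER: anything else
Reply with the label only, in uppercase."""

def code_response(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-2024-08-06",  # pin and report the exact snapshot
        temperature=0,
        messages=[
            {"role": "system", "content": CODEBOOK},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip()

print(code_response("Checkout kept freezing and the menus were confusing."))
# expected label: USABILITY
```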

For Professor Oh, the excitement around AI isn't just about innovation; it's about responsibility. His work urges researchers to slow down, think critically and document clearly in an era that rewards speed and novelty.

"GenAI is changing the landscape of research," he said. "But that doesn't mean we abandon rigor. In fact, it's more important than ever. The future of AI in research won't just depend on what these systems can do, but on how carefully and thoughtfully we choose to use them."
