When Jim [name has been changed for this essay] responded to my advertisement seeking interview subjects for an IFTF research project on the effects of climate change on health and societal well-being, his answers checked all the right boxes. I was excited to have found what appeared to be an ideal candidate.
However, in our Zoom interview, something didn’t seem right. Jim’s answers were smooth but strangely hollow, more like report language than lived experience. He used terms like “cooling centers” and “resilience” but never grounded them in concrete detail. His eyes kept darting to the side of the screen, as if he were checking notes or reading from a script.
Jim’s answers were also inconsistent. He claimed to be 29 years old, but also to have been 17 during the COVID-19 pandemic. He described surviving a wildfire in Bishop, CA, but his timeline skipped over the Silver Fire entirely, an event no local could have missed. His stories felt stitched together, fragments of real places woven into an unconvincing whole.
During the interview, Jim revealed that he had used automation to get himself recruited on other platforms, deploying bots to scan Craigslist and Rover job ads and automatically tweak his profile to match what clients were looking for. It was, he said, a business and survival tactic during hard times, and it worked. Sitting virtually across from him, I realized that Jim was quietly using AI tools to coach him as we spoke.
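To make the mechanics concrete, here is a minimal sketch of the kind of keyword-matching automation Jim described. Everything in it is hypothetical: the ad text, the profile template, and the matching rule are invented for illustration, not drawn from any real platform’s API.

```python
# Hypothetical sketch of profile-tailoring automation, as Jim described it.
# The ad text, profile, and scoring are invented for illustration only.
import re
from collections import Counter

STOPWORDS = {"a", "an", "and", "the", "for", "with", "to", "of", "in", "must"}

def extract_keywords(ad_text: str, top_n: int = 5) -> list[str]:
    """Pull the most frequent non-trivial words out of a job ad."""
    words = re.findall(r"[a-z']+", ad_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [word for word, _ in counts.most_common(top_n)]

def tailor_profile(base_profile: str, ad_text: str) -> str:
    """Append ad-specific keywords so the profile mirrors the client's language."""
    keywords = extract_keywords(ad_text)
    return f"{base_profile} Experienced with: {', '.join(keywords)}."

ad = ("Looking for a reliable dog walker with overnight boarding experience. "
      "Must love puppies and handle medication schedules.")
print(tailor_profile("Friendly pet sitter, 5 years of references.", ad))
```

Even a crude script like this, run across hundreds of listings, would let a profile echo whatever language each client used, which is exactly why his answers to my screener looked so ideal.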
The immediate losses to the research were obvious — time, money, and unusable data. However, the more profound loss was one of trust. With AI secretly providing answers, how can we continue to rely on virtual interviews for qualitative research?
My experience was eerily similar to a story reported by Data & Society, whose authors describe participants strategically using generative AI to anticipate interview questions and craft polished, plausible answers to qualify for paid studies. The authors frame these behaviors not simply as deception, but as contextually situated performances, shaped by economic precarity, gig work pressures, and participants’ own strategies for navigating research systems. As in my own work, these signals suggest that interview encounters are increasingly hybrid spaces, where human voices, AI assistance, and performance intersect, with repercussions extending far beyond research projects. Surreptitious AI coaching tools have already begun to fundamentally reshape how we establish truth and authenticity in human interactions in the wider world, and they will continue to do so.
Take the Cluely app, initially pitched by its founder as “a cheating tool for literally everything.” Cluely watches your device’s screen, listens through its microphone, and feeds suggested responses through a hidden overlay. The backlash has been immediate: companies like Validia are already rolling out counter-tools such as Truely, designed to detect and block Cluely’s use.
Back on the research side, platforms like Suzy’s Biotic have emerged to combat fraudulent market research participation. Biotic describes itself as a “proactive defense system” that stops bad data before it enters a study, specifically targeting the rise of research panel bots. The risk they highlight is sobering: when AI floods studies with manufactured responses, decision-making itself is compromised.
At the same time, researchers are incorporating AI tools into their own work. For example, a team from the University of Zurich faced criticism for an unauthorized experiment on Reddit users, deploying AI-generated comments, without obtaining consent, to study how AI could influence opinions. Meanwhile, many researchers are using AI to code and analyze large volumes of qualitative data, further entangling human and machine roles in the production of insight.
The implication is clear: as AI use becomes widespread, we risk moving from genuine individual thoughts to increasingly inauthentic, homogenized responses. If many participants rely on the same AI models, interviews and surveys could converge toward an AI-shaped “average,” flattening the very diversity of perspectives research depends on.
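A toy simulation makes this flattening effect easy to see. The setup is my own invented illustration, not drawn from any study: genuinely individual opinions are spread widely, AI-assisted answers cluster tightly around a single model “average,” and the overall spread of responses collapses as the AI-assisted share grows.

```python
# Toy simulation of homogenization: as more participants route answers through
# the same model, the spread of responses collapses toward the model's average.
# All numbers and distributions here are invented for illustration.
import random
import statistics

random.seed(42)

def response_spread(num_participants: int, ai_share: float) -> float:
    """Std. dev. of opinions when `ai_share` of answers come from one shared model."""
    model_opinion = 0.0  # the single answer a shared model tends to give
    responses = []
    for _ in range(num_participants):
        if random.random() < ai_share:
            # AI-assisted answer: the model's view plus light paraphrasing noise
            responses.append(model_opinion + random.gauss(0, 0.1))
        else:
            # Genuinely individual view, drawn from a much wider distribution
            responses.append(random.gauss(0, 1.0))
    return statistics.pstdev(responses)

for share in (0.0, 0.5, 0.9):
    print(f"AI-assisted share {share:.0%}: response spread = "
          f"{response_spread(1000, share):.2f}")
```

As the AI-assisted share rises from none to most of the sample, the measured spread shrinks sharply: the statistical signature of exactly the diversity loss described above.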
For qualitative research — and for anyone else who relies on interviews to inform their decisions — AI coaching presents both a challenge and an opportunity. If AI is integrated into everyday practice, it will inevitably reshape how both interviewers and interviewees interact. The critical questions become: How do we build rapport when an AI may be whispering in the participant’s ear? What constitutes authenticity when human and machine voices overlap? And, perhaps most importantly, how might this blending open new insights, not just into what people say, but into how they are learning to live, work, and express themselves alongside technology?
Interaction in an AI-Mediated Future: How We Can Prepare and Respond
As AI increasingly mediates human interactions, we face a fundamental shift in how we connect, communicate, and establish trust. Rather than treating AI as purely disruptive, we might consider integrating it into our understanding of modern discourse, acknowledging that conversations now often include both human and machine contributions. This opens space for reflection on our evolving relationships with technology and highlights the hybrid nature of contemporary communication.
Not every situation will embrace AI integration. Some contexts may prioritize purely human interaction, adding verification layers to ensure unmediated exchanges. These choices raise crucial questions: How do we acknowledge AI's presence in our daily interactions? How do we value contributions that blend human and machine intelligence? Could AI-assisted communication itself provide meaningful insight into the pressures people navigate?
Looking ahead, we see signals of profound change: AI mediating our professional and personal relationships, synthetic personas becoming commonplace, and human interaction evolving into hybrid exchanges. While this risks homogenizing communication toward "average AI responses," it also illuminates new forms of self-expression and identity. Interaction is becoming a space co-authored by people and machines. Even subtle cues — hesitations, repeated phrases, and AI-mediated responses — reveal broader shifts in behavior and social norms. The challenge isn't just preserving authenticity, but noticing how our ways of living, working, and expressing ourselves are being reshaped through technology. What traces of AI are already shaping our daily conversations, and what might those traces reveal about the social, cultural, and technological landscapes we're all navigating together?