
Mediated meaning: AI’s role in the research craft

Last updated
27 June 2025
Published
12 June 2025
Creative
Sherline Maseimilian

At Insight Out 2025, we brought together a panel of research leaders to unpack how AI is reshaping the research craft. What’s the promise? Where are the pitfalls? And what is the right balance between human expertise and AI automation?

The conversation, moderated by Dovetail’s head of product marketing, Alissa Lydon, brought together three minds with their own nuanced perspectives: Kaleb Loosbrock, Staff Researcher at Instacart; Sam Ladner, research consultant and author; and Jared Forney, Research Operations Principal at Okta.

We’ve distilled their 40-minute discussion into a Q&A, editing their responses for clarity and flow.

How has AI changed research from your perspective? What’s the promise in it, and where are the pitfalls?

Jared Forney: The AI enhancements that we have found in our organization are fairly recent. Being a very security-conscious organization, we’ve approached things deliberately, thinking through the potential impacts not only for our business but also for the insights that we’re distributing.

Where we’ve really started to find inroads is, fittingly, with Ask Dovetail—it has been a really important part of where our AI journey has started to take off. It’s helping our researchers evolve their role: not just being a voice for the customer, but also being a voice for the organization. It takes on things that would have been really time-consuming—everything from administrative tasks to distilling large volumes of data into a better sense of the big picture—and frees them to do what they do best, which is to be that advocate at the corporate level and tell that story in a way that nobody else can.

Sam Ladner: For me, I think I’ve been using forms of artificial intelligence for a really long time. I mean, I remember coming back from a field study when I was at Microsoft, maybe 10 years ago, using voice recognition to help me summarize my field notes much more quickly. So it was a productivity enhancement. I had a photo app on my laptop that helped me auto-code some of my photos. I didn’t find this to be revolutionary at all at the time, and I still don’t really.

What I’m starting to see is more and more people willing to take AI to do the work that the researcher’s good at. The researchers who are able to use AI to their advantage are the ones who recognize who they are and their skillset.

What does the balance between human and AI automation look like in practice? Is there a hard line where AI ends and the human begins?

Sam Ladner: No, I don’t think there should be a quote-unquote hard line. There’s a meta point here: as researchers, our superpower is being able to abstract deeper meanings from our interactions with users and customers. So we should use that same skill here. We should look for the human aspect of what we’re trying to find out. And what does that look like? It looks like meaning. It looks like symbolism. It looks like humor. It looks like rapport. These are the kinds of things that we should be able to dive into much more richly.

Kaleb Loosbrock: That’s something that every organization is struggling with. It’s a perfect example of where research should be able to step in and say, “Okay, here’s what we’re gonna do. We’re gonna make sense of this. We’re gonna give you meaning. We’re gonna draw some of those lines.”

At Instacart, one of my colleagues came up with a framework for where AI should sit within the product and our operations. We call it the 3A framework: Assist, Accelerate, and Anticipate. Assistance is pretty straightforward. Acceleration grows out of that assistance; by assisting, we accelerate. And through that, we can start to anticipate. Anticipation is a little bit harder in our field. I think it’s easier with quant data, but I think we’ll get there.

Jared Forney: I think something that everybody should be thinking about is encouraging the creation of a charter within your organization for your AI principles. So much of this uncertainty comes from having no stake in the ground.

Sam Ladner: I completely agree with that, and I think that’s the thing that we might not be pulling out strongly enough, that this isn’t like an existential dread moment. This is a re-crafting moment. I use the word "re-crafting" when I’m thinking about job crafting, which comes from organizational behavior. You craft your own job. You don’t start with a bunch of duties; you start with desires for outcomes, things you enjoy, and then you work backwards.

If you want AI to be accelerating or assisting you, then go with that and then come backwards and say, "Okay, I don’t want AI to be replacing me as an interviewer because I’m way better at that. I'm a much better conversationalist, I’m much more empathetic, I can build rapport, and I understand turn-taking much better."

There’s a lot of buzz around AI interview tools. What are your thoughts on this, and why would a team want to remove the human element from user interviews?

Sam Ladner: The question you need to ask yourself is: why would anybody want to not have contact with the humans that use their product? That’s the question you should be asking. If you’re gonna auto-code your interviews, as the interviewer, you need to be good at sprinkling keywords in a way that doesn’t make you look weird, you know? Just get the AI to do the stuff that it does best.

Jared Forney: I see a lot of concern from the research field about, “Is this going to replace me?” The thing is, we’re seeing an evolution, and it underscores the absolute need for the researcher in this role. For all the great analysis and summarization tools, none of that exists without the interview, without the researcher, without the participant. It sits in a vacuum. So if anything, it only underscores the absolute importance of the researcher in ensuring that we have good data coming in from the start.

How do you see the researcher’s role evolving as you vet and implement AI tools, especially when you’re the guardians of customer knowledge?

Kaleb Loosbrock: The way that I’ve said it before is we are the arbiters of truth, and I think that’s very true in this day and age—checking the accuracy of AI and holding it accountable, making sure that it’s actually factual, which is incredibly difficult to do. Historically we’ve been very tied to our craft. We’ve been operators. We execute, we know how to code, and we do all these things, and we pride ourselves on the output, not necessarily on turning insights into action.

I think that will shift. In the age of AI, we will go from operator to oversight. We’re going to now start looking at these machines, supervising them, making sure that they’re doing things ethically and responsibly, understanding where their stopgaps are, or where they didn’t quite understand something. I’m already seeing that in my own practice when I’m leveraging AI.

Jared Forney: I think it’s already happening. The most common words in the vernacular of the business we work in are "impact driven," followed shortly by "metrics." When I think about my role, it’s as an arbiter, working in concert with legal, with privacy, with our research team, with our product teams, and with our participants. I think that’s the other thing that sometimes gets lost in the mix—the participant experience. With all the tools we’re seeing, with AI being able to serve as the virtual moderator, how do our participants feel talking to machines?

Let’s imagine we reconvene a year from now. What are your predictions for AI in the research craft?

Sam Ladner: The secret to foresight isn’t asking how this technology is going to change everything. The secret is looking at the multi-dimensional shifts of change and how they complement each other. So I would put the economic and the political changes alongside this. Where is AI going to go in terms of research? I think economically, we’re going to see many organizations that do not have researchers and will not pay for researchers using AI and using it badly. And we’re going to see a lot of high-profile issues. The fifth AI winter is already here, in my opinion.

Jared Forney: To put an optimistic lens on it, I see an opportunity for co-creation. There is a constant give and take, and there are mistakes that are made both on the AI front and on the human front. I’m hoping that we’ll see that interplay expand out in the market in such a way that we get a better understanding of where AI fits. The most successful organizations will find ways to make co-creation and collaboration more part of AI’s role and ethos in the market. That’s my optimistic viewpoint on it.

Kaleb Loosbrock: My hope is that we will learn to dance with the devil and make it our dance partner. We’re gonna learn how to work this tool better than anybody else because we’re researchers and that’s what we do best. We understand how to learn and how to adapt way better than anybody else that I know of. And that’s why I love being in research. I love learning. Simple as that.

Discover more panels, talks, and sessions from Insight Out 2025 on our YouTube channel.

Interview questions and answers have been edited for clarity and length.
