How and when do UX researchers use AI?
According to the 2023 State of User Research Report, 20% of researchers are currently using artificial intelligence for research, with another 38% planning to do so.
With the majority of researchers either using or planning to use AI at work (as of May 2023, when the State of User Research survey was conducted), we wanted to learn more about the role of this new technology in our industry.
So in August 2023, we sent out a short survey to learn:
- Which AI tools are researchers currently using?
- Which aspects of the research process are researchers automating?
- What ethical concerns do researchers have about AI?
- What guardrails do researchers have in place to ensure ethical use of AI?
- What benefits and shortcomings are researchers experiencing with AI?
In this article, you’ll find an overview of our discoveries about the role of AI in UX research, based on 1,000+ survey responses from User Researchers, ReOps Specialists, and people who do research (PwDRs).
How to use this report
- UX Researchers: Use this report to understand how your peers are approaching AI, educate yourself about its potential pitfalls, and explore the most popular AI tools and features (spoiler alert: some of the tools you already use might have AI features you’re not aware of!).
- Research Ops Specialists: Learn how research teams are improving their efficiency and impact with AI, and discover guardrails to put in place to ensure safe, ethical use of AI in research.
- Other curious readers: Use this report to stay on top of one of the most complex and controversial conversations in the UXR industry today and discover what AI can and can’t (or should and shouldn’t) do to level up your research.
Methodology
The AI in UX Research Report was created by Product Education Manager (former Content Marketing Manager) Lizzy Burnam, with strategic guidance and support from Lead UX Researcher Morgan Mullen, Senior Data Scientist Hayley Johnson, Career Coach and UX Research Leader Roberta Dombrowski, and Head of Creative Content & Special Projects Katryna Balboni.
✍️🎨 This report was authored by Lizzy, and brought to life by Illustrator Olivia Whitworth.
We used SurveyMonkey to create the survey (with 3 screener questions, 10 single- or multi-select questions, and 2 open-response questions, to get a fair mix of quant and qual data) and distributed it to User Interviews audiences via our social media pages, our weekly newsletter, and a banner on our website. We also shared the survey in relevant UX research communities on LinkedIn, Facebook, and Slack.
Between August 14th and 18th of 2023, we collected 1,093 qualified responses from User Researchers, Research Ops Specialists, and people who do research (PwDRs) as part of their jobs. (An additional 1,091 people took our screener but did not qualify for our survey based on their responses.)
In most of the questions, we gave participants the option to write in an “other” response. Where applicable, we omitted redundant answers (e.g. when the response would be covered by a future question) or merged them with existing categories. For the two open-response questions, we eliminated exact duplicates and comments without meaning (e.g. “asdfghjk”).
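For the curious, here’s a minimal sketch (in Python) of what the deduplication and gibberish-filtering steps can look like. The file and column names are hypothetical, and our actual process also involved human review, so treat this as an illustration rather than our exact pipeline:

```python
import pandas as pd

# Hypothetical export of open-ended responses; "response" is an assumed column name.
df = pd.read_csv("survey_export.csv")

# Normalize whitespace and case so trivially identical comments match,
# then drop exact duplicates.
df["normalized"] = df["response"].str.strip().str.lower()
df = df.drop_duplicates(subset="normalized")

# Flag empty comments and keyboard-mash strings typed along a single
# keyboard row (e.g. "asdfghjk"), one common pattern of meaningless input.
def is_gibberish(text: str) -> bool:
    t = str(text).strip().lower()
    if not t:
        return True
    rows = ("qwertyuiop", "asdfghjkl", "zxcvbnm")
    return " " not in t and any(all(c in row for c in t) for row in rows)

df = df[~df["normalized"].apply(is_gibberish)]
```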
Audience
More than half of our audience self-identified as User Researchers (30.4%) and Product/UX Designers (20.8%). Only 5.9% of our audience identified as Research Ops professionals—we were surprised by this relatively small share, since ReOps is typically heavily involved in exploring the value of new technologies like AI for their team. But perhaps we just weren’t as successful in reaching this audience.
Write-ins for the “other” category included Founders/CEOs, Content Designers/Strategists, Developers, and CX/UX Strategists or Consultants.
Our findings
Our main takeaways from the survey include:
- 📈 77.1% of the researchers in our audience are using AI in at least some of their work.
- 🧰 The most-used UX research tools with AI-based features include Hotjar, Dscout, Glassbox, and Dovetail.
- 🤖 ChatGPT is the most widely used AI-specific tool, with about half (51.1%) of our audience saying they use it for their research.
- 🧲 About a third of researchers are using AI for document signature collection (34.5%), scheduling (32.5%), and screening applicants (31.6%).
- ✍️ Nearly half (47.8%) of our audience said they use AI for transcription.
- 🧑‍💻 Qualitative coding is the most popular analysis use case for AI.
- 📊 A plurality (45.5%) of our audience is using AI to help with writing reports.
- ⏩ Efficiency is the most-cited benefit of AI—but researchers still have reservations.
- 🛑 Despite the benefits, some researchers believe that AI’s current shortcomings may be too acute for the tools to be truly valuable.
- 😓 Researchers’ biggest concern about using AI is the potential for inaccurate or incomplete analysis.
- ❌ Nearly half (49%) of our audience is limiting the type of data they use with AI.
- ⚠️ UX Researchers seem to be the most cautious segment regarding AI.
We’ll dive deeper into these takeaways below.
📈 77.1% of the researchers in our audience are using AI in at least some of their work.
Obviously, not everyone’s using AI in their research—but among those who are, how often are they using it?
Note: There is probably some self-selection bias in our data for this question; because our survey was focused on AI in UX research, we likely attracted a higher proportion of folks who are open to using AI in their research than would be present in a general population of researchers.
That said, here’s what our audience said about how often they use AI.
The majority (77.1%) of our audience said they’re using AI in at least some of their research projects, while a minority (22.9%) said they’ve tried AI but don’t regularly use it in their work.
An additional 8.0% said they have never used AI. Since our goal with this survey was to learn more about how researchers are using AI, we filtered these folks out for the rest of the questions.
🧰 The most-used UX research tools with AI-based features include Hotjar, Dscout, Glassbox, and Dovetail.
If you’ve reviewed your tech stack recently, you may have noticed that many UX research tools have robust AI features built into their platforms. In our survey, we distinguished between these UXR tools and AI-specific tools because some folks may use the AI features in these popular UXR tools (either knowingly or unknowingly), and we wanted to capture this usage as well as the intentional use of AI-only tools.
More than 10% of our audience said they’re using the AI features in the following tools:
- Hotjar: AI-based survey builder and/or insights generation (23.6%)
- Dscout: AI-powered analysis and insights generation (20.4%)
- Glassbox: AI-driven visualization and/or analytics (20%)
- Dovetail: Automated clustering, summarization, and/or sentiment analysis (19.8%)
- Notion: Writing, editing, and/or brainstorming (19.7%)
- Grain: AI-based meeting notes, summaries, and/or workflows (17.6%)
- Indeemo: AI-powered prompts, character recognition, and/or analysis (16.3%)
- UserTesting: AI-powered insights generation (14.5%)
- Lookback: Automated analysis, note-taking, and/or insights generation (14.1%)
- Optimal Workshop: AI-based analysis and/or data visualization (14%)
- Qualtrics: AI-powered analytics (13.8%)
- Maze (13.2%) — Note: We incorrectly listed Maze’s AI features as automated analysis and reports. Maze’s actual AI-based capabilities include asking follow-up questions, rephrasing questions, renaming tests, and summarizing interviews. This discrepancy may have affected the number of folks who reported using Maze.
- Looppanel: AI-based transcription and/or note-taker tools (13.1%)
Participants were asked to check all the tools that they use, so responses were not mutually exclusive.
A small portion of our audience (4.2%) said they aren’t using any of these tools. About 2.8% of folks provided write-in responses for tools such as Miro, Zoom, Great Question, Google/Microsoft, Condens, Trint, Notably, and Grammarly.
✉️ P.S.—If you’re interested in UX research tools, then you should know we’ll be publishing a 2023 UX Research Software & Tooling Report very soon. Subscribe to our newsletter, Fresh Views, to be notified when that report drops!
🤖 ChatGPT is the most widely used AI-specific tool, with about half (51.1%) of our audience saying they use it for their research.
As one of the newest and most advanced AI tools, ChatGPT has garnered attention in almost every industry, from academia and advertising to entertainment and education (and even US courtrooms!).
Given this widespread buzz, we weren’t surprised to see that the majority (51.1%) of our audience said they use ChatGPT in their research. That’s more than 3x the number who said they use ClientZen, the second-most popular AI-specific tool in our survey, which is used by just under 15% of our audience.
A little over 5% of our audience wrote in additional tools. Common write-ins included Bard, CoNote, Claude, Azure, TL;DV, Fireflies, RewatchAI, and Read AI.
Notably, 5 respondents wrote in that they’re using internally-built/proprietary AI tools. In a later question, some people listed proprietary tools as a guardrail for safe, ethical use of AI in UX research, since they don’t allow public access to the tools or data.
4.1% of our audience said they aren’t using any of these tools. And despite the popularity of ChatGPT among our audience, some responses to our later question about AI’s limitations noted serious concerns about the role of ChatGPT (and large language models in general) in UX research:
“I am very skeptical about the role that AI should directly play in research because it violates the core epistemologies (especially reproducibility) that underlie qualitative methods[…] LLMs are probabilistic agents and the fact that the same dataset can get a different analysis each run through the model makes it an inappropriate tool for the job.”
📚 More reading: 20+ Powerful AI-Based Tools for UX Research Toolkits in 2023
🧲 About a third of researchers are using AI for document signature collection (34.5%), scheduling (32.5%), and screening applicants (31.6%).
Recruiting-related tasks are notoriously among the most painful aspects of a researcher’s job. In fact, 97% of researchers reported experiencing challenges during recruitment in the 2023 State of User Research Report.
(👋 That’s why User Interviews exists: to connect researchers with quality participants quickly, easily, and affordably).
Where there’s pain, there’s potential for the right technology to provide relief—but is AI the right tool for the job? Let’s look at what the data said about AI’s impact on recruiting.
Participants were asked to check all the tasks they use AI for, so responses are not mutually exclusive.
According to our survey, about a third of researchers say they use AI for signature collection, scheduling, and screening applicants.
(⏸️ Speaking of scheduling—did you know that User Interviews just revamped our scheduling feature for moderated research?)
In general, though, it seems like AI hasn’t influenced recruitment quite as much as other aspects of research; the percentage of folks who said they do not use AI for recruitment (24.2%) was significantly higher than the percentage who said the same about conducting research (4.7%), analysis (8.8%), or synthesis (9.6%).
There are different potential reasons for this—maybe folks don’t trust AI for the delicate tasks of recruiting, screening, and approving applicants. Or maybe there just aren’t as many well-developed AI-powered solutions for recruiting.
Whatever the reason, we know from the 2023 State of User Research Report that researchers use multiple methods to recruit participants—on average, 3 methods for recruiting customers and 2 methods for recruiting external participants. Perhaps AI will emerge as another top recruiting method in the future, but for now, the most popular methods include email, intercept surveys, and self-serve recruiting tools like User Interviews.
This is a comparison of responses from the questions on recruiting, synthesis & reporting, analysis, and conducting research, for which the responses were not mutually exclusive.
We also got a handful (0.8% of our audience) of write-ins. Most of these mentioned using AI for writing/editing recruitment assets and different aspects of panel management, such as segmenting audiences and updating participant data.
📚 More reading: 17 Research Recruiting and Panel Management Tools
✍️ Nearly half (47.8%) of our audience said they use AI for transcription.
Effective note-taking during research sessions is tough, especially when you’re trying to be an attentive listener and moderator at the same time. It seems that this is one area where AI has offered a lot of help, with 47.8% of our audience saying they use AI for transcription and 40.8% saying they use it for note-taking.
Participants were asked to check all the tasks they use AI for, so responses are not mutually exclusive.
2.5% of our audience also wrote in other ways they use AI to conduct research, including:
- Generating interview questions/scripts
- Sparking new ideas and creativity
- Detecting risk or bias
- Creating prototypes
🧑‍💻 Qualitative coding is the most popular analysis use case for AI.
Qualitative coding is one of the most complex and time-consuming types of UX research analysis—so it’s no surprise that a plurality (40.4%) of our audience is using AI to streamline the process.
Researchers were asked to select all of the analysis tasks they use AI for, so responses are not mutually exclusive.
However, it sounds like AI’s analysis capabilities still leave something to be desired. A handful (0.5%) of write-ins said they’ve experimented with AI for analysis but found it wasn’t very helpful, or that they could do it more quickly and accurately on their own. Although the number of such responses is small, these sentiments align with many of the responses we received about AI’s shortcomings in a later question.
For example, one respondent said:
“I tried qualitative coding and it was a disaster. Worse - it looked right for the first 10 minutes and then turned out to be very, very wrong.”
📊 A plurality (45.5%) of our audience is using AI to help with writing reports.
Given our audience’s high self-reported usage of ChatGPT—a large language model known for generating fluent, conversational text in response to prompts—it may come as no surprise that a large percentage of our audience (45.5%) reported using AI to help them write reports.
Participants were asked to check all the tasks they use AI for, so responses are not mutually exclusive.
A small percentage of our audience (0.5%) used the “Other” field to tell us they are using AI for editing, data clustering/mapping, and summarization.
📚 More reading: 31 Creative UX Research Presentations and Reports – Templates and Examples
⏩ Efficiency is the most-cited benefit of AI—but researchers still have reservations.
We’ve learned that researchers are using AI in their projects, and we’ve explored the different areas of research that they’re using AI for. Now, let’s uncover why they’re using AI—what benefits does it offer?
This was an open-response question. Themes that came up in fewer than 1% of responses have been omitted.
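(A quick aside on how the 1% cutoff works: once the open responses have been manually coded with theme labels, computing each theme’s share of comments is straightforward. The sketch below assumes a hypothetical coded export with one theme label per row; it isn’t our exact pipeline.)

```python
import pandas as pd

# Hypothetical export of manually coded comments, one theme label per row.
coded = pd.read_csv("benefits_coded.csv")

# Each theme's share of all coded comments, keeping only themes at >= 1%.
shares = coded["theme"].value_counts(normalize=True)
shares = shares[shares >= 0.01]

print((shares * 100).round(1).astype(str) + "%")
```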
In open-response comments describing the benefits of AI, the terms “efficiency,” “time savings,” “doing more with less,” and related phrases came up most often (in 28.4% of comments). For example, one person described valuable time savings for their research team of one:
“As the only UX'er in my company, my research projects often span a longer timeline than I'd like (out of necessity). Plugging my session notes into ChatGPT to quickly generate commonalities or using Notion to summarize session transcripts inline gives me a good starting point to dig into my analysis.”
Folks also had a lot to say about AI’s creative support, with one researcher describing themselves as having a “thought partner relationship” with AI. AI seems to help many researchers “just get started” when tackling a blank page:
“If I'm putting off getting started on drafting a research plan, recruitment email, or coding data I'll let AI take a stab at it because it spits out stuff instantly! Then I'm editing instead of generating which feels easier to get started. Most of the time I have to do 95% of the work over, but getting that start is valuable. It also helps me check my initial ideas by [comparing them with] what AI thinks.”
Others said that AI helped them bridge various types of communication gaps, from communicating across cultures and languages to collaborating across teams:
“By far the most valuable part of AI tools for me is being able to translate terminology and requirements from stakeholder language to researcher language, and back again, to help bridge communication gaps.”
This same sentiment was echoed by another researcher, who also said that AI-based insights seem to improve their perceived credibility among stakeholders:
“Stakeholders also tend to believe [my analysis] more if the AI finds the trends over the humans, which is a bit annoying as a researcher, but is gratifying to see the work being accepted by the client.”
Despite these reported benefits, almost everyone seems to agree that AI is not perfect. Many people were careful to include qualifiers about the extent of these benefits and their potentially hidden costs. For example, some researchers mentioned the need to check AI outputs and the dangers of assuming that AI is correct:
“AI tools (LLMs mostly) are useful at processing large volumes of words for things like sentiment analysis and other forms of pattern detection. I've tested a couple of tools that purport to do this with research material but have not had a great deal of success. The tools I've tried have been incomplete and inaccurate to a degree that I question whether I'm saving any time or effort.”
Let’s dive deeper into the limitations of AI in the next section.
🛑 Despite the benefits, some researchers believe that AI’s current shortcomings may be too acute for the tools to be truly valuable (yet).
The researchers in our audience listed increased efficiency as the top benefit of AI. But does this efficiency come at too great a cost?
In an open-response question about the limitations of AI, many people noted that AI’s outputs are often low-quality: inaccurate, surface-level, or incomplete.
Here’s a handful of the words and phrases that participants used to describe AI’s outputs:
- “Hallucinations”
- “Wonky”
- “Absurd”
- “Nonsense”
- “Embarrassing”
- “Really poor (and frankly, hilarious)”
- “Like a junior assistant”
- “Like a seventh grader”
- “Like a glorified Google search”
- “Basically useless”
This was an open-response question. Themes that came up in fewer than 1% of responses have been omitted.
One respondent said:
“Accuracy is the number one issue. AI doesn't know the context and it can only get you 70% of the way there. My fear is researchers will fail to understand the why behind their craft if they lean too heavily on AI to do the work for them. Research is not always black and white and you often have to look beyond the face value of things.”
Many people also said that AI needs so much human review (described by one respondent as “hand holding”) to catch errors that its limitations undermine any efficiency gains. Others noted that, although human review may be considered a limitation, it’s also par for the course; “the tools were never meant to function by themselves.”
Several comments also mentioned the danger of inexperienced researchers (such as junior UX researchers or people who do research outside of the dedicated UX research team) putting too much trust in AI, leading to false insights and misguided decision-making.
For example, one person said:
“It's too easy to accept what the AI has suggested as the right answer — especially working with junior colleagues and interns who may not disclose that they're using AI to generate insights and synthesize data. Everyone has to be more critical, and that ends up creating more friction on the team.”
Ethical concerns such as biased outputs and negative socioeconomic outcomes (e.g. discrimination and unemployment) were also a recurring theme in the responses. One person said:
“The inherent gender and racial bias we've encountered in anything more than summarizations or decisions made with strict parameters are absolutely a deterrent to using [AI] for anything more ‘cutting edge’.”
In general, the researchers in our audience seemed to have very low trust in AI, and many wished AI solution providers were more transparent about their data protection policies (or lack thereof). One researcher said:
“AI tools are not transparent enough about what is happening with the data. We need to know EXACTLY where our data is going (who owns the server that the data is being transferred to, ex AWS, Azure, GCP). This is not immediately transparent when interacting with the feature itself. We do not have the bandwidth to engage reps from each of these companies to gather these answers, and sometimes they don't know themselves [...] Making this transparent would greatly alleviate many of our issues.”
Others mentioned a desire for specific guidance on how to use AI effectively without risking user data, describing a “learning curve” that limits their ability to try new tools.
A very small portion (2%) of our audience felt that there were no limitations, or that they hadn't used AI enough to notice any.
😓 Researchers’ biggest concern about using AI is the potential for inaccurate or incomplete analysis.
It’s safe to say that folks have concerns about AI.
Anecdotally, you don’t have to search the internet for long to find opinion articles, social media posts, TikTok videos, and other examples of people raising the alarm about the potential implications of AI for how we work, live, and evolve. Indeed, only 0.5% of our audience (3 people!) said they had no concerns regarding AI in UX research.
When we asked researchers to select their primary concern about AI in UX research from a list of options, we found that for a plurality of researchers (29.7%), the #1 concern is the potential for inaccurate and/or incomplete analyses. After inaccuracy, the biggest concerns were a lack of data privacy (19.7%) and the potential for introducing bias into study findings (14.1%).
We wanted to get a sense of the most pressing concerns, so responses to this question were mutually exclusive. Even so, we still had a handful of researchers write that they were concerned about all of the above.
Folks seem least concerned about the prospect of AI eliminating the need for human researchers (only 6.4% selected this as their biggest concern). This may be due to a lack of trust in AI’s current capabilities; in our open-response questions, many folks said they wouldn’t rely on AI without taking the time to manually review its outputs. As one respondent said:
“AI will never replace human centered research. The 'human' part has to go both ways. I don't think AI can truly catch all the tiny aspects that go into a user experience and synthesize them in a way that makes sense or is helpful/accurate. I see it as a tool to help us save time on tedious tasks, but not to be able to do research on its own.”
Note that in order to get a sense of people’s most pressing concerns regarding AI, we required participants to choose only one response to this question. However, 8 participants couldn’t choose, explaining in write-in responses that they were concerned about all of the above. As one response said:
“I can't pick a single primary concern. I have several. And they all matter.”
❌ Nearly half (49%) of our audience is limiting the type of data they use with AI.
Whether you’re worried about it or not, AI is now an intrinsic part of working life. If researchers are going to use AI at work—regardless of its existing limitations—it stands to reason that there should be guidelines and guardrails in place to ensure the safest, most effective use possible.
So, what kind of steps are researchers taking to mitigate their concerns about AI and its shortcomings?
Participants were asked to check all of the guardrails they’re currently using, so responses are not mutually exclusive.
Nearly half (49%) of our audience said they limit the type of data they're using with AI tools. A large segment (38.8%) also said they ensure human review of all AI outputs, while many others (35.5%) said they collaborate with their legal team to stay compliant with AI laws and regulations.
In some cases, researchers felt that these guardrails may be interfering with AI’s potential to provide true benefit to research teams. As one researcher noted in an open response to a later question about limitations:
“The AI Summaries feature could help our team generate high level insights for informal debriefs... However, we currently can't leverage this technology because of corporate guardrails in place limiting our use of AI.”
Of the write-in responses we received (1.8%), several said they avoid using AI in their research entirely, as it’s “the best guardrail there is.” A handful of other folks said they’re still working to determine the right guardrails with their team.
⚠️ UX Researchers seem to be the most cautious segment regarding AI.
Of all the folks who participated in our survey, dedicated UX Researchers seem to be the most cautious about AI. This may not come as a surprise, since UX Researchers are typically trained to apply higher levels of rigor to their research than folks who conduct research as an adjacent part of their job.
When we segmented the data by role, Customer Experience/Support professionals were the most likely to say they use AI in almost all of their research projects (40%), followed by ReOps Specialists (19.4%) and Marketers (17%). UXRs were the most likely to say they've tried AI but avoid using it in their research (44.3%).
Note that statistical significance was calculated using a standard 95% confidence level. With the exception of the 30.7% of Research Operations Specialists who said they use AI in some of their projects, all of these differences were found to be statistically significant.
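For readers who want to run similar segment comparisons on their own data, here is a minimal sketch of one common approach: a two-proportion z-test evaluated at the 95% confidence level. The counts below are made up for illustration (they are not our survey’s actual cell sizes), and we aren’t claiming this is the exact test used for this report:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: respondents selecting "tried AI but avoid it"
# among UX Researchers vs. all other roles. Not our actual cell sizes.
successes = [110, 150]  # respondents selecting the answer, per segment
totals = [248, 845]     # total respondents per segment

z_stat, p_value = proportions_ztest(successes, totals)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# At the 95% confidence level, p < 0.05 indicates the two segments'
# proportions differ by more than chance alone would suggest.
print("significant" if p_value < 0.05 else "not significant")
```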
UX Researchers were also the least likely to use AI in their projects across all aspects of research. For example, 11.5% of UX Researchers said they’re not using AI to conduct research, compared to less than 4% in each of the other segments.
The UXRs who said they are using AI to conduct research are mostly (60.7%) using it for transcription.
Responses are not mutually exclusive.
The UX Researchers in our audience were also the least likely to have tried AI tools and UXR tools with AI-based features. These results were consistent across all of the tool- and use-case-related questions, which may indicate higher rates of trepidation about AI among folks in the UXR segment.
Responses are not mutually exclusive.
More questions than answers
All in all, our clearest takeaway from this survey is that researchers definitely have mixed feelings about the (mostly) uncharted territory of AI.
- 🔎 AI can introduce bias into study findings, but it can also be used to detect bias in interview scripts and analyses.
- 💬 AI can support multilingual communication, but it also falls short in its effectiveness for non-native speakers.
- ⏳ AI can improve the efficiency of common research tasks, but the time required for review may ultimately undermine those efficiency gains.
These are just some of the complex issues and contradictions that researchers brought up in their survey responses.
At the end of the day, it seems that there’s no quick or clear solution for reaping AI’s benefits without risk. As one respondent said:
“AI that automates decisions can scale both benefits and harm.”
So, while we learned a lot from this survey, the results have also opened up a whole new wave of questions for us, including:
- What motivates researchers to choose certain AI tools over others?
- What signals and characteristics can researchers look for when assessing the accuracy, security, and reliability of different AI tools?
- How much do researchers actually value efficiency/speed in research, and do they value this enough to see the efficiency gains from AI as worth it? Does it vary by role, team structure, or seniority?
- Why are fewer researchers using AI for recruiting than for conducting research, analysis, and synthesis?
- How concerned are researchers about AI replacing the demand for human participants (e.g. with tools like Synthetic Users)? Are they concerned at all?
- Do the primary ethical concerns differ among researchers who’ve never tried or used AI tools?
- What resources does the UX research community need to develop the skills and knowledge for effective decision-making about if, when, and how to use AI?
- How are researchers tracking the impact of decisions made based on AI-generated insights?
- Are researchers more or less likely to use AI in their projects in specific industries or company types?
- What can AI tool providers do to overcome trust issues among researchers?
We’ll be chewing on these questions as the UXR industry and AI technology continue to evolve. In the meantime, we’d love to hear from you—what are your thoughts about this report? What questions do you still have? Tag us on LinkedIn or Twitter to share your feedback, or email the author directly at lizzy@userinterviews.com.
Keep an eye out for future insights about UX Research
If you found this report valuable, then you’ll probably like these resources too:
- The 2023 State of User Research Report
- The 2023 UX Researcher Salary Report
- Top 5 Takeaways from the 2023 User Interviews Research Panel Report
- How Teams Do Continuous Discovery Research Today, According to Research
- Upcoming Webinars and Virtual Events by User Interviews
- [📣 Coming Soon!] The 2023 UX Research Software & Tooling Report
✉️ Sign up for our newsletter to be the first to know when we publish new research about the who, what, how, and why of UX research.