UX research is a critical part of creating truly user-centered products. Data gathered through the research process can inform decision-making, lead to more satisfying and relevant projects, and boost business success.
But what happens if you notice red flags while you’re looking through a data set? Perhaps the participant responses are suspiciously similar, the insights simply don’t match the data, or your entire cohort lacks diversity.
Those are just a few of the pitfalls UX teams might find themselves up against. This guide will help your team avoid some of the common challenges in gathering data that’s valid, reliable, and beneficial to your customers.
Data quality is critically important in UX research. The accuracy of your data affects every aspect of the research process, as well as the reliability, usefulness, and accuracy of your insights.
Here are some of the key areas that data quality impacts:
Insight accuracy: gaining correct and reliable insights is impossible without accurate data. Data that is skewed, biased, or gathered incorrectly will lead to false or even exaggerated takeaways. This can impact the entire project.
Solving the right problem: solving the wrong problem can lead to issues in organizations. Without reliable data, your products might address challenges your customers don’t have instead of those they do have.
Making the right decisions: the decisions your team makes (e.g., to release a new feature, focus on a certain market segment, or produce a new product) must be based on high-quality data. If not, decisions are more likely to end in failure.
Long-term success: relying on low-quality data not only impacts the project at hand, but it can also have a broader long-term impact on the organization as a whole.
It’s not uncommon to spot things that don’t add up when looking through a data set. Imagine you’re looking through notes from a focus group and it appears that every participant had the exact same feedback and sentiments. Or perhaps you’ve deployed a survey and a user has checked option A for every single answer. In another scenario, you might notice that your data set has no diversity.
Detecting these errors, spotting bias, and reacting accordingly is essential for maintaining data quality.
Here are some red flags to look out for:
Unreliable responses occur when a participant hasn’t answered honestly or when an honest participant answers inaccurately.
It’s impossible to weed out every case, but looking for faulty response patterns that stand out is a good step.
For example, a participant selecting the same response choice for every question could be a red flag. So could one-word answers to open-ended questions or copied-and-pasted responses.
Identify and remove these responses from the overall data set whenever they occur. If you notice a trend of unreliable responses, consider reviewing the research study’s design: it might be too long or confusing.
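As a rough illustration of this kind of screening, the sketch below flags straight-lining respondents, meaning anyone whose answers never vary across questions. The data structure and participant IDs are hypothetical, not from the article:

```python
def flag_straightliners(responses):
    """Flag respondents who chose the same option for every question.

    `responses` maps a respondent ID to that person's list of answers.
    Returns the set of IDs whose answers never vary.
    """
    flagged = set()
    for respondent_id, answers in responses.items():
        # A set collapses duplicates, so size <= 1 means every answer
        # was identical (or the respondent answered nothing at all).
        if len(set(answers)) <= 1:
            flagged.add(respondent_id)
    return flagged

# Illustrative survey data: p1 and p3 picked the same option every time.
survey = {
    "p1": ["A", "A", "A", "A"],
    "p2": ["A", "B", "C", "A"],
    "p3": ["C", "C", "C", "C"],
}
print(sorted(flag_straightliners(survey)))  # → ['p1', 'p3']
```

A real screening pass would usually combine this with other signals, such as completion time, before discarding anyone.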
Your team should come into a research project with an open mind instead of fixed ideas. If there’s a desired outcome, this can heavily skew the results and lead to confirmation bias—the tendency to accept ideas that align with the desired outcome even if they don’t represent the data.
Diverse views ensure your insights are reliable and applicable to a broader user base. Information from a narrow group will likely skew the data and not accurately represent the market.
Creating data entries accurately is of the utmost importance. Manually entering data or collecting data in multiple formats can make this process tricky and unreliable. Various tools can increase accuracy and speed up the data entry process.
Data quality doesn’t just matter during the collection process; it’s also critical during measurement and analysis. Manual processes won’t usually cut it when it comes to large data sets. As with data collection, advanced tools can help prevent mistakes.
Asking the right questions can help you avoid unhelpful or inaccurate responses. This is true whether you’re deploying a survey, conducting an interview, or running a focus group.
Here are some ways to design effective questions:
All useful projects start with clear, measurable goals. When writing your survey or interview questions, link them back to the overall goals to keep the project aligned and on track.
The goal of asking your participants questions is to get into their heads, not to confirm what you want or expect to be true. You gain accurate, reliable data by asking questions neutrally, without suggesting the answer.
When participants feel overwhelmed, they might answer in ways that don’t align with their thinking. That’s why it’s essential to ask one question at a time, allowing the person to consider their answer carefully.
Try including a red herring question in your survey: an attention check with an obvious or unusual correct answer. This helps you weed out poor data from distracted participants who are rushing through the survey.
Keep your questions concise, clear, and free of technical language to avoid confusion and misunderstandings. The more conversational, the better.
Closed questions are easier to calculate and quantify, but open-ended questions give the data nuance and color. A mixture of both can ensure you gain both logical and unexpected insights and context from participants.
Checking the quality of the response in an open-ended question can help identify poor data. Look out for one-word answers or responses that don’t make sense (in comparison to thoughtful, logical responses).
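One simple heuristic for this check, sketched below with an illustrative word-count threshold, is to flag open-ended answers that are too short to be thoughtful:

```python
def low_quality_open_ended(answer, min_words=3):
    """Heuristic: treat very short open-ended answers as low quality.

    The three-word threshold is an illustrative default, not a standard.
    """
    return len(answer.split()) < min_words

# Hypothetical open-ended responses keyed by participant ID.
answers = {
    "p1": "The checkout flow confused me because the button was hidden.",
    "p2": "good",
    "p3": "n/a",
}
flagged = [pid for pid, text in answers.items() if low_quality_open_ended(text)]
print(flagged)  # → ['p2', 'p3']
```

Length alone can't catch answers that are long but nonsensical, so flagged responses should still get a human review rather than automatic removal.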
In addition to asking the right questions, there are other measures you can take to ensure your data is reliable and valid.
All researchers should keep the following in mind:
You might gather responses from multiple sources, but consistent ranking scales are essential, especially for closed questions. Consistent scales make your data much easier to analyze and draw insights from.
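When responses do arrive on different scales, a linear rescaling can bring them onto a single scale before analysis. The sketch below assumes simple numeric ratings; the 1–10 and 1–5 scales are illustrative:

```python
def rescale(value, old_min, old_max, new_min=1, new_max=5):
    """Linearly map a rating from one scale onto another (e.g., 1-10 to 1-5)."""
    fraction = (value - old_min) / (old_max - old_min)
    return new_min + fraction * (new_max - new_min)

# A 1-10 rating of 10 maps to 5.0 on a 1-5 scale; a 1 maps to 1.0.
print(rescale(10, 1, 10))   # → 5.0
print(rescale(1, 1, 10))    # → 1.0
print(rescale(5.5, 1, 10))  # → 3.0 (the midpoint maps to the midpoint)
```

Note that linear rescaling preserves order but not necessarily meaning; the labels participants actually saw still matter when interpreting the merged data.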
Thinking about bias at the outset will help you eliminate it as much as possible from your project. You can take steps to remove bias by ensuring there isn’t an expected outcome, not asking leading questions, and avoiding biased samples by working with a diverse participant cohort.
Collect data through a standardized tool. This will ensure your measurement process is accurate and efficient, making the analysis a breeze.
Addressing any data issues at the outset is important. This means highlighting them before any conclusions are drawn or business decisions are made.
This can be performed by:
As you complete the analysis process, flag any data that looks out of place or potentially flawed.
Don’t make assumptions too quickly. Instead, take a good look at any flagged data to understand whether it’s flawed and why. Properly understanding this can help you identify what went wrong and come up with potential solutions.
Consider how missing data and outliers impact the overall project. Determine whether the missing data is integral to the project or undermines the validity of the findings. If it is, you might have to redo the research or fix those entries.
You can perform fixes where an incorrect data entry has been made or a response is clearly incorrect. Usually, this means removing the low-quality entries to prevent them from skewing the results. In some cases, you might need to validate the responses and recheck any potential errors.
Avoid performing fixes or making changes without detailed notes to ensure all parties are aware of the edits and the reasoning behind them. This avoids work happening in silos without explanation.
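A lightweight way to keep everyone aware of edits is to record each fix alongside the change itself. The sketch below uses a hypothetical in-memory log; a real team might capture the same fields in a shared document or changelog:

```python
from datetime import date

edit_log = []

def remove_entry(dataset, entry_id, reason, editor):
    """Remove a flawed entry and record who removed it, when, and why."""
    removed = dataset.pop(entry_id)
    edit_log.append({
        "entry_id": entry_id,
        "reason": reason,
        "editor": editor,
        "date": date.today().isoformat(),
        "removed_value": removed,  # keep the original so the edit is reversible
    })
    return removed

# Hypothetical data set: r2 failed the survey's attention-check question.
data = {"r1": {"q1": "A"}, "r2": {"q1": "A"}}
remove_entry(data, "r2", "failed attention-check question", "alex")
print(list(data))             # → ['r1']
print(edit_log[0]["reason"])  # → failed attention-check question
```

Keeping the removed value in the log means any edit can be audited or reversed later, which is the point of avoiding silent fixes.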
Gaining insights that can lead to positive actions relies on the data analysis process. Once accurate data has been collected, it should be analyzed correctly to draw helpful conclusions that will lead to better products and happier customers.
Accurately analyze your data with the following steps:
Missing values, data that raises red flags, inconsistencies, and outliers should be excluded from the larger data set before analysis is performed (if they haven’t been addressed already).
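As a minimal sketch of this cleaning step, the function below drops missing values and then removes numeric outliers with a z-score cutoff. The two-standard-deviation threshold and the task-time data are illustrative choices, not rules from the article:

```python
import statistics

def clean(values, z_threshold=2.0):
    """Drop missing values, then drop values more than `z_threshold`
    population standard deviations from the mean."""
    present = [v for v in values if v is not None]
    mean = statistics.mean(present)
    stdev = statistics.pstdev(present)
    if stdev == 0:
        return present  # all values identical; nothing to trim
    return [v for v in present if abs(v - mean) / stdev <= z_threshold]

# Hypothetical task-completion times in seconds; 300 s is a clear outlier.
task_times = [12, 14, 13, None, 15, 11, 300]
print(clean(task_times))  # → [12, 14, 13, 15, 11]
```

Z-scores are a blunt instrument on small or skewed samples, so flagged points are best reviewed by hand before final removal.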
Once the data is cleaned, choose an analysis technique that aligns with your project, your overall goals, and the data you’ve collected. Descriptive statistics, user journey mapping, and thematic analysis are just a few of the options for organizing the data.
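Descriptive statistics are often the quickest of these to apply. A minimal sketch, using made-up usability scores:

```python
import statistics

# Hypothetical per-participant usability scores (0-100 scale).
scores = [72.5, 85.0, 90.0, 67.5, 77.5, 80.0]

summary = {
    "n": len(scores),
    "mean": round(statistics.mean(scores), 1),
    "median": statistics.median(scores),
    # Sample standard deviation: how spread out participants were.
    "stdev": round(statistics.stdev(scores), 1),
}
print(summary)
```

Reporting the spread alongside the mean matters: a mean of 79 hides very different stories depending on whether scores cluster tightly or range from 40 to 100.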
The right tools can speed up the process and ensure you gain accurate and reliable insights. With Dovetail, for example, teams can have all their customer data in one place. The tool allows them to tag themes, segment data, uncover patterns, and share insights to action quickly across the business.
Stay objective from the beginning to the end of the project, and avoid jumping to conclusions. Be curious about what story the data tells and what you can learn from the research process, rather than waiting for it to confirm what you hoped. The results may be surprising; that’s why this process is essential.
Evaluating the trustworthiness and credibility of all research findings can help you maintain high-quality data and get reliable insights. It doesn’t matter whether or not your team performed the research. This keeps data integrity high and avoids reliance on research that doesn’t tell the whole story.
The following questions can help you evaluate the trustworthiness of research:
Were the research methods used relevant to the project?
Is the data accurate?
Has the data been cleaned? Have outliers and missing data been removed?
If you performed this research again, would these results likely be replicated?
Did the results come from a diverse cohort of participants?
Did the process involve any coercion or bias?
Is the sample large enough to support statistically significant results?
Are there missing aspects that require additional research?
Are you relying on secondary research that has been validated and peer-reviewed?
Were there any conflicts of interest that need to be addressed?
Have you documented your findings, processes, and data edits?
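For the sample-size question, a quick back-of-the-envelope check is the worst-case margin of error for a proportion. The sketch below assumes a 95% confidence level (z = 1.96); the sample size of 100 is illustrative:

```python
import math

def margin_of_error(n, confidence_z=1.96, p=0.5):
    """Worst-case margin of error for an estimated proportion from a
    simple random sample of size n (p = 0.5 maximizes the margin)."""
    return confidence_z * math.sqrt(p * (1 - p) / n)

# With 100 respondents, a reported percentage is only accurate to
# within about +/- 10 percentage points at 95% confidence.
print(round(margin_of_error(100), 3))  # → 0.098
```

This formula assumes a simple random sample, which UX cohorts rarely are, so treat the number as a rough sanity check rather than a guarantee.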
A critical aspect of UX research is ensuring data quality is high. This ensures that any research has reliable and meaningful results and is free from bias and inaccuracies.
Following best practices can help you avoid common pitfalls and keep your research’s integrity and validity high. Practices like recording data accurately, working with diverse participants, following data cleaning practices, and using advanced tools for analysis can help.
Gaining high-quality data can help foster trust in research outcomes. It can also lead to more accurate insights, which you can leverage to create better products—all with the goal of benefiting the end user.
Some of the core best practices for keeping data quality high include accurate data collection, consistent ranking systems, cohesive survey questions, data cleaning practices, and detailed analysis with advanced tools.
The common pitfalls include asking leading questions, having a lack of diversity in the sample group, entering data inaccurately, letting bias influence results, and relying on unclean or low-quality data.
Data that is reliable and valid is collected from a diverse group of participants without bias, accurately recorded, and cleaned. Outliers and missing entries are removed, and the data is analyzed using advanced and accurate tools. These, among other practices, can help ensure data is high-quality and reliable.
Last updated: 18 April 2024