
Cognitive Biases in User Research

  • Writer: Lynda Elliott
  • Oct 24, 2019
  • 16 min read

Updated: Sep 24

As researchers, it's really important that we understand at least some cognitive theory and psychology. Not only does this affect the people we're going to be interviewing (and the things we need to watch out for with them), but it also affects us as user researchers.

Everybody has unconscious biases, and they determine - and predict, in a sense - how we react to things, how we respond to things, and how we do things.

Dr Susan Weinschenk is a behavioural psychologist, an author and a consultant. In this conversation, we talk about the top cognitive biases that affect user researchers - and how to mitigate them.

User researcher

Confirmation Bias

I'd like to discuss the unconscious biases that we suffer from, and specifically how they affect our ability to conduct interviews and analyse the data.

I think one of the primary cognitive biases we have is confirmation bias. This is when we use the data to validate pre-existing ideas that we might have about the users or their interaction with a product.

How do you think we can guard against this, as practitioners?

You know, I think it's really difficult. All of these biases that we talk about, a lot of times they happen unconsciously, right? So we are not necessarily in control of them, and we may not even realise we are doing it. So it's tough, sometimes, to guard against some of these things that we might do that actually aren't great for the work we are trying to do for our products and our clients.

But in terms of cognitive bias, one of the things that you can do is to seek out situations where you will find out that you're wrong, because confirmation bias has to do with the fact that we tend to only listen to things that match what we already believe.

And so the antidote, so to speak, is to purposely set out to find out that you're wrong. Just have this mindset of: I know that there are some things I am not seeing clearly. I know that I am doing this confirmation bias thing, so I am going to see what I can do about it. I'm going to throw something out there and see if I can find somebody who will say no, that's actually not the way it is. I am going to go and look for some data that does not match what I think should be true. I think that really helps people put themselves in the right mindset.

We run some little experiments in our workshops, and the best way around this is to try and get a “no” answer instead of a “yes” answer. Think up a question. Challenge yourself. Can you think of something you can ask or do where the answer coming back is no, you are wrong? If you force yourself to do that, it's a way of getting over it.

I suppose another way is to kind of be agnostic and not tied to any kind of specific answer either way?

Yes definitely. Again, if you just go with that mindset of I don't know what the answer is, you know, then you are less likely to fall into a confirmation bias.

Man with frame

The Framing Effect

And I suppose related to this is the framing effect? That is when we frame our interview questions in such a way as to prepare the user for a specific answer or a specific focus, and in doing so we perhaps lose the spontaneous things that might arise. I think asking open questions might be one way of mitigating this, but do you have any other thoughts on how we can avoid biased framing?

I think what's interesting is that if you asked most people who do a fair amount of interviewing, “do you ask questions that have framing bias?”, they would go: “Of course not! You know I wouldn't do that!”

And yet we all do. Why? Because again, most of this happens unconsciously. So yes, open ended questions are good.

What I find really useful, when I'm working on an interview protocol for a product or with a client, is to purposely have a conversation with my client or the team about the things they assume are true. And they say, well you know, our customers really like the free overnight shipping, we are known for that. And we know that they like that we are very customer friendly.

And so it's like put all these assumptions onto the table, and then I will say ok, well then we are going to purposely ask questions to see if that's true.

And they are like well no, no, no. Of course that's true. You don't even have to ask!

It's like, no, we're going to ask “what's really important to you?” In a case like that, we would actually ask questions like: how would you know that? What makes you feel like this company cares about you as a customer? Rather than just assuming, well, of course we are considered customer friendly. What does that exactly mean?

And then the answers we got back from that interview were actually quite difficult for my client to hear, because it did not match what they thought.

So to get away from that framing bias, write down what your assumptions are that you believe are true, and then ask questions to find out if indeed that really is true.

Yes, a colleague of mine has a great technique that he uses (and I've also been introduced to this technique with stakeholders). We draw up three columns: what we know, what we think we know, and what we don't know.

And then we put the data into a fourth column, so that we can work through it with stakeholders. Because quite often as researchers, working with clients or in a team, we are bombarded with the assumptions that the organisation might have, and that can also influence us in subtle ways.

Yes, it can. And I think the most dangerous assumptions are the ones that are so ingrained that people don't think they're assumptions. They think they're truths. Well, they may be, but they may not be.

I think that's one of the values of having someone from the outside work on this. Your whole team has been there a while, and they believe all these things are true, and then someone comes in from the outside and says: really? Why do you think that's true? Do you have recent data showing that it's true? Or is that data 10 years old?

And in the meantime, you have a totally different audience, and you haven't realised that that isn't true anymore, or you don't even know about that anymore. So I think that someone from the outside can ask these questions. And the questions really need to be asked. All the assumptions really need to be tested.

Yes I agree with you, and I think sometimes, coming in as an impartial new member of the team, we can become incredibly unpopular quite quickly!

Yes we can cause a lot of trouble!

Brainstorming group

Social Desirability Bias

I believe that we have to be very strong. I believe that we have to be advocates, and detectives, and psychologists and so many different things. But we also have something called social desirability bias, where we really want to fit in, especially if we are brand new, or we are not so confident, or experienced.

How do we deal with the social desirability bias when we are confronted with a group of hostile stakeholders who don't like the data that we are giving them?

The best way I think to deal with this is to take the focus away from you and your opinions, and really move it towards the science and the research. So it's not that I’m saying that this is true or that is true, or I believe this or I believe that. I'm just saying, well what does the data show?

There is that term, “the data sets you free”. And it really does. So what I do is I say hey look, it's up to you what you do with this. I am not usually the owner of the product, I am not in charge of the business, you can decide to do something with this information, do nothing with the information, or think about it. You don't have to take any action on it.

My role is to just point out to you what the data is and what it's telling you, and what your options are. I think that takes it away from the “she thinks this, she thinks that”. My role is I'm going to help you get the data, and you are going to decide what decisions to make from that data.

One of the things I often say is “evidence is your best friend”. I think you raised a really interesting point about this. It's not my opinion. This is the evidence that I am presenting to you, and as a researcher, I'm detached from the outcome.

Yes I think it's very important. I teach people to change their sentence structure. Instead of saying “I think that”, it's like well, “the research shows that”, or “your data that you collected is this”. So it's not about me at all.


My likability, how much people agree with me, conflict avoidance - none of that has anything to do with it. I'm a scientist. My role is to give you the most valid, reliable data I can get. If you want expert opinion on that afterwards, I can always throw that in. But that's up to you.

Woman with crystal ball

Predicting Future Behaviour

That's really good advice. Do you think human beings are bad at predicting future behaviour?

Yes! They are very bad at predicting their own behaviour - and the behaviour of others. They are bad at predicting how they will feel about something. We have a tendency to think that whatever we feel or believe now is what we're going to feel, believe, or do in the future. But that's not always the case. We tend to underestimate some things and overestimate other things. So in general we are pretty bad at that.

It's so interesting. That's why you have the data and the actuarial tables and all of that, and people go well no, I wouldn't do that. And it's like well, most people do it, and so you probably will do it even though you don't think you will.

I never ask people to predict future behaviour. So one of the ways that I try, by stealth, to get a sense of how they might do things in the future, is I ask them “how have you done that in the past?”

But, then we have also got another bias, which is hindsight bias. And a kind of cognitive dissonance, where people might exaggerate or misinterpret, or put a layer of something over past experiences or past actions. What are your thoughts about that?

I absolutely agree with you. We are starting to get neural measurements, and I think that helps us understand what's really going on. If you ask people why they did something, or what they think they will do, or why they think they'll do it, you really have to understand that the answers they are giving you are guesses, and probably not very accurate.

Even if you ask someone what they did, memory is not very accurate either, right? Our memories are fallible. So, really, you can see what they do; you can measure current behaviour.

They did make that purchase, or they didn't. They did push on that button or they didn't. That's pretty accurate. And that's about it.

So you can listen to them talk about why they did something or why they might or they might not, but I think you have to really take all of that with a lot of skepticism.

Man with data taking notes

Clustering Bias

Yes, that's a very good point. When we are doing our analysis of our data, there's something called clustering bias, where we start to see patterns where they may or may not exist. How do you think we could deal with something like that?

Brains just automatically see a pattern where there isn't really a pattern. We are really seeking that out. That's why people see faces in toast and in the clouds. It's like why are you seeing that? Oh it's because your brain really wants to make patterns. And it's very adaptive, but we are pattern makers.

In this case, it really can cause problems. So what do you have to do? It's so interesting being a researcher - you know the discipline that's required, and that's why you have methods and processes in place for you to follow.

There are rules that you follow when you do research like this. One rule is that, part way through the study, you do not come up with any conclusions, you do not analyse data, and you do not change the protocol. You do not.

If you think you see a pattern (and it's so interesting if you are a researcher and you are working with people who haven't done this before), you do your interviews or your study with the first three or four people.

And someone else who is part of the team or is a client thinks they see a pattern. And they actually want to run and change the prototype. They want to change the protocol of the study, based on what they are seeing.

So what I do is make us all agree ahead of time. I'm in the middle of doing this right now - this morning, before this meeting, I was working on a test protocol for a client. This is what we are going to do. We all agree: this is what we are doing, this is what we are testing, and these are the people we are testing with. Everyone has to agree to that upfront. So it doesn't matter what happens.

You know, we run a pilot to make sure there isn't something broken, or that our instructions aren't confusing, and that kind of thing. But once the pilot is run and we've made any changes, we don't make any changes after that until we are done with the data collection.

So that's one thing. You must follow those rules. You don't change the protocol. You don't change the study because four people did this or did that. You just have to keep telling yourself, and the whole rest of the team, as you collect the data:

You collect the data. You put it aside. And when you start on the next person, it's like a clean slate. You just have to keep telling yourself you know this is one data point. This is one data point. And I am not going to do any of this patterning until we are all done.

And then the other thing you do is make sure more than one person analyses the data. Because if you are imposing your own pattern, you may not realise it. But if you've got two or three people analysing the same data, we may all be applying different patterns and come up with different conclusions. Then we can look at each other and say, well, what data did you base that on?

So then we can go and look, and we can realise: oh, I guess not everybody really did that. It was just three people who did that - the first three - and I liked the pattern, and I kind of ignored the other 10 people. So if you have other people, they probably won't have the same pattern as you do. Therefore you'll catch each other's inaccurate patterns.

Analysing data

A lot of the time, researchers work in an Agile team, in sprints, where you have to do your research and your analysis and everything very, very quickly - sometimes a week, sometimes two weeks. It's quite difficult to have the space and the time to do the analysis.

I wonder, when you are in that situation, how much rigour you can apply, to ensure that you aren't inadvertently seeing patterns in small groups? Then they want to feed it into the development team, and they want to feed it into the product, and then it's on to the next thing. Do you suggest that we only test very small parts of a product in a sprint? And only that?

Yes, I think it can be a real problem. The only thing that tends to mitigate it is that you are typically doing a lot of testing. You do another little test, and then another, and then another. So if you have misdiagnosed something over here, you will still come back to it, because that problem will still be there. You might catch it at another time.

If you were doing one test - and just one test - there is stuff you could get wrong and there is stuff you could miss. But if you are doing a lot of testing, chances are if there is a big issue, it will show up again. So that is one thing that mitigates that problem.

I think you have to focus on small things at a time. You should be willing to retest, because things have changed. Well, we tested the navigation a month ago. Yes, but since then, things have changed. You may not realise that one change over here could change something else. Retest. Do some of the same tasks that you tested before. Retest later, and see if anything has been broken in the intervening time.

I think you need to think small and be willing to be redundant.

Diverse group of people

User Diversity

Another thing that I have come across in my travels as a researcher is a lack of diversity among users. For example, I was in an organisation that had a particular group of users who were highly engaged with the organisation. When I started asking for people to recruit, they put their “best kids” forward first.

I had to struggle to find the disenfranchised users, or the users with low digital confidence. In the end, I identified them through age, because often people over a certain age aren't digitally confident. This was borne out in the subsequent research.

I think that's another issue that feeds into clustering: if we are only speaking to the polished, shiny, happy version of the user, we are not going to get authentic feedback across the board.

I think this really has to do with the whole mindset around testing. What I tell my clients when we start on a testing project together, is that a good test, a successful test, is not a test where you found no problems, and everybody loves the product. That's not a successful test. To me, that's a failure.

A successful test is we found problems. Because there are always problems. We have found them. We know what's causing them. We even know what to do about them now. That's a successful test.

So you really need to change your mindset and go in knowing there's stuff that's going to happen here that's not what we expect, and probably not what some people want. I don't have a “want/don't want”, because I'm the outside, unbiased researcher. But the client definitely has “wants/don't wants”.

But we are probably going to find stuff that makes people unhappy, it's not what they were hoping for. It's not what they expected. That is a good thing. That's why we are doing the test. So I think if you go in with that mindset, that can really help you.

Well, we want to find the problems, so why would we only test with the people we know love us and love the product and the experience?

In design thinking, they talk about making sure that you interview the extremes. It's interesting: in usability testing, traditionally you always talk about the representative user. Of course that's good, but in design thinking, you also want to go and talk to the people at the extremes.

For example, let's say you were designing a new kitchen tool - a hand mixer or something. Your target audience is people between 25 and 40 who don't cook a lot, or who are thinking about doing more cooking at home. Well, absolutely - go interview them and talk to them.

Children cooking

But now go interview a chef, who is an expert. Go interview a 70-year-old woman who has cooked for years. Go interview a 70-year-old woman who has never cooked. Go try this out with an eight-year-old kid - as long as it's not a sharp knife!

See what happens when they try to use it. It's an interesting idea, because we usually test with representative users. But when you test with those extremes, you are going to get insights. Things are going to come to you. We know they are not the typical audience, but it's interesting.

If the chef can't figure out how to do such and such, if the 8 year old kid can't figure out how to do this, then maybe that's something we need to look at.

Whether we are a designer or a product manager or even a researcher, we can never anticipate all the use cases. I've had some absolutely astonishing findings.

I was working on a product that was to help people with high cholesterol and high blood pressure. One of the people that I recruited was very much in that user group. She had suffered a mild stroke. She had some mild cognitive impairment. She had to register on this website. It was a very long form.

When she got to the fold, at what she thought was the last question, she just pressed return and couldn't progress through the form. We let her try to recover from the error, and we gave it some time. In the end, we had to intervene.

What we discovered was that she had never used a scrollbar before! It would never have occurred to me that this might be a use case. I think testing with outliers, and really including that whole spectrum in your group of users to speak to, is a brilliant idea.

So do you think there are any other biases that we might need to be aware of as researchers?

Man looking at designs

The IKEA Effect

My goodness, there are so many of them! I think I read somewhere that there are something like 121 cognitive biases. One of the other ones I would want to mention is what's called the IKEA effect. Do you know that one?

No, I don't.

It's based on the IKEA store. The idea behind IKEA stores is that you go and buy the furniture in a box and put it together yourself. There's been a lot of research done on this: when someone is involved in building something, or testing something, if they put time and energy into it, they become invested in it. They like it more. They are more attached to it.

That sounds like a wonderful thing, and it is, if you are trying to get someone to buy and spend money on something. But say you are a researcher, and you have done the design - you are the designer - and now you're testing your own design.

Now we have a problem, because you've invested. We've all seen this when we come in to test a design that isn't ours, that somebody else made: they just have blinders on. You look at the product and you go, well, I'm not sure about that, maybe we should change that. And they all go, oh no, that's really good. Because they designed it.

So if you are testing your own design, or even if you worked on a piece of it, you have to be very careful. You are going to be inordinately attached to the design.

When I've worked with clients on how to set up research in the organisation, I suggest that they have researchers who are not designers, or that the researchers on a project are not the designers on that project.

Now, it's certainly possible to test your own design. It's possible. But if you're going to do that, then you really have to understand that the IKEA effect is operating. You have to be especially careful, and willing to hear criticism of what you've designed.

Yes, and to be challenged, perhaps, by your team. I will point my readers towards other websites where they can see the full list of cognitive biases.

Thank you very much for your time.


Resources

More reading on the subject.

If you’d like to follow Dr Weinschenk on social media, or read her books, here are the links.


