user research – UX Mastery
https://uxmastery.com
The online learning community for human-centred designers

How to Write Effective Qualitative Interview Questions
https://uxmastery.com/how-to-write-effective-qualitative-interview-questions/
Tue, 21 Dec 2021

Qualitative interviewing is an effective technique to quickly understand more about a target user group. It is a key skill that any aspiring user researcher should develop. It is important to carefully craft the questions to ensure the sessions run efficiently and get the desired information. This article outlines best practice tips on creating effective session guides, ensuring your questions produce great results.

Don’t Ask Leading Questions

A leading question steers the respondent toward a desired answer by implying that there is a correct one. People tend to give socially desirable answers, so a question that guides them will likely elicit the response they believe you want to hear. Leading questions have their place in persuasion, but not when you are trying to uncover new information or understand an audience: they reduce the objectivity of the session and, with it, the reliability of the results.

Example:
Leading: ‘Why would you prefer to use our product?’
Better: ‘What are your thoughts about using our product?’

The leading version implies that the respondent prefers the product and asks why. The respondent may list a bunch of reasons they like the product but leave out crucial information about where they believe it could improve. Asking for their opinions and thoughts gives them a platform to discuss the product freely.

Example:
Leading: Would you prefer to use the product to improve efficiencies or to gain an overview?
Better: Why might you use this product?

In this example, the interviewer offers two reasons someone might use a product, when those may simply be the only two reasons the interviewer has considered. Asking why they might use the product achieves the same goal while allowing the respondent to raise other options.

To avoid leading questions, act as if you know nothing about the topic, and note down what you would ask if you had no information at all. Keep the questions simple, neutral and free of words that carry connotations or emotional weight. It also helps to have an independent observer review your questions, as it is easier for them to stay unbiased on the matter.

Mix Behavioural and Attitudinal Questions

People often hold beliefs that do not match their behaviour. Attitudinal questions are used to understand a participant’s opinions and motivations; behavioural questions are used to find out how they actually do something. Using a mixture of the two uncovers not only what a person does, but also what they think about their actions.

Example:
Attitudinal: How often should you brush your teeth?
Behavioural: How many times did you brush your teeth last week?

Try to keep all behavioural questions about the user’s past, as future behaviours are influenced by opinions and attitudes. It is best practice to repeat questions from a different angle. Don’t be afraid of users repeating themselves or going over a topic multiple times.

Ask Open-Ended Questions Instead of Closed Questions

Open-ended questions require more than a single word to answer; closed questions can be answered with a yes, a no, or one word. Open-ended questions are used to find out people’s goals, motivations and pain points, and they provide an opportunity for the participant to speak freely on the topic.

Example:
Yes/No: Do you like coffee?
Open: What are your thoughts on coffee?

Closed questions should be avoided unless you want to clarify a point or gain more context about the user’s situation. Yes/no questions close down conversations and are essentially quantitative. The following examples are both fine to use in an interview, as they put other details into perspective.

Context: Do you drink coffee?
Clarify: You mentioned you drink coffee, correct?

When creating your session guide, try to stick with ‘how’, ‘why’, ‘what’, ‘when’ and ‘where’ questions.

Don’t Use Double-Barreled Questions

Sometimes interviewers get excited and want to ask multiple things at once. Double-barreled questions touch on more than one topic. This can be overwhelming to answer, and respondents may either try to answer both at once or answer only one part of the question. If you want to ask something on multiple topics, it is best to split them into two different questions.

Example:
Double-barreled: What do you like about coffee and new coffee products?
Better: What do you like about coffee products?

It is normal in casual conversation to ask questions in such a manner. Interviewing is best when the questions are short and to the point, focusing on one topic.

Differentiate Between Quantitative and Qualitative Questions

Quantitative and qualitative questions both have their own strengths and weaknesses. Quantitative questions are typically reserved for surveys but can be used in interviewing to add some context and allow the interviewer to ask more follow-up questions. They mostly uncover ‘who’ and ‘what’. Qualitative questions will provide detailed information on the topic of interest, uncovering the ‘why’ and ‘how’.

Examples of quantitative questions:

  • Numerical answers: How many coffees do you drink a day?
  • Preferences: What type of coffee drink do you prefer?
  • Single word answers: What brand of coffee do you drink?

The quantitative nature of these questions is not immediately obvious; the giveaway is the low complexity of the data they gather. Ask them and you will get a straightforward answer. The issue is that the responses are not statistically valid and require further investigation. Your time in an in-depth one-on-one session is better spent asking qualitative questions.

Examples of qualitative questions:

  • Recount your morning routine.
  • Why do you prefer one brand over another?
  • Why do you drink coffee every day?

Shifting to why and how people do things, outlining goals, motivations, pain points and delights gives a much more in-depth perspective. These insights can be validated later through other techniques, but interviewing is the quickest and easiest way to gather them.

Wrap Up

For qualitative interviewing, there are few hard-and-fast best practices; each interviewer has their own way of forming questions and gathering information. The tips above are there to guide you, not rules that can never be broken. I hope they help you elevate your interviewing process and gather better insights.

Wireframes Are Bad… Don’t Use Them
https://uxmastery.com/wireframes-are-bad-dont-use-them/
Thu, 02 Dec 2021

I failed in using wireframes; that’s why I say that they are bad. I know so many beginners and intermediate UX designers use wireframes in the early phases of the design process, especially in research and usability testing. I used to use them this way, but let me tell you why I don’t use them in that way anymore.

What Is a Wireframe?

Before digging deeper into this subject, let me simplify what a wireframe is. A wireframe is a skeleton of a digital product’s design; you can think of it as the product’s blueprint. It consists of lines and shapes, each representing an element, a hierarchy or a structure.

The main three elements of a wireframe are:

1. The Line

This element could represent a frame, a border, or a separator.

2. The Image

This element could represent an image, an icon, or other graphics.

3. The Solid

This element represents either a block or a line of text.

Using all these elements in a design can produce something like this:

Now remove these hints from the design, hand it to a user, and ask them to use it. Can you imagine that?

Why Are Wireframes Bad?

I remember one time I was working on a mobile app product for weekly/monthly healthy food ordering subscriptions. The app allowed a user to subscribe and get healthy meals delivered daily. It allowed the user to choose the daily meals and the delivery time. And it also provided health tips and some workout exercises to keep living a healthy life. Here’s a concept that we tried testing out.

Wireframe to evaluate options for placement of icons. Option 1 was at the top left, and option 2 was at the bottom center.

When I tested this concept as a wireframe with the first group of users, I asked each one this question “Suppose your daily meals will be delivered daily at 2 PM and you want to change this time to be at 3 PM, how can you do this?”

I was shocked by the result. 4 out of 5 users tapped on icon no.2, which was supposed to be for the health tips, while just one user tapped on icon no. 1, which was for the menu and settings. This meant that 80% of users expected to change delivery times using the middle button at the bottom. However, I didn’t rely on that result.

To make this clear enough, I created a simple test and repeated the same scenario, this time applying the UI and giving a finished prototype to another group of users. This time, all five users disliked having the change-delivery-time feature in position no. 2. The common reason was that they wouldn’t need it daily; instead, they wanted the health tips placed there, since they might use that feature more than once a day.


Wireframes come in two forms: digital mockups and paper sketches. The main reason wireframes were invented is that they are cheap and fast to create, but this speed doesn’t come free. There are hidden costs.

1. Cost of Educating

Although wireframes are fast to create and seem like a time-saver, they consume a lot of time in educating people, especially in user testing. The time you save producing them, you pay back double in educating users about them.

2. Misunderstood

What’s the difference between these two elements? Are they both images? Are they both icons? Are they clickable? What should they represent? Too many questions for the user to process may lead them to make wrong decisions or distract them from the core value.

3. Miss the Whole Experience

Forcing users to guess what each element represents will not produce accurate research results. When you test with many users, each will interpret the shapes in their own way, which tempts you to correct their understanding and guide them back onto the right path; leading the user is exactly what you must not do in research and testing.

Feeling and living the whole experience is way better. Making the user live within a semi-real product, feel the interactions, sense the animations, and deal with colors and typography will lead to better and more accurate results.

So, Are Wireframes Useless?

Wireframes are plain, too neutral to be usable by actual users, but this doesn’t mean they are useless. Here is how and when to use wireframes:

1. Guiding You in Your Process

Design wireframes for yourself, to make it easier to structure the product you are designing. By creating wireframes you can pour everything in your head onto a canvas or paper quickly, organise your thoughts, see clearly where you are heading and, most importantly, iterate faster, deciding where each element belongs.

2. Brainstorming and Generating Ideas With Your Peers

Product owners, UI designers, UX designers, product managers, and developers—all of them can understand wireframes well; they may even add to them, generate ideas, and set a clear direction for the best structure.

3. Flow Design

Because wireframes are plain, you can use them as flow demonstrations with technical colleagues and developers. Used this way, they work better than traditional user-flow symbols and shapes.

4. Business Owners (Carefully)

Business owners are like users: they can easily misunderstand the wireframe and are time-consuming to educate. But the difference here is that the business owner is just one person, the time cost won’t be as much as educating many users, and in the end, they’re the business owner—you have to keep them involved and in the loop throughout the design process.

Conclusion

Actual users are ordinary human beings. They are not as deep into the technology or the product as you. You must do your best to talk to them in their language, not yours, so it’s better to do your research and usability testing using realistic prototypes instead of wireframes.

Lastly, creating and using wireframes is beneficial if created for people like you within your production environment.

Choosing the Right UX Research Method
https://uxmastery.com/choosing-right-ux-research-method/
Fri, 26 Jan 2018

As more and more organisations become focused on creating great experiences, more teams are being tasked with conducting research to inform and validate user experience objectives.

UX research can be extremely helpful in crafting a product strategy and ensuring that the solutions built fit users’ needs, but it can be hard to know how to get started.  This article will show you how to set your research objectives and choose the method so that you can uncover the information you need.

When to do research

The first thing to know is that there is never a bad time to do research. While there are many models and complicated diagrams to describe how products get built, essentially, you’re always in one of three core phases: conceptualising something brand new, in the middle of designing and/or building something, or assessing something that’s already been built.

There’s plenty to learn in each of those phases. If you’re just starting out, you need to focus on understanding your potential users and their context and needs so that you can understand your best opportunities to serve them. In other words, you’re trying to figure out what problems to solve and for whom. This is often called generative or formative research.

Research can add value at any stage, whether that’s conceptualising, designing or refining.

Once you’re actively building something, you’ll shift your focus to analysing the solutions you’re coming up with and making sure they address the needs of your users. You’ll want to assess both the conceptual fit and the quality of specific interactions. We usually call this evaluative research.

When you have a live product or service, you’ll want to continue to assess how well you’re serving people’s needs, but you’ll also want to use research to discover how people change and how you can continue to provide value. At this point, you’ll be doing a mix of the generative work typical of the conceptual phase and evaluative work.

There is no cut-and-dried guide of exactly what methods to employ when, but there should never be a time that you can’t find an open question to investigate.

Determine your specific research objectives

At any given time, your team might have dozens of open questions that you could explore. I recommend keeping a master list of outstanding open questions to keep track of possible research activities, but focusing on answering just one open question at a time. The core goal of a study will determine which method you ultimately use.

If you need help coming up with research goals, consider things like:

  • the stage of the project you’re in
  • what information you already know about your users, their context, and needs
  • what your business goals are
  • what solutions already exist or have been proposed
  • or where you think there are existing issues.

The questions might be large and very open, like “who are our users?” or more targeted things like “who uses feature x most?” or “what colour should this button be?” Those are all valid things to explore, but require totally different research methods, so it’s good to be explicit.

Once you’ve identified open questions, you and the team can prioritise which things would be riskiest to get wrong, and therefore what you should investigate first. This might be affected by what project phase you’re in or what is currently going on in the team. For instance, if you’re in the conceptual phase of a new app and don’t yet have a clear understanding of your potential users’ daily workflows, you’d want to prioritise that before assessing any particular solutions.

From your general list of open questions, specify individual objectives to investigate. For instance, rather than saying that you want to assess the usability of an entire onboarding workflow, you might break down the open questions into individual items, like, “Can visitors find the pricing page?” and “Do potential customers understand the pricing tiers?”

You can usually combine multiple goals into a single round of research, but only if the methods align. For instance, you could explore many different hypotheses about a proposed solution in a single usability test session. Know that you’ll need to do several rounds of different types of research to get everything answered and that is totally OK.

Looking at data types

After determining your research goal, it’s time to start looking at the kind of information you need to answer your questions.

There are two main types of data: quantitative and qualitative.

Quantitative data

Quantitative data captures specific counts, like how many times a link was clicked or what percentage of people completed a step. It is unambiguous in that you can’t argue with what was measured; however, you need to understand the context to interpret the results.

Quantitative data helps us understand questions like: how much, how many and how often?

For instance, you could measure how frequently an item is purchased. The number of sales is unchangeable and unambiguous, but whether 100 sales is good or bad depends on a lot of things. Quantitative research helps us understand what’s happening and questions like: how much, how many, how often. It tends to need a large sample size so that you can feel confident about your results.

Common UX research methods that can provide quantitative data are surveys, A/B or multivariate tests, click tests, eye tracking studies, and card sorts.
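To make the “large sample size” point concrete, here is a minimal Python sketch using the standard sample-size formula for estimating a proportion. The formula and the numbers are illustrative, not something the article prescribes:

```python
import math

def sample_size_for_proportion(margin_of_error, confidence_z=1.96, p=0.5):
    """Estimate how many respondents are needed to measure a proportion.

    Uses the standard formula n = z^2 * p * (1 - p) / e^2, where z is the
    z-score for the desired confidence level (1.96 for 95%) and p = 0.5 is
    the worst case (maximum variance).
    """
    n = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n)

# To be 95% confident within +/- 5 percentage points:
print(sample_size_for_proportion(0.05))  # 385
# A looser +/- 10 point margin needs far fewer people:
print(sample_size_for_proportion(0.10))  # 97
```

Even a modest margin of error calls for hundreds of responses, which is why counts from a handful of interview participants should never be treated as statistically meaningful.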

Qualitative data

Qualitative data is basically every other sort of information that you can collect but not necessarily measure. These pieces of information tend to provide descriptions and contexts, and are often used to describe why things are happening.

Qualitative data needs to be interpreted by the researcher and the team and doesn’t have a precise, indisputable outcome. For instance, you might hear people talk about valuing certain traits and note that as a key takeaway, but you can’t numerically measure or compare different participants’ values. You also don’t need to include nearly as many sessions or participants in a qualitative study.

Common UX research methods that can provide qualitative data are usability tests, interviews, diary studies, focus groups, and participatory design sessions.

Some methods can produce multiple types of data. For instance, in a usability study, you might measure things like how long it took someone to complete a task, which is quantitative data, but also make observations about what frustrated them, which is qualitative data. In general, quantitative data will help you understand what is going on, and qualitative data will give you more context about why things are happening and how to move forward or serve better.

Behavioural vs attitudinal data

There is also a distinction between the types of research where you observe people directly to see what they do, and the type where you ask for people’s opinions.

Any direct-observation method is known as behavioural research. Ethnographic studies, usability tests, A/B tests, and eye tracking are all examples of methods that measure actions. Behavioural research is often thought of as the holy grail of UX research, because we know that people are exceptionally bad at predicting and accurately representing their own behaviour. Direct observation can give you the most authentic sense of what people really do and where they get stuck.

By contrast, attitudinal research like surveys, interviews, and focus groups asks for self-reported information from participants. These methods can be helpful for understanding stated beliefs, expectations, and perceptions. For instance, you might interview users and find that they all wish they could integrate your tool with another tool they use, which isn’t necessarily an insight you’d glean from observing them perform tasks in your tool.

It’s also common to both observe behaviour and ask for self-reported feedback within a single session, meaning that you can get both sorts of data, which is likely to be useful regardless of your open question.

Other considerations

Even after you’ve chosen a specific research method, there are a few more things you may need to consider when planning your research methods.

Where to conduct

It’s often ideal to be able to perform research in the context of how a person normally would use your product, so you can see how your product fits into their life and observe things that might affect their usage, like interruptions or specific conditions.

For instance, if you’re working on a traffic prediction application, it might be really important to have people test the app while on their commute at rush hour rather than sitting in a lab in the middle of the day. I recently did some work for employees of a cruise line, and there would have been no way to know how the app really behaved until we were out at sea with satellite internet and rolling waves!

Context for research is important. If you can, get as close as possible to a real scenario of when someone would use your product.

You might have the opportunity to bring someone to a lab setting, meet them in a neutral location, or even intercept them in a public setting, like a coffee shop.

You may also decide to conduct sessions remotely, meaning that you and the participant are not in the same location. This can be especially useful if you need to reach a broad set of users and don’t have travel budget or have an especially quick turnaround time.

There is no absolute right or wrong answer about where the sessions should occur, but it’s important to think through how the location might affect the quality of your research and adjust as much as you can.

Moderation

Regardless of where the session takes place, many methods are traditionally moderated, meaning that a researcher is present during the session to lead the conversation, set tasks, and dig deeper into interesting conversation points. You tend to get the richest, deepest data from moderated studies, but they can be time-consuming and require a good deal of practice to do effectively.

You can also collect data when you aren’t present, which is known as unmoderated research. There are traditional unmoderated methods like surveys, and variations of traditional methods, like usability tests, where you set tasks for users to perform on their own and ask them to record their screen and voice.

Unmoderated research takes a bit more careful planning because you need to be especially clear and conscious of asking neutral questions, but you can often conduct it faster, cheaper, and with a broader audience than traditionally moderated methods. Whenever you do unmoderated research, I strongly suggest doing a pilot round and getting feedback from teammates to ensure that instructions are clear.

Research methods

Once you’ve thought through what stage of the product you’re in, what your key research goals are, what kind of data you need to collect to answer your questions, and other considerations, you can pinpoint a method that will serve your needs. I’ll go through a list of common research methods and their most common usages.

Usability tests: consist of asking a participant to conduct common tasks within a system or prototype and share their thoughts as they do so. A researcher often observes and asks follow-up questions.

Common usages: Evaluating how well a solution works and identifying areas to improve.

UX interview: a conversation between a researcher and a participant, where the researcher is usually looking to dig deep into a particular topic. The participant can be a potential end user, a business stakeholder or a teammate.

Common usages: Learning basics of people’s needs, wants, areas of concern, pain points, motivations, and initial reactions.

Focus groups: similar to interviews, but occur with multiple participants and one researcher. Moderators need to be aware of potential group dynamics dominating the conversation, and these sessions tend to include more divergent and convergent activities to draw out each individual’s viewpoints.

Common usages: Similar to interviews in learning basics of people’s needs, wants, areas of concern, pain points, motivations, and initial reactions. May also be used to understand social dynamics of a group.

Surveys: lists of questions that can be used to gather self-reported, attitudinal data of almost any kind.

Common usages: Defining or verifying how widespread an attitude or outlook is among a larger group.

Diary study: a longitudinal method that asks participants to document their activities, interactions or attitudes over a set period of time. For instance, you might ask someone to answer three questions about the apps they use while they commute every day.

Common usages: Understanding the details of how people use something in the context of their real life.

Card sorts: a way to help you see how people group and categorise information. You can either provide existing categories and have users sort the elements into those groupings, or participants can create their own.

Common usages: Help inform information architecture and navigation structures.
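One common way to analyse card sort results is a co-occurrence count: how often each pair of cards ends up in the same group across participants. This small Python sketch, with entirely hypothetical card names and results, shows the idea:

```python
from collections import Counter
from itertools import combinations

def co_occurrence(sorts):
    """Count how often each pair of cards is grouped together.

    `sorts` is a list of participants' results; each result is a list of
    groups, and each group is a list of card names. Pairs are stored in
    sorted order so (a, b) and (b, a) count as the same pair.
    """
    counts = Counter()
    for groups in sorts:
        for group in groups:
            for pair in combinations(sorted(group), 2):
                counts[pair] += 1
    return counts

# Three hypothetical participants sorting four cards:
results = [
    [["pricing", "plans"], ["blog", "help"]],
    [["pricing", "plans", "help"], ["blog"]],
    [["pricing", "blog"], ["plans", "help"]],
]
counts = co_occurrence(results)
print(counts[("plans", "pricing")])  # 2: grouped together by two of three people
```

Pairs with high counts are strong candidates to sit together in the navigation; pairs that are rarely grouped probably belong in different sections.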

Tree tests: the opposite of card sorts, wherein you provide participants with a proposed structure and ask them to find individual elements within the structure.

Common usages: Help assess a proposed navigation and information architecture structure.

A/B testing: providing different solutions to audiences and measuring their actions to see which better meets your goals.

Common usages: Assess which of two solutions performs better.
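Deciding which variant “performs better” usually means checking whether the difference in conversion rates is statistically significant. A standard way to do this (not something the article itself prescribes) is a two-proportion z-test; here is a minimal Python sketch with made-up numbers:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates.

    conv_a / n_a are conversions and visitors for variant A;
    conv_b / n_b for variant B. Uses the pooled-proportion standard error.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 120/1000 conversions for A vs 150/1000 for B.
z = two_proportion_z(120, 1000, 150, 1000)
print(round(z, 2))  # 1.96; |z| > 1.96 is significant at the 5% level
```

In practice you would reach for a statistics library rather than hand-rolling this, but the sketch shows why A/B tests need sizeable traffic: small differences only clear the significance bar with many visitors per variant.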

Christian Rohrer and Susan Farrell also have great cheat sheets of best times to employ different UX research methods.

Wrapping up

To get the most out of UX research, you need to consider your project stage, objectives, the type of data that will answer your questions, and where you want to conduct your research.

As with most things in UX, there is no one right answer for every situation, but after reading this article you’re well on your way to successfully conducting UX research.

Want to dive deeper into UX research methods? Try Amanda’s latest course, Recruiting and Screening UX Research Participants on Skillshare with 2 months’ free access.  

Getting Started with Popular Guerrilla UX Research Methods
https://uxmastery.com/popular-guerrilla-ux-research-methods/
Fri, 03 Nov 2017

In my last article, I talked about how you can “guerrilla-ise” traditional UX research methods to fit into a short timeline, and when it makes the most sense to use them.

This time, I’ll walk you through some of the most popular guerrilla UX research methods: live intercepts, remote and unmoderated studies, and using low-fidelity prototypes.

I’ll cover pros, cons and tips to make sure you get the most from your guerrilla research sessions.

Conducting research in public

Often the go-to guerrilla technique is to skip the formal participant recruitment process and ask members of the public to take part in your research sessions. Live intercepts are often used as shortened versions of usability tests or interviews.

Getting started

Setting up is easy—all you need is a public space where you can start asking people for a few minutes to give you feedback. A cafe or shopping centre usually works well. 

This is a great way to get lots of feedback quickly, but approaching people takes a little courage and getting used to. 

I find it helps to put up a sign that publicises the incentive you’re offering, and if possible, identifying information like a company logo. This small bit of credibility makes people feel more comfortable.

Make sure you have a script prepared for approaching people. You don’t need to stick to it word for word every time, but be sure to mention where you work or who your client is, what your goal is, the time commitment, and the compensation.

Try something like:

Hi, I’m [firstname] and I’m working for [x company] today. We’re trying to get some feedback on [our new feature]. If you have about [x minutes] to chat, I can offer you a [gift card/incentive].

Be sure to be friendly, but not pushy. Give people the chance to opt out or come back later. Pro tip: I always take a piece of paper with time slots printed so that people can sign up for a later time.  

The location you choose has a major impact on how many people you talk to and the quality of your results. Here are some tips for picking a good spot:

  • Pick a public place where there will be a high volume of people and make sure you get permission to be there. Aim to be visible but not in the way. A table next to the entrance works well.
  • Try to pick a place that you think your target audience will be. For instance, if you’re interested in talking to lawyers, pick a coffee shop near a big law office.
  • Look for stable wi-fi and plentiful wall plugs.
  • Regardless of where you choose, stake out the location ahead of the research session so you can plan accordingly.

A few limitations

There’s no doubt that intercepting people in public is a great way to get a high volume of participants quickly. Talking to the general population, however, is best reserved for situations when you have a product or service that doesn’t require specific knowledge, contexts, or outlooks.

If you’re doing a usability test, you could argue that whatever you build should be easy enough for anyone to figure out, so you can still get feedback. Just be aware that you may miss out on valuable insights that are specific to your target audience.

Let's say you're working on a piece of tax software. The risk is that you end up talking to someone whose spouse handles all the finances, or that you miss a labelling error that only tax accountants would know to report.

To avoid this, I always recommend asking a few identifying questions at the beginning of each session so you can analyse results appropriately. You don’t always need to screen people out, but you can choose how to prioritise their feedback in the analysis stage.

Context also matters. If you usability test a rideshare app on a laptop in a coffee shop, but most people will use the app on their phones on a crowded street, you may get misleading feedback.

Watch for bias when user-testing in a cafe. Photo via Unsplash

You should also be aware that you may run into bias by intercepting all your participants from one location. Think about it: the people that are visiting an upscale coffee shop in a business centre on a weekday are likely to be pretty different than the people who are stopping at a gas station for coffee in the middle of the night. Again, try to choose your intercept location based on your target audience and consider going to a few locations to get variety.

Keep in mind that only a certain type of person is going to respond positively and take the time to give you feedback. Most people will be caught off guard, and may be suspicious or unsure what to expect. You won’t have much time to give participants context or build rapport, so be especially conscious of making them feel comfortable.

Some final tips:

  • Set expectations clearly. Tell participants right away how long you’ll talk to them and how you’ll compensate them for their time. Be clear about what questions you’ll ask or tasks you’ll present and what they need to do.
  • Pay extra attention to participant comfort. Give them the option to leave at any time and put extra emphasis on the fact that you're there to gather feedback, not judge them or their abilities. Try to record the sessions rather than taking notes the whole time, so you can make eye contact and read body language.
  • Remember standard rules of research: don’t lead participants, get comfortable with silence, and ask questions that participants can easily answer. Be extra careful asking about sensitive topics such as health or money. In fact, I don’t recommend intercepting people if you need to talk about very sensitive topics.

Remote and unmoderated studies

Taking the researcher out of the session is another proven way to reduce the time and cost of research. This is achieved through running remote and unmoderated research sessions.

Getting started

Traditional research assumes that a researcher is directly conducting sessions with participants, or moderating the sessions. Unmoderated research just means that the participants respond without the researcher present. Common methods include diary studies, surveys or trying out predetermined tasks in a prototype.

The core benefit is that people can participate simultaneously so you can collect many responses in a short amount of time. It’s often easier to recruit too, because there are no geographic limitations and participants don’t have to be available at a specific time.

You plan unmoderated research much like you do moderated research: set your research goal, select an appropriate method to answer your open questions, determine participants, and craft your research plan. The difference in unmoderated sessions is that you need to be especially careful about setting expectations and providing clear directions, because you won’t be there during the session. Trial runs are especially important in unmoderated sessions to catch unclear wording and confusing tasks.

You can also conduct remote research, which means that you’re not physically in the same place as your participant. You can use video conferencing tools to see each other’s faces and share screens. Remote sessions are planned in a similar vein to in-person sessions, but you can often reach a broader set of people when there are no geographic limits.

A few limitations

Any time you conduct sessions remotely or choose unmoderated methods, you run the risk of missing out on observing context or reading body language. With unmoderated sessions, you can't dig deeper when someone has an interesting piece of feedback. That's still better than not collecting data at all, but you should take it into consideration when you're analysing your data and drawing conclusions.

Low fidelity prototypes

If you want to invest less effort upfront, and iterate quickly, low fidelity prototypes are a good option.

In this scenario, you forego fully functional prototypes or live sites/applications and instead use digitally linked wireframes or static images.

You can even use paper prototypes, where you sketch a screen on paper and simulate the interaction by switching out which piece of paper is shown.

Getting started

Low fidelity prototypes, especially paper, are less time consuming to make than digital prototypes, which makes them inexpensive to produce and easy to iterate. This sort of rapid cycling is especially useful when you’re in the very early conceptual stages and trying to sort out gut reactions.

You run a usability test with a low fidelity prototype just like you would run any other usability test. You come up with tasks and scenarios that cover your key questions, recruit participants, and observe as people perform those tasks.

A few limitations

For this guerrilla technique, you have to be especially careful to ask participants to think aloud and to avoid leading or biasing them, because there can be a huge gap between their expectations and yours. For paper prototypes in particular, a moderator must be present to simulate the interactions, so I recommend in-person sessions for any test involving low fidelity prototypes.

Keep in mind that you can get false feedback from low-fidelity wireframe testing. It can be difficult for participants to imagine what would really happen, and they may get stuck on particular elements or give falsely positive feedback based on what they imagine. Take this into consideration when analysing the results, and be sure that you conduct multiple rounds of iterative research and include high-fidelity prototypes or full beta tests in your long-term research plan.

Wrapping up

When in doubt about the results of any guerrilla research test, I recommend running another study to see if you get the same results.

You can execute the exact same test plan, or even try to answer the same question with a complementary method. If you arrive at similar conclusions, you can feel more confident, and if not, you'll know that you need to keep digging. When you're researching guerrilla style, you can always find more time to head back to the jungle for more sessions.

Take a look at my first article for tips on reducing scope, and the best times to use guerrilla methods. Happy researching!

Going Guerrilla: How to Fit UX Research into Any Timeframe https://uxmastery.com/guerrilla-ux-research/ https://uxmastery.com/guerrilla-ux-research/#respond Thu, 19 Oct 2017 04:51:29 +0000 http://uxmastery.com/?p=61304 As more and more companies realise the value of UX research, “guerrilla” methods have become a popular way to squeeze research into limited budgets and short timelines. This often means reducing scope and/or rigour. The key to successful guerrilla research is to strike the right balance to hit time and budget goals, but still be rigorous enough to gather valuable feedback.

So when is the best time to tackle your research guerilla style?

As more and more companies realise the value of UX research, “guerrilla” methods have become a popular way to squeeze research into limited budgets and short timelines. Those of us working in agile sprints often have even less dedicated time for research.

When I say guerrilla research, I don’t mean go bananas or conduct jungle warfare research. Guerrilla research is really just a way to say that you’ve taken a regular UX research method and altered it to reduce time and cost.

To do so, you often end up reducing scope and/or rigour. The key to successful guerrilla research is to strike the right balance to hit time and budget goals, but still be rigorous enough to gather valuable feedback.

Read on for a framework for reducing any research method and an overview of the best time to use guerrilla tactics.

If you're looking for practical advice on using guerrilla research methods, take a look at my second article: Getting Started with Popular Guerrilla UX Research Methods

Crafting your guerrilla plan

You can “guerrilla-ise” any UX research method, and there’s almost never one single correct way to do so. That said, qualitative techniques like usability tests and interviews lend themselves especially well to guerrilla-isation.

The easiest way I've found to plan guerrilla research is to start by determining how you'd do the research if you had your desired time and budget. Then work backwards to find the elements you can adjust to make it work for your situation. The first place I look to cut is the scope of the research question.

Let’s say your team is working on a new healthcare application and wants to assess the usability of the entire onboarding process. That’s an excellent goal, but pretty broad. Perhaps you could focus your study just on the first few steps of the signup process, but not the follow-up tutorial, or vice versa.

Once you’ve narrowed down your key research goals, you can start looking at what sorts of methods will answer your questions. The process for choosing a research method is the same, regardless of whether you’re trying to go guerrilla or not. For a great summary of choosing a method, take a look at Christian Rohrer’s excellent summary on NNG’s blog or this UX planet article.

Besides narrowing the scope of your research goal, think about the details that make up a study. This includes questions such as:

  • What do you need to build or demonstrate?
  • How many sessions or participants do you need?
  • How will you recruit them?
  • What’s the context of the studies?

Then you can take a look at all those elements, identify where your biggest time and money costs are, and prioritise elements to shift.

Reducing scope

Let’s say, for example, that you determine the ideal way to test the onboarding flow of your new app is to conduct 10 one-hour usability sessions of the fully functional prototype. The tests will take place in a lab and you’ll have a participant-recruitment firm find participants that represent your main persona.

There are many ways you could shift to reduce time and costs in this example.

You could:

  • Run test sessions remotely instead of in a lab
  • Reduce the number of sessions overall
  • Run unmoderated studies
  • Build a simpler wireframe or paper prototype
  • Recruit participants on social media
  • Intercept people in a public location
  • Or a combination of these methods

To decide what to alter, look at what will have the biggest impact on time, budget, and validity of your results.

For example, if working with a recruiting firm will be time consuming and expensive, you’ll want to look for alternative ways to recruit. Intercepting people in public is what many of us envision when we think of guerrilla research. You could do that, or you could also find participants on social media or live-intercept them from a site or web app.

You may even decide to combine multiple guerrilla-ising techniques, such as conducting fewer sessions and doing so remotely, or showing a simple prototype to people you intercept.

Just remember, you don’t want to reduce time and effort so much that you bias your results. For instance, if you’re doing shorter sessions or recruiting informally, you may want to keep the same overall number of sessions so you have a reasonable sample size.

Best uses for guerrilla research

So, what are the best scenarios for using guerrilla tactics in your research?

  • You have a general consumer-facing product which requires no previous experience or specialty knowledge OR you can easily recruit your target participants
  • You want to gather general first-impressions and see if people understand your product’s value
  • You want to see if people can perform very specific tasks without prior knowledge
  • You can get some value out of the sessions and the alternative is no research at all

And when should you avoid guerrilla methods?

  • When you’ll be researching sensitive topics such as health, money, sex, or relationships
  • When you need participants to have very specific domain knowledge
  • When the context in which someone will use your product will greatly impact their usage and you can’t talk to people in context
  • When you have the time or budget to do more rigorous research!

Guerrilla research is a great way to fit investigation into any timeframe or budget. One of its real beauties is that you can conduct multiple, iterative rounds of research to ensure you’re building the right things and doing so well.

If you have the luxury of conducting more rigorous research, take advantage, but know that guerrilla research is always a better option than no research at all.

Read the next article on getting started with common guerrilla techniques.

The post Going Guerrilla: How to Fit UX Research into Any Timeframe appeared first on UX Mastery.

]]>
The Space Between Iterations https://uxmastery.com/the-space-between-iterations/ https://uxmastery.com/the-space-between-iterations/#respond Tue, 06 Jun 2017 07:20:19 +0000 http://uxmastery.com/?p=54438 The most important decisions made about any product often take place between iterations. You could argue that the timeframe between identifying key research findings and understanding what the next iteration will be is the most crucial to the future success of the product. Andy Vitale, UX Design Principal at 3M, talks us through his iterative approach to research.


The most important decisions made about any product often take place between iterations. You could argue that the timeframe between identifying key research findings and understanding what the next iteration will be is the most crucial to the future success of the product.

There are many activities that take place during this phase – and even before it begins – not just by research and design teams, but by stakeholders, developers and customers as well. Clear communication and collaboration are the primary drivers for gaining overall alignment among decision makers as quickly as possible.

At 3M, we’re fortunate to have access to many customers, allowing us to take the iterative approach to research outlined in this article. Depending on the project, timeline and business realities, coordinating customer visits and travel usually takes place over the course of several weeks.

How researchers and designers collaborate

While initial research sessions are observational and focused on contextual inquiry, that doesn't mean design is on hold. It's beneficial to have at least one designer and one stakeholder from the business attend the research sessions, preferably on-site, so there's shared learning. Team members debrief after each visit and input findings into a shared document so that it's accessible to everyone on the team. This prevents key points from being forgotten, and avoids confusion between what users actually said and what team members thought they heard.

Start sorting and analysing research as soon as you can.

Given the need to identify trends, and the limited time between research visits, researchers should start to analyse and organise findings while designers explore potential solutions via sketches, moodboards and other design activities. Throughout this quick sketching process, the team should involve stakeholders and subject matter experts to ensure the accuracy of what is presented.

For us, this sketching process can sometimes happen in a hotel lobby, producing a sketch that we’ll share with customers. Customers appreciate the low fidelity of the sketches because it allows them to be involved in early validation and provides them with the opportunity to offer feedback. The accuracy of the content is important so that users aren’t distracted by missing data, and can focus on the intended functionality of the concept sketches.

This iteration cycle typically continues throughout the research phase, with the fidelity of designs increasing as research is analysed, revealing further insights on the behavioural trends of users.

Keeping teams aligned

Communication between designers and the cross-functional teams of stakeholders and developers is essential throughout the process to ensure the decisions made align with business goals, technical capabilities, and customer needs. Once there is alignment, it’s time to conduct more formal user testing (which we often do remotely), with the customers we visited. This user testing should also be iterative, with the prototype increasing in robustness each week.

This should all be part of an agile or design sprint process, but sometimes – depending on the complexity of the problems to be solved and the bandwidth of the team – there may not be a designer embedded within individual scrum teams. If this is the case, and the team is focused on validating larger solutions as opposed to smaller features, it's best to facilitate a workshop with the development teams and product owners to plan the agile implementation of the new design. As specific pieces of functionality are validated throughout the process, the design team works with developers to prioritise and support their efforts.

Since software is iterative, the cycle continues. Once the features are launched and the results are measured, it’s time to assess the business and user needs and begin the process of working towards the next release iteration.

The post The Space Between Iterations appeared first on UX Mastery.

]]>
How to Turn UX Research Into Results https://uxmastery.com/how-to-turn-ux-research-into-results/ https://uxmastery.com/how-to-turn-ux-research-into-results/#comments Wed, 31 May 2017 00:00:37 +0000 http://uxmastery.com/?p=54493 We’ve all known researchers who “throw their results over the fence” and hope their recommendations will get implemented, with little result. Talk about futility! Luckily, with a little preparation, it’s a straightforward process to turn your research insights into real results.

We’ve all known researchers who “throw their results over the fence” and hope their recommendations will get implemented, with little result. Talk about futility! Luckily, with a little preparation, it’s a straightforward process to turn your research insights into real results.

To move from your research findings to product changes, you should set yourself two main goals.

First, to effectively communicate your findings to help your audience process them and focus on next steps.

Secondly, to follow through by proactively working with stakeholders to decide which issues will be addressed and by whom, injecting yourself into the design process whenever possible. This follow-through is critical to your success.

Let’s look at an end-to-end process for embracing these two main goals.

Effectively communicating your findings

Finding focus

When you have important study results, it’s exciting to share the results with your team and stakeholders. Most likely, you’ll be presenting a lot of information, which means it could take them a while to process it and figure out how to proceed. If your audience gets lost in details, there’s a high risk they’ll tune out.

The more you can help them focus and stay engaged, the more likely you are to get results. You might even consider having a designer or product owner work with you on the presentation to help ensure your results are presented effectively – especially if your associates were involved in the research process.

Engaging with your colleagues and stakeholders

You should plan to present your results in person – whether it’s a casual or formal setting – rather than simply writing up a report and sending it around. This way, your co-workers are more likely to absorb and address your findings.

You could present formally to your company’s leadership team if the research will inform a key business decision. Or gather around a computer with your agile teammates to share results that inform specific design iterations. Either way, if you’re presenting – especially if you allow for questions and discussion – you’re engaging with your audience. Your points are getting across and design decisions will be informed.

Why presentations matter

Here are a few ways your presentation can help your team focus on what to do with the findings:

  • Prioritise your findings (Critical, High, Medium, Low). This helps your audience focus on what's most important and see what should be done first, second and so on. An issue that causes someone to fail at an important task, for example, would be rated as critical. On the other hand, a cosmetic or spelling issue would be rated low. Take both the severity and frequency of the issue into consideration when rating them. Remember to define your rating scale – Usability.gov has a good example. Other options are to use a three-question process diagram, a UX integration matrix (great for agile), or the simple but effective MoSCoW method.
  • Develop empathy by sharing stories. We love to hear stories, and admire those among us who can tell the best ones. In the sterile, fact-filled workplace, stories can inspire, illuminate and help us empathise with those we’re designing for. Share the journeys your participants experienced, the challenges they need to overcome. Use a sprinkling of drama to illustrate the stakes involved; understanding the implications will help moderate the conversations and support UX decisions moving forward.
  • Illustrate consequences and benefits. Your leadership team will be interested if they know they will lose money, customers, or both if they don’t address certain design issues. Be as concrete as you can, using numbers from analytics and online studies when possible to make points. For example, you might be able to use analytics to show users getting to a key page, and then dropping off. This is even more effective if you can show via an online study that one version of a button, for example, is effective all the time, whereas the other one is not understood.
  • Provide design recommendations. Try to strike a balance between too vague and too prescriptive. You want your recommendations to be specific and offer guidance about how an interaction should be designed, without actually designing it. For example, you could say “consider changing the link label to match users’ expectations” or “consider making the next step in the process more obvious from this screen.” These are specific enough to give direction and serve as a jumping off point for designers.
  • Suggest next steps. It can help stakeholders to see this in writing, especially if they’re not used to working with a UX team. For example:
    • Meet to review and prioritise the findings.
    • Schedule the work to be done.
    • Assign the work to designers.

Presentations are an important first step, but your job as a researcher doesn’t end there. Consider your presentation an introduction to the issues that were found, and a jumping-off point for informing design plans.

The proactive follow through

You’ve communicated the issues. Now it’s time to dig in and get results.

Getting your priorities straight

Start by scheduling a discussion with your product manager – and possibly a representative each from the development and design teams – to prioritise the results, and put them on the product roadmap. It can be useful to take your user research findings – especially from a larger study – and group them together into themes, or projects.

Next, rate the projects on a grid with two axes. For example:

  • how much of a problem it is for customers could display vertically; and
  • how much effort it would be to design or redesign it (small, medium and large) could display horizontally.

Placing cards or sticky notes that represent the projects along these axes helps you see which work would yield the most value.

Then compare this mapping to what’s currently on the product roadmap and determine where your latest projects fit into the overall plans. Consider that it often makes more sense to fix what’s broken in the existing product – especially if there are big problems – than to work on building new features. Conducting this and additional planning efforts together will ensure everyone is on the same page.

Working with your design team

Once it’s time for design work, participate in workshops and other design activities to represent the product’s users and ensure their needs are understood. In addition to contributing to the activities at hand, your role is to keep users’ goals and design issues top of mind.

Since the focus of the workshop – or any design activity – early on is solving design problems, it could be useful to post the design problems and/or goals around the room, along with user quotes and stories. A few copies of complete study findings in the room, plus any persona descriptions, are useful references. The workshop to address design problems could be handled several ways – storyboarding solutions, drawing and discussing mockups, brainstorming. But the goal is to agree on problems you’re trying to solve, and come up with possible solutions to solve them.

As the design team comes up with solutions, remember to iteratively test them with users. It’s useful for designers to get regular feedback to determine whether they’re improving their designs, and to get answers to new design questions that arise throughout the process. All of this helps designers understand users and their issues and concerns.

Achieving your end game

One key to getting your results implemented is simply remembering to consider stakeholders’ goals and big picture success throughout the research and design process. The best way to do this is to include them in the research planning – and in the research observations – to make sure you’re addressing their concerns all along. When presenting, explain how the results you are suggesting will help them meet their design and business goals.

Always remember that as the researcher you hold knowledge about your users that others don’t. Representing them from the presentation through the next design iteration is one key to your product’s success.

How do you make sure your hard-won research insights make it through to design? Leave a comment or share in our forums.

Making an Impact with UX Research Insights https://uxmastery.com/making-an-impact-with-ux-research-insights/ https://uxmastery.com/making-an-impact-with-ux-research-insights/#comments Tue, 16 May 2017 07:01:47 +0000 http://uxmastery.com/?p=54091 You’ve completed your in-depth interviews, your contextual inquiry or your usability testing. What comes next? As UX practitioners know, when it comes to research, field work is only a fraction of the story.

How do you learn from mountains of data, and then ensure your insights create a tangible impact in shaping your product’s design? We couldn’t think of anyone more qualified to ask than the prolific Steve Portigal, user researcher extraordinaire.

You’ve completed your in-depth interviews, your contextual inquiry or your usability testing. What comes next? As UX practitioners know, when it comes to research, field work is only a fraction of the story.

How do you learn from mountains of data, and then make sure your insights create a tangible impact in shaping your product’s design?

We couldn’t think of anyone more qualified to ask than the prolific Steve Portigal, user researcher extraordinaire. From analysis and synthesis through to framing your findings, Steve walks us through a few post-research considerations to keep top of mind for your next research project.

What tips do you have for converting insights from research into action?

It's a lot of work. According to Cooper's Jenea Hayes, it's roughly two hours of analysis and synthesis for every hour of research. I get grumpy when people talk about coming back from a research setting with insights. Insights are the product of analysis and synthesis across multiple sessions. It may just be me being semantic-pedantic, but there's something off-putting about the perfunctory way people describe it: “Oh, I come back from the session and I write up my insights and there you go.”

I see two different stages in making sense of research. Step one is to collate all the debrief notes, the hallway conversations, the shower thoughts you’ve had following the experience of doing the research. It’s a necessary first step and it’s heavily skewed by what sticks in your mind. It produces some initial thoughts that you can share to take the temperature of the group.

The next step is to go back to the data (videos, transcripts, artefacts, whatever you have) and look at it fresh. You'll always find that something different happened from what you thought, and that's where the deeper learning comes from. It's a big investment of time, and maybe not every research question merits it. But if you don't go back to the data (and a lot of teams won't, citing time pressure), you are leaving a lot of good stuff on the cutting room floor.

I’m also a big fan of keeping the activity of sense making (what is going on with people?) separate from the activity of actions (what should we do about it?). You want to avoid jumping to a solution for as long as possible in the process, so that your solutions reflect as deep an understanding of the problem as possible. Set up a “parking lot” where you can dump solutions, as they’ll come up anyway. Depending on your research question, work your way to a key set of conclusions about people’s behaviour. Based on those conclusions, explore a range of possible solutions.

In your analysis, how do you decide what’s important?

Take time at the beginning of the research to frame the problem. Where did this research initiate? What hypotheses – often implicit ones – do stakeholders have? What business decisions will be made as a result of this research?

What research reveals doesn’t always fit into the structure that is handed to you ahead of time, so knowing what those expectations are can help you with both analysis and communication. Some things are important to understand because they’re part of the brief. But other things are going to emerge as important because as you spend time with your conclusions you realise “Oh this is the thing!”

I had a colleague who would ask, as we were getting near to the end of the process but still wallowing in a big mess: “Okay, if we had to present this right now, what would you say?” This is a great technique for helping you stop looking intently at the trees and step back to see the forest.

How do you make sure research data takes priority over stakeholders’ opinions?

So many aspects of the research process are better thought of as, well, a process. Talking to stakeholders about their questions – and their assumptions about the answers – is a great way to start. In that kickoff stage, explain the process. Share stories and anecdotes from the field. Invite them to participate in analysis and synthesis. Their time is limited, but there are many lightweight ways to give them a taste of the research process as it proceeds.

You don’t want the results to be a grand reveal, but rather an evolution, so that they can evolve their thinking along with it. If you’re challenging closely held beliefs (or “opinions”), make a case: “I know we expected to learn X, but in fact, we found something different.” Separate what you learned about people from what should be done about it so that you can respond to pushback appropriately.

What are some common mistakes you see that stop research from staying front and centre during the design process?

To summarise a few of the points I’ve made above, some of the common mistakes I see are:

  • Not including stakeholders in early problem-framing conversations
  • Not including a broader team in fieldwork and analysis
  • Delivering research framed as “how to change the product” rather than “what we learned about people” and “how to act on what we learned to impact the product”
  • Researchers not having visibility into subsequent decisions
  • Failing to deliver a range of types of research conclusions

How do you make sure your recommendations make it through to the next design iteration?

It’s challenging to ensure that research travels through any design or development process intact. Ideally, you’re involved as the work goes forward, sitting in meetings and design reviews to keep connecting decisions back to the output of the research. But also think about the different aspects of the research that might take hold and help inform future decisions.

Is it stories about real people and their wants and needs? Is it a model or framework that helps structure a number of different types of users or behaviours? Is it a set of design principles? Or is it the specific recommendations? Often it’s a combination of several of these.  

About Steve Portigal

Steve Portigal photo

Steve is the Principal at Portigal Consulting LLC – a consultancy that helps companies discover and act on new insights about their customers and themselves. He is the author of Interviewing Users: How to Uncover Compelling Insights and recently Doorbells, Danger, and Dead Batteries: User Research War Stories. In addition to being an in-demand presenter and workshop leader, he writes on the topics of culture, design, innovation and interviewing users, and hosts the Dollars to Donuts podcast. He’s an enthusiastic traveller and an avid photographer with a Museum of Foreign Groceries in his home.

The post Making an Impact with UX Research Insights appeared first on UX Mastery.

]]>
https://uxmastery.com/making-an-impact-with-ux-research-insights/feed/ 2 54091
Using A/B Testing to Drive Constructive Conflict with Stakeholders https://uxmastery.com/using-ab-testing-to-drive-constructive-conflict-with-stakeholders/ https://uxmastery.com/using-ab-testing-to-drive-constructive-conflict-with-stakeholders/#comments Thu, 30 Mar 2017 09:24:06 +0000 http://uxmastery.com/?p=52878 Creating a culture of user experience involves asking uncomfortable questions; the key is to navigate that friction so that people feel encouraged not just to contribute but also to question ideas.

A/B testing can help teams separate concerns and learn to disagree constructively. Minutia gets sorted out quickly, the work moves forward, and most importantly you help create a framework for challenging ideas, not people. Here's how.

The post Using A/B Testing to Drive Constructive Conflict with Stakeholders appeared first on UX Mastery.

]]>
“Tell me about a time when you disagreed with a coworker…” Hiring managers use questions like this to get a sense of how a job candidate will handle disagreements and work with others under difficult circumstances.

It’s a complicated topic for user experience, where ideas are assumptions to be validated and opinions are all on equal footing. In the business of measuring “better,” we’re expected to think critically and argue from reason.

If you’re lucky enough to work with an experienced and diverse group of people, opinions will vary and disagreements will be the norm. The problem is that fear of controversy can make people less likely to engage in the kinds of useful debates that lead to good designs.

Creating a culture of user experience involves asking uncomfortable questions; the key is to navigate that friction so that people feel encouraged not just to contribute but also to question ideas.

A/B testing is a good way to help teams separate concerns and learn to disagree constructively. Minutia gets sorted out quickly, the work moves forward, and most importantly you help create a framework for challenging ideas, not people.

A/B Testing is fast, good and cheap

Decisions aren’t always obvious, and you may not have the luxury of an executive decision-maker to break an impasse. More often than not, you have to work with people to rationalise a direction and find your way out of the weeds.

The exercise of running an A/B test can help disentangle design intent from mode of execution. When stakeholders see how easy it is to separate fact from fiction, there’s less fear of being wrong and ideas can flow and be rejected, developed or improved upon more freely.

Another perk of A/B testing is that platforms like Optimizely or VWO let you run experiments with live users on your website. “In the wild” testing gives stakeholders a chance to see for themselves how their ideas stand to impact customer reality.

It’s now easier than ever to design and deploy low-risk A/B experiments, and there’s no excuse not to do it. But like any tool, A/B testing has its limitations – and no product has a warranty that protects against misuse.

Draws are boring, fans want KOs

What happens when an A/B test fails to deliver clear results? 

A/B testing software is often marketed around dramatic examples that show impactful decisions made easy through testing. Stakeholders may be conditioned to think of testing in terms of winners and losers, and experiments that don’t produce clear outcomes can create more questions than answers:

“We A/B tested it, but there was no difference, so it was a waste…”

A lack of familiarity with the domain can lead to criticism of the method itself, rather than its use. This “carpenter blaming the tools” mentality can disrupt stakeholders’ ability to work harmoniously – and that is not the kind of conflict that is constructive.

The reality is that not every A/B test will yield an obvious winner, and this has partly to do with how experiments are designed. For better or worse, tools like VerifyApp now make it easy to design and deploy tests. Like anything else, it’s garbage in, garbage out – and there’s no sample size large enough to turn a noisy experiment into actionable insights.

Such heartburn is only made worse when teams undertake A/B testing without a clear sense of purpose. A/B tests that aren’t designed to answer questions defy consistent interpretation, and only add to the gridlock of subjective analysis.

As an example, I’ll use a topic which I think is well suited for A/B testing: call-to-action buttons.

Consider the following experiment between 2 versions of the same call-to-action. At a glance, the outcome may seem clear:

What makes this test result problematic is that there are multiple design-related differences (font weight, content, copy length, button colour) between A and B. So tactically, we know one approach may convert better, but we don’t really know why. Ultimately, this experiment asks the question:

“Which converts better?”

…and only narrows it down to at least 4 variables.
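One lightweight guard against this kind of confound is to diff the variant definitions before launching the test. The sketch below is hypothetical (the variable names are illustrative, not from any A/B platform’s API), but it shows the idea: if more than one design variable differs between A and B, the experiment can’t attribute a winner to any single change.

```python
def differing_variables(variant_a: dict, variant_b: dict) -> list:
    """Return the design variables whose values differ between two variants."""
    return sorted(k for k in variant_a if variant_a[k] != variant_b[k])

# A confounded experiment: four things change at once
a = {"font_weight": "bold", "copy_length": "long", "label": "Buy now", "colour": "green"}
b = {"font_weight": "normal", "copy_length": "short", "label": "Add to cart", "colour": "orange"}

confounds = differing_variables(a, b)
if len(confounds) > 1:
    print("Confounded test - can't isolate cause:", confounds)
```

Running this check as part of experiment setup makes the “one question at a time” discipline mechanical rather than a matter of reviewer vigilance.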

When you’re new to A/B testing, a few noisy experiments are a forgivable offence and you may find you get more traction by focusing on what stakeholders are doing right. Any testing is better than none, but habits form quickly, so it’s important to use early opportunities like this to coach people on how experimental design affects our ability to get answers.

Another reason pairwise experiments don’t always have clear “winners” is because sometimes, there’s just no difference. A statistical tie is not a sexy way to market A/B testing tools. Consider another hypothetical example:

Notice that there’s only 1 difference between A and B – the button text label. Is one version better than the other? Not really. It’s probably safe for us to conclude that, for the same experimental conditions, the choice between these 2 text labels doesn’t really impact conversion rate.

Does a stalemate make for a compelling narrative? Maybe not – but now we know something we didn’t before we conducted this anticlimactic experiment.

So while a tie can be valid experimentally, it may not help defuse some of the emotionally charged debates that get waged over design details. That is why it’s so critical to approach A/B testing as a way to answer stakeholder questions.
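If it helps to put numbers on “no difference”, a two-proportion z-test is one standard way to check whether an observed gap in conversion rates is distinguishable from noise. This is a minimal sketch using only the standard library; the visitor and conversion counts are made up for illustration.

```python
from math import erfc, sqrt

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / std_err
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value, normal approximation
    return z, p_value

# Hypothetical button labels: 120/2400 vs 118/2400 conversions
z, p = two_proportion_z_test(120, 2400, 118, 2400)
verdict = "statistical tie" if p > 0.05 else "significant difference"
print(f"z = {z:.2f}, p = {p:.2f} -> {verdict}")
```

A p-value far above 0.05 at these sample sizes says the two variants are indistinguishable: the anticlimactic-but-useful result described above.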

Want good answers? Ask good questions

When the work is being pulled in different directions, A/B testing can deliver quick relief. The challenge is that experiments are only as good as the questions they are designed to answer.

With a little coaching, it’s not difficult to help teams rely less on subjective interpretation and wage more intelligent arguments that pit idea vs. idea. It falls on UX teams to champion critical thinking, and coach others on how to consider design ideas as cause-and-effect hypotheses:

Will the choice of call to action button colour impact conversion rates?

Does photograph-based vs. illustrated hero artwork impact ad engagement?

Is Museo Sans an easier font to read than Helvetica Neue?

The act of formulating experimental questions helps to reveal the design intent behind specific ideas, and cultivates a sense of service to project vs. individual goals. When stakeholders share an understanding of the intent, it’s easier to see they’re attacking the same problem different ways.

It’s also imperative to keep experiments simple, and the best way to do that is to focus on one question at a time.

Consider the button example from earlier, where we concluded that the choice of 2 text labels had little impact on conversion rates. To see what else might move this needle, we can try to manipulate another variable, such as button colour:

This experiment asks the question:

Will the choice of call to action button colour impact conversion rates?

There’s only 1 difference between A and B – button colour. Now, we not only have an answer we can use tactically, but strategically we have a good idea why one converts better than the other.

Summary

Stakeholders won’t always see eye to eye with each other, and that’s no reason to shy away from or stifle conflict. Good ideas benefit from scrutiny, and quick A/B experiments help people get into the habit of asking tough questions. The answers lead to better tactical decisions, and help drive a culture of healthy debate.

A/B testing is just one tactic you can use within a strategy to win over stakeholders. If you want to help your team keep each other honest, try A/B testing as a first step towards creating a culture where co-workers feel more comfortable disagreeing as a means to a constructive end. 

Do you have experience using A/B testing to drive conversations with stakeholders? Share your experience in the comments or the forums.

This month, we’ve been looking at stakeholder management. Catch up on our latest posts:

The post Using A/B Testing to Drive Constructive Conflict with Stakeholders appeared first on UX Mastery.

]]>
https://uxmastery.com/using-ab-testing-to-drive-constructive-conflict-with-stakeholders/feed/ 2 52878
Transcript: Ask the UXperts: Learning from the comic, tragic & astonishing moments in user research — with Steve Portigal https://uxmastery.com/transcript-user-research-steve-portigal/ https://uxmastery.com/transcript-user-research-steve-portigal/#respond Thu, 15 Dec 2016 21:49:00 +0000 http://uxmastery.com/?p=49737 Yesterday we hosted Steve Portigal in our Slack channel for the last Ask the UXperts session for 2016.

It was an informative and entertaining session. Here is the full transcript.

The post Transcript: Ask the UXperts: <em>Learning from the comic, tragic & astonishing moments in user research</em> — with Steve Portigal appeared first on UX Mastery.

]]>
In a cracking finale to an amazing year of Ask the UXperts sessions, yesterday we hosted the amazing Steve Portigal in our Slack channel.

Steve has recently written a new book titled Doorbells, Danger, and Dead Batteries. 

It’s an amazing collection of ‘war stories’ – things that have gone wrong in the field while carrying out user research – along with valuable lessons learned.

If you’d like to grab a copy you can do that here. Use the code UXMASTERY20 to claim a 20% discount.

Steve chatted about the book, shared some amazing stories and lessons, and made us laugh.

The session was particularly memorable for me, because I moderated it live from the highways of New Zealand’s North Island.

We’re going to take a break from these sessions for the holidays, but we’ll be back in full force with a pretty exciting line up of characters in the new year, so keep your eyes out for the announcements (and make sure you join our mailing list).

If you didn’t make the session because you didn’t know about it, make sure you join our community to get updates of upcoming sessions.

If you’re interested in seeing what we discussed, or you want to revisit your own questions, here is a full transcript of the chat.

Transcript

hawk
2016-12-14 21:01
So welcome everyone! For those of you that haven’t been to one of these sessions before, this is how we roll:

ceara
2016-12-14 21:01
@sbarnat thanks!

hawk
2016-12-14 21:01
I’ll introduce Steve, Steve will give an introduction to the topic, and then we’ll throw it open to your questions.

hawk
2016-12-14 21:01
If things get busy, I’ll queue questions in a back channel so that we don’t miss anyone.

hawk
2016-12-14 21:01
And I’ll post a full transcript of the session up on http://uxmastery.com tomorrow.

hawk
2016-12-14 21:01
So on that note, a huge thanks to @steveportigal for your time today – it’s much appreciated. We’re honoured to have you here.

jelto
2016-12-14 21:01
sounds great !

hawk
2016-12-14 21:01
Steve has just written a brand new book titled Doorbells, Danger and Dead Batteries.

hawk
2016-12-14 21:02
You can find out more about it and get your own copy here. Get 20% off with the code UXMASTERY20.

hawk
2016-12-14 21:02
And a bit more about Steve:

Steve Portigal helps companies to think and act strategically when innovating with user insights.
He is principal of Portigal Consulting, and the author of two books: The classic Interviewing Users: How To Uncover Compelling Insights and new, Doorbells, Danger, and Dead Batteries: User Research War Stories.

He’s also the host of the Dollars to Donuts podcast, where he interviews people who lead user research in their organizations (including Citrix, Airbnb, eBay and Pinterest).

hawk
2016-12-14 21:03
So @steveportigal – I’ll hand over to you for some more info on today’s topic

steveportigal
2016-12-14 21:04
Actually, I’m not at Cape Flanderval today; I’m outside of San Francisco.

kimbieler
2016-12-14 21:04
has joined #ask-steve-portigal

steveportigal
2016-12-14 21:04
Thanks for having me.

danny.w
2016-12-14 21:04
Hi Steve!

mabes
2016-12-14 21:04
has joined #ask-steve-portigal

dcollins5280
2016-12-14 21:04
Great to have you Steve! Thanks for joining us :slightly_smiling_face:

steveportigal
2016-12-14 21:04
I’m really excited that the book is out – JUST out a few days ago – and this is the first time I’ve had a chance to really talk with people about this topic – stoked to do so in an interactive setting. At least to talk about it with people since the book came out I mean to say.

steveportigal
2016-12-14 21:05
I’ll talk about the book and I guess what I see as a bit of a mission behind the book for a bit, if I may.

steveportigal
2016-12-14 21:05
(YOU HAVE NO CHOICE!!!)

hawk
2016-12-14 21:05
Sounds good to me!

steveportigal
2016-12-14 21:06
I started collecting war stories formally about 4 years ago, I started gathering them informally before that. War Stories, just to define some terms are stories about the things that happen that we don’t discuss unless we’re at a bar or someplace “safe”

halvam
2016-12-14 21:06
has joined #ask-steve-portigal

steveportigal
2016-12-14 21:06
User research has plenty of war stories. The first time I started sharing these stories I was struck – oh wow, this is a thing we all have in common and why don’t we share it?

steveportigal
2016-12-14 21:07
I just had the instinct that this was worth doing and with Interviewing Users my last book I kind of left a hook in the book – I made the point that this is a way we can start learning the reality of this work, and I put a URL in the book that linked to a bunch of stories.

steveportigal
2016-12-14 21:07
And I started populating this part of the blog with stories. And as people wrote more stuff for me, my jaw just kept dropping further and further.

steveportigal
2016-12-14 21:07
Okay that’s an unrealistic visual image.

steveportigal
2016-12-14 21:08
I felt like this archive was growing and growing not only in size but also in meaning, ya know, when you feel like oh wow there’s something here.

chrisavore
2016-12-14 21:08
has joined #ask-steve-portigal

steveportigal
2016-12-14 21:08
So writing this book was a chance to look at all these stories – stories of things going wrong, of sad things, of scary things, of funny things, of loss of control, just stories about so much – a chance to look at them fresh and say, well WHAT does it reveal?

steveportigal
2016-12-14 21:08
It’s good to work with an editor and a publisher who will push you to do more than say just compile them.

steveportigal
2016-12-14 21:09
Anyway the process of examining the stories more deeply and considering what the “truths” (not to be too pretentious about it) are that are revealed, what does it mean when we have stories that are about people doing the “right” thing and it STILL goes wrong, how to learn from that, how to articulate that – that’s been my exploration for the past year in preparing this book.

steveportigal
2016-12-14 21:10
So the book is divided up into chapters OH WOW STEVE HOW INNOVATIVE thank you thank you

mallorychacon
2016-12-14 21:10
@steveportigal Very exciting to have you! I’m working just outside of San Francisco today too. Looking forward to hear about your new book!

steveportigal
2016-12-14 21:10
the chapters describe a different area of challenge we have in fieldwork, from emotions, to seeing “dirty stuff”, to participants, to judgement, to danger,

steveportigal
2016-12-14 21:10
and each chapter has an essay from me about the topic, a handful of stories that relate to the topic, and then a set of takeaways about how to improve our own practice.

jemrosario
2016-12-14 21:11
has joined #ask-steve-portigal

steveportigal
2016-12-14 21:11
Anyway, that’s kind of the elevator pitch – why the book exists, why I’m into the stories, why I think they are worth sharing – I can say more about all of this, but I’d rather slow down my intro here and see what we want to talk about, what you want to ask, what you want to share, etc.

hawk
2016-12-14 21:12
I’m dying to hear what your favourite story is…

markrs
2016-12-14 21:12
has joined #ask-steve-portigal

hawk
2016-12-14 21:12
And if anyone else has questions (either about the book or learning from mishaps) please jump in

alex.lee
2016-12-14 21:12
I’d like to know what you think is unique about user research that maybe different from standard ethnographic research?

hawk
2016-12-14 21:12
i love the idea of learning from other people’s mishaps (rather than making them myself) :wink:

steveportigal
2016-12-14 21:13
@hawk I told the folks at Rosenfeld Media when they asked a similar question that I love all my children equally :slightly_smiling_face:

fernandez_ux
2016-12-14 21:13
has joined #ask-steve-portigal

steveportigal
2016-12-14 21:13
There are some epic stories like Apala’s story about encountering a family fracas – being in the middle of it – in India – http://www.portigal.com/apalas-war-story-whose-side-is-the-researcher-on/

mallorychacon
2016-12-14 21:13
I’m curious if you found many interesting findings and insights in sessions that went drastically wrong by looking at the extremes of why something went wrong? Do you ever just throw sessions away or do you always try to find something in them?

steveportigal
2016-12-14 21:14
but I like looking at the smaller stories – that just take a small moment and consider it.

steveportigal
2016-12-14 21:14
like Susan Simon Daniels http://rosenfeldmedia.com/announcements/user-research-war-story-sigh-just-sigh/ – encountering a small moment of sadness and taking time with it

chrisoliver
2016-12-14 21:15
@steveportigal I’m curious to hear about dangerous situations and what could have been done to prevent them.

hawk
2016-12-14 21:15
Queuing questions now. I’ll mark them as acknowledged as we go (like this)

steveportigal
2016-12-14 21:15
@alex.lee. Oh you aren’t going to ask me to define “ethnography” or define “user research” are you? That seems such a fraught topic. And something that people luuuuuuv to debate. I think I made the point in Interviewing Users that if you call something ethnography, someone else will tell you that you are wrong.

chrisgeison
2016-12-14 21:16
has joined #ask-steve-portigal

steveportigal
2016-12-14 21:16
Do you have a definition or contrast that you use? I promise not to tell you that you’re wrong :slightly_smiling_face: :slightly_smiling_face:

alex.lee
2016-12-14 21:16
I’m a complete novice to user research as I come from academia so I am genuinely interested in hearing your perspective

alex.lee
2016-12-14 21:17
I also interviewed people extensively during my doctorate and I’m wondering if my approach should be different if I’m dealing with interviews with some product in mind

steveportigal
2016-12-14 21:18
@alex.lee I think there are ethnographic practices from academia that are about the context in which it’s done – part of a theoretical graduate education, no methods training, lots of theory reading, a long time in a country far away and no client. User Research uses many methods and takes us into the field to address a specific business concern?

steveportigal
2016-12-14 21:18
@mallorychacon I think an underlying theme in the book is around that exact thing. I found myself offering the same advice (in different words) for various chapters.

steveportigal
2016-12-14 21:18
ONE: know when to give up, at least be prepared to give up

steveportigal
2016-12-14 21:18
TWO: just keep going

steveportigal
2016-12-14 21:18
And I think about it like this

steveportigal
2016-12-14 21:18
for ONE, if things really suck, if it’s unsafe, if you are emotionally or physically at risk, then take care of yourself – self-care being a theme here

steveportigal
2016-12-14 21:19
for TWO: everything else being equal, if you think you can’t get anywhere with this research session, what do you have to gain by giving up?

steveportigal
2016-12-14 21:19
You can keep trying things, and see if you can do it.

steveportigal
2016-12-14 21:19
I have a story in Interviewing Users (recapped in the interview I linked to a moment ago) about a family that did NOT want us there, and we kept going and kept going and had a dramatically effective interview.

steveportigal
2016-12-14 21:20
I have a story in this book – and by story I mean a Steve anecdote, not a full war story from another researcher – about a guy who NEVER had any insight and I just kept trying and kept trying.

steveportigal
2016-12-14 21:20
So, I feel like holding both those positions. KEEP GOING. WALK AWAY. And thinking about how and when and why – there’s no rule but to be prepared for both.

steveportigal
2016-12-14 21:21
@chrisoliver Danger is such a subjective thing and I think how we learn to parse that danger – again not a perfect thing – is something to hone.

steveportigal
2016-12-14 21:22
Some people are more sensitive to certain situations than others. Jon McNeil has a story about being in a sports car at night while a perhaps speed-taking participant drives them aggressively towards a strip club and he describes his poor client in the back seat, looking back and seeing him holding on and yet his client acknowledges “well, this is fieldwork”

steveportigal
2016-12-14 21:23
Jen Van Riet has a story about a session where a guy pulls out a gun, not in a weapony way, but he was carrying it and so it came up

steveportigal
2016-12-14 21:23
So you can READ those stories (or my precis here) and think, well, is that dangerous or not, how did THEY feel, how do I feel afterwards

steveportigal
2016-12-14 21:23
and how would I feel at the time.

steveportigal
2016-12-14 21:23
and I think it gives fuel to examine those issues and I think continuing to examine them is all we can do.

steveportigal
2016-12-14 21:24
She had the best title: Jennie’s Got A Gun

steveportigal
2016-12-14 21:24
Unless that’s too obscure for young’uns

creativelaurels
2016-12-14 21:25
@steveportigal re: sigh. Were you surprised by the interviews “Rick’s” willingness to share his personal story with you in that setting? Does this sort of vulnerability come up often in that setting?

steveportigal
2016-12-14 21:26
@creativelaurels this story is by Susan Simon Daniels.

mallorychacon
2016-12-14 21:26
@steveportigal @alex.lee I found your discussion on ethnographic practices vs user research interesting. I studied Anthropology with a focus on ethnographic research and I draw the following lines 1) Ethnographic practices and research, in their origin are practiced over a long enough period of time to incorporate yourself into the practices and day-to-day lives of the people or species you are studying. This is important to draw those insights related to practices that may only be exposed to you over a period of time and to build honest trust. 2) Ethnographic practices in User Research are often looked at as Field Studies, meaning, you still go into your user’s home and environment, but you establish trust quickly and still gain rich insights about their environment while you may not get them about the day-to-day behaviors that they may perform

aquaruchi
2016-12-14 21:26
has joined #ask-steve-portigal

h_bookforest
2016-12-14 21:26
Having not yet read the book (but purchased), I am unclear on the proportion of – interviews going as expected and not too dangerously vs. war stories episodes – over a period of time or a project?

steveportigal
2016-12-14 21:26
Sorry, probably didn’t hit that point most clearly – The stories in this book come from 60+ user researchers over the world.

creativelaurels
2016-12-14 21:26
My mistake.

steveportigal
2016-12-14 21:26
no @creativelaurels not to worry – natural assumption without the book in your hand.

steveportigal
2016-12-14 21:27
And anyway, I’ll share my PoV – I think people sharing personal stuff is very common. In the chapter about emotion I make the point that I hope to see someone cry every study – not because I want to HARM people but I do feel a little frisson when the topic we are exploring – and it can be just about anything – provokes a strong emotional reaction – when people feel comfortable and close and can talk about the big stuff.

steveportigal
2016-12-14 21:28
Even talking about wine label design caused a woman to cry because it made her think of the baby she was trying to have.

mallorychacon
2016-12-14 21:28
@steveportigal Thank you for sharing about when to abandon – it’s definitely unique session to session.

fernandez_ux
2016-12-14 21:28
Nice to connect outside of the Twitter realm @steveportigal! I’m wondering if you can speak a little about the researchers’ reflection period after a “war story” , and what sort of ideas/learnings/wishes people may have had afterwards.

maadonna
2016-12-14 21:28
Wow. That’s why I don’t like doing research – I never want to be sitting with strangers while they are crying. You are amazing

creativelaurels
2016-12-14 21:28
Thanks for sharing your pov, it’s an aspect of research I’m fascinated by.

steveportigal
2016-12-14 21:29
@h_bookforest for sure, war stories are the exception, but over a career they accrue. If everything is a war story, you’re probably doing something wrong :slightly_smiling_face:

steveportigal
2016-12-14 21:30
but there are elements of war stories all the time. Last week I was working with a team of newbie researchers and when they did their very first day of sessions (we conducted them in a lab), they had rehearsed the hell out of bringing people in and setting them up and greeting them, and set up the room with the right chairs etc. The first person

alex.lee
2016-12-14 21:30
@mallorychacon: your response got me wondering if it’s possible to draw longitudinal data from behavioral user patterns (like GPS tracking and cookies) that may draw equivalent ethnographic results. :thinking_face:

steveportigal
2016-12-14 21:30
they bring her in, and say welcome thanks for coming – you can have a seat

steveportigal
2016-12-14 21:30
she says “I Prefer To Stand.”

steveportigal
2016-12-14 21:30
which isn’t a war story but is definitely a monkey wrench.

steveportigal
2016-12-14 21:30
The third person told them that she had JUST found out she was pregnant. I mean, she got the call on the way into the office building and had not told anyone.

steveportigal
2016-12-14 21:31
So there are these elements of things going “off the rails” that researchers maybe take for granted the more they do and they don’t all produce epic fails or even cause problems but these stories reflect the most extreme

steveportigal
2016-12-14 21:31
and we can see nubbins of that every day.

steveportigal
2016-12-14 21:31
@fernandez_ux hi there! You ask about reflection, which is a great question.

steveportigal
2016-12-14 21:32
I think what’s great about this book – wow that sounds gross when I write that –

steveportigal
2016-12-14 21:32
okay what I’m excited about with gathering war stories – and of course, this isn’t the full set, this is a start in formalizing them but I want to see more people collecting and sharing their own stories

steveportigal
2016-12-14 21:33
anyway, having war stories as a thing – as War Stories – means it invites a chance to reflect. Ethnography is (I’m saying this wrong) the writing of a culture – the traditional work is very much about

steveportigal
2016-12-14 21:33
WRITING

steveportigal
2016-12-14 21:33
So you have the experience, and you step back and you reflect on it. And you write it up.

steveportigal
2016-12-14 21:33
I think some people have stories they have been telling for a long time.

steveportigal
2016-12-14 21:33
But I have people tell me all the time – oh I’m about to go do this project…I’m sure I’ll have some stories – some war stories.

steveportigal
2016-12-14 21:34
So just putting the war stories mindset out there gives people a bit of permission to think of their own experiences as worthy of reflecting on, of taking out and looking at, and maybe doing something with and maybe not.

steveportigal
2016-12-14 21:34
I will also say almost all of those stories come from me soliciting them personally or on social media – who has a story, who has a story about x –

steveportigal
2016-12-14 21:34
as a practice we are conditioned to ya know

steveportigal
2016-12-14 21:34
do the work

steveportigal
2016-12-14 21:34
do the work

steveportigal
2016-12-14 21:34
report the work

steveportigal
2016-12-14 21:35
not go and reflect on the work, and not write it up – not treat those experiences as anything except something that we did wrong.

mallorychacon
2016-12-14 21:35
@alex.lee it’s an intriguing thought. I think it’s crossing too many paths eventually – it would likely get messy. If you’re trying to gather behavioral data, you’re likely not going to pull quality longitudinal data because of the sheer issue of numbers – you can do quant and qual at the same time, but in my experience, it’s going to be a bit exhausting and take a long time. But that’s just in my experience, I’m open to combining methods and trying new things – it’d really depend on the question(s) you want to answer

steveportigal
2016-12-14 21:35
I’m hoping that by examining these stories we can have empathy for the fieldworkers in the stories and create a bit of future empathy for ourselves.

h_bookforest
2016-12-14 21:35
@steveportigal Thanks for the sense of scale/perspective, and clarifying the number of sources. I take that to be a few over decades, rather than many each year. I was a little worried for a moment there. Viewing UX from the outside looking in to learn more, I had not expected there to be lots of “war stories”, hence the surprise/uncertainty and my question.

steveportigal
2016-12-14 21:35
And what makes them war stories is that unlike the usual inspirational stories

steveportigal
2016-12-14 21:35
often the storyteller does NOT triumph.

steveportigal
2016-12-14 21:35
It will happen to us all.

steveportigal
2016-12-14 21:35
User research is hard – it’s impossible to do perfectly.

steveportigal
2016-12-14 21:36
So how will we treat ourselves and our colleagues when things are different than what we assumed they should be?

steveportigal
2016-12-14 21:36
We can learn from these stories.

steveportigal
2016-12-14 21:36
We can also learn from just the fact that we HAVE stories.

steveportigal
2016-12-14 21:36
catharsis, forgiveness, advice.

steveportigal
2016-12-14 21:36
Boy no wonder Buster likes this book so much!

kimbieler
2016-12-14 21:36
The hardest part of user research is doing it at all. I think a lot of us would be glad to have done it enough to have war stories. :wink:

lukcha
2016-12-14 21:37
Yeah. I like that they’re an honest view – that we’re not all perfect researchers who have a perfect answer to every situation. We can share some authentic highs and lows to commiserate, celebrate and learn together.

alex.lee
2016-12-14 21:37
Yes there’s got to be a tension where you are being paid to do research in the interest of business vs when you are interfacing with people to find out their real needs. It sounds like war stories are also WICKED Problems – situations that arise from complexity not foreseen or anticipated by the researcher.

steveportigal
2016-12-14 21:38
@alex.lee nice application of the wicked problem lens – yes, they aren’t solvable but as researchers in business we are expected to have everything buttoned up and perfect

steveportigal
2016-12-14 21:38
we want people to see the world in a new way and the reality of how messy that is versus how clean it’s expected to be is…a challenge

steveportigal
2016-12-14 21:38
I wish everyone could be like the client in the back of the car going “yep this is the real world”

steveportigal
2016-12-14 21:39
but once people get back to the office they start busting out slide decks with the #1 WOW experiences for the ecommerce platform and lose track – so stories stories stories – one way – not the only way – to help the experiences in all their grit and glory live on

steveportigal
2016-12-14 21:40
@lukcha one of the things I argued for in curating these stories is to not turn everything into a lesson. A natural urge I think we have is to say “I did this and I learned that”

hawk
2016-12-14 21:40
We’re at the end of the question queue so if you’re sitting on one, ask away

lukcha
2016-12-14 21:40
Are there some ethics around telling our own war stories? We obviously need to respect privacy of our participants, etc. ‘Gossip’ can be a natural and healthy way to share stories.

steveportigal
2016-12-14 21:41
Some drafts of stories had more pedagogy than storytelling – I think the stories can be sufficient. So you see a lot of them with conclusions that are “go with the flow” “carry on” – because there is no grand takeaway except the one that is there by implication,

steveportigal
2016-12-14 21:41
that we aren’t perfect as you say

steveportigal
2016-12-14 21:42
@lukcha I tell people – these are stories about the researcher not the research. I got a first draft yesterday that was basically a user research case study.

steveportigal
2016-12-14 21:42
I pushed back, that’s not really what we need to see – it’s about what happened to YOU. I think we own our own stories.

chrisgeison
2016-12-14 21:42
Re: sessions where people cry: That’s beautiful, @steveportigal—that’s real and human. It hasn’t happened for me even once, but I’ve only been doing hour-long lab sessions and only for the past 2 months. The strangest experience so far was conducting interviews the day after the US election. It was…surreal.

*So the questions:* How are you creating enough “safety” with participants that they open up like that? What questions are you asking (i.e., How are you exploring a wine label, for example, in such a way that allows people to connect so deeply)?

steveportigal
2016-12-14 21:42
I’ve mostly left it to people to self-censor – and some have been concerned about mostly their work, like not getting into trouble.

steveportigal
2016-12-14 21:43
Lou Rosenfeld refers to the no-asshole rule sometimes – how would someone feel if they saw this?

alex.lee
2016-12-14 21:43
Someone here asked if we could create more opportunities to do user interviews. In part I am guessing because field work is considered expensive relative to other testing tools. What are ways to do more of that and justify its usefulness in business?

steveportigal
2016-12-14 21:44
NZer Nick Bowmast went to his participant and got his permission to include an image from his video diary (you can see it in the book but it has to do with this participant watching a movie on his device while driving on the highway – and doing a video diary of that)

steveportigal
2016-12-14 21:44
@chrisgeison asks how the heck does that happen. Yeah, it’s a good question.

h_bookforest
2016-12-14 21:44
I am not sure how to put this into words, but will give it a go. My sense on reading the paragraph about slide decks and everyone being so caught up in the product or service is that at that point the users have got a bit lost. How fair or inaccurate an impression is that to have? It’s like the sense I have sometimes when reading an article or published statement, and a thought intrudes along the lines of “Hm. They have been reading and believing their own press notices and advertising again!” It’s that slightly glossy unreality thing…… Sorry for finding the wrong words.

mallorychacon
2016-12-14 21:45
@steveportigal I’m curious about your take on performing quant then qual or vice versa. In what cases do you recommend one approach over the other, or one order of approach over the other?

steveportigal
2016-12-14 21:45
It’s easier to build rapport when you are in their home. You are on their territory, you can have a wide ranging conversation, you can pick up on their cues.

steveportigal
2016-12-14 21:45
I think of interviews often – and this again will sound pretentious

steveportigal
2016-12-14 21:45
but that is a theme of mine

steveportigal
2016-12-14 21:46
anyway I think about Picasso who said the sculpture is in the marble and his job is to bring it out.

steveportigal
2016-12-14 21:46
You can conduct interviews like that, what is this person’s story, listening for the things that they want to tell you and following up following up following up – such that no interview looks like any other from a Questions point of view.

steveportigal
2016-12-14 21:46
They explore the same territory but in a totally different way.

steveportigal
2016-12-14 21:47
So you ask about wine. You hear about their lifestyle, and how they are socializing around wine, and they tell you what it means to them

steveportigal
2016-12-14 21:47
and you get another story and you ask about it, and you let them tell you, you guide and listen and followup

steveportigal
2016-12-14 21:47
And then when you bring bottles out to look at and “evaluate” you get a real personal story.

steveportigal
2016-12-14 21:47
I was damn surprised.

steveportigal
2016-12-14 21:48
Although I had a week earlier this year where we were exploring – honestly it was this big picture – how people find meaning in relationships with products and their passions

steveportigal
2016-12-14 21:48
and it created a place where we could dig into almost anything – and I started to see points of trigger where I could tread that was VERY personal -and sometimes I had to stop myself because it became tempting for maybe the wrong reasons – like I COULD go there

steveportigal
2016-12-14 21:48
but maybe I didn’t NEED to go there

steveportigal
2016-12-14 21:49
(that being said lots of things come up in an interview that I just let go because it’s not my business to ask)

coreyux
2016-12-14 21:49
it sounds like therapy…

hawk
2016-12-14 21:49
haha

steveportigal
2016-12-14 21:50
@coreyux actually THAT is also in the book – a story where someone started co-opting the session and it was clear they had a need I wasn’t qualified to address

steveportigal
2016-12-14 21:50
nor was it appropriate

steveportigal
2016-12-14 21:50
and I shut it down

steveportigal
2016-12-14 21:50
SUPER needy person with some raw issues

lukcha
2016-12-14 21:50
Some people don’t get a chance to talk at that level very often with a dedicated listener.

steveportigal
2016-12-14 21:51
@mallorychacon I’ve seen great examples where quant informs qual and vice versa. I’m also seeing more teams set up where they are TOGETHER. Where data people and qual people and others work together

chrisgeison
2016-12-14 21:51
@steveportigal: *Thank you.* My background is in clinical counseling. It’s been interesting to make the transition…

coreyux
2016-12-14 21:51
oh wow…

steveportigal
2016-12-14 21:51
I think this comes up with Alex Wright from Etsy and Gregg Bernstein then at Mailchimp

steveportigal
2016-12-14 21:52
@lukcha yes, one of the reasons I think user research can work is because we give the gift of listening and most of us could use more of it than we get and in some cases as you suggest

steveportigal
2016-12-14 21:52
it’s really extreme

steveportigal
2016-12-14 21:52
@alex.lee I think demonstrating value by demonstrating what is learned in one method versus another

steveportigal
2016-12-14 21:52
https://www.nngroup.com/articles/which-ux-research-methods/ is a great piece by Christian Rohrer that

steveportigal
2016-12-14 21:53
looks at various methods and what they are good at uncovering.

steveportigal
2016-12-14 21:53
Having a vocabulary to propose methods based on what is known and what isn’t known and what hypotheses you have

coreyux
2016-12-14 21:53
i was doing an interview of another ux designer about job seeking…and i could feel so much sadness… then i became her mentor lol

crystal
2016-12-14 21:53
@steveportigal Do you think that approaching the interview with some small vulnerability of your own allows them to be more vulnerable as well and open up and give more insight? And have you found that added insight to often add value to the research?

coreyux
2016-12-14 21:54
but that sadness stuck with me, that empathy… or whatever you call it

steveportigal
2016-12-14 21:54
@h_bookforest I think yes, the users can get lost. We do research CHECK

steveportigal
2016-12-14 21:54
(I should use the emoji I guess :heavy_check_mark:)

steveportigal
2016-12-14 21:54
We got the DATA

steveportigal
2016-12-14 21:54
MAKE THE RECOMMENDATIONS – make sure it’s only three

steveportigal
2016-12-14 21:54
make it actionable

steveportigal
2016-12-14 21:54
so research I think has taken off but it isn’t always at the level that it could be

h_bookforest
2016-12-14 21:55
@steveportigal Thank you.

steveportigal
2016-12-14 21:55
@crystal our own vulnerability – that’s fascinating and I don’t have a clear take on that. I think a shallow reading

steveportigal
2016-12-14 21:55
says being vulnerable means sharing about ourselves and I am mostly against doing that most of the time for most researchers

alex.lee
2016-12-14 21:55
@steveportigal: that is super helpful!

steveportigal
2016-12-14 21:56
but it makes me ponder what’s a richer more nuanced sense of what our own vulnerability is, if by being still

steveportigal
2016-12-14 21:56
present,

steveportigal
2016-12-14 21:56
focused

steveportigal
2016-12-14 21:56
listening

steveportigal
2016-12-14 21:56
and not needing to make it about us

steveportigal
2016-12-14 21:56
we might convey some vulnerability

steveportigal
2016-12-14 21:56
I think it’s meeting people where they are, accepting them where they are

steveportigal
2016-12-14 21:56
and not putting ourself into it.

steveportigal
2016-12-14 21:57
which – to your point – feels DAMN risky to a lot of people. Set aside your agenda and listen

steveportigal
2016-12-14 21:57
but do so in a productive effective you’re-on-the-job way.

steveportigal
2016-12-14 21:57
so you are balancing different forces and risks.

steveportigal
2016-12-14 21:57
I dunno, is that ‘vulnerable’

hawk
2016-12-14 21:58
And that’s probably a good note to wrap up on

lukcha
2016-12-14 21:58
That’s a great podcast episode

rohanirvine
2016-12-14 21:58
@steveportigal How have research practices been changing over the last couple of years? Have there been steps in the right and wrong direction, and what are they?

steveportigal
2016-12-14 21:58
One more one more

hawk
2016-12-14 21:59
haha. thanks so much for your time @steveportigal – you rocked it. I didn’t even notice the car sickness.

steveportigal
2016-12-14 21:59
If you want to contribute a story – get in touch

hawk
2016-12-14 21:59
sure

steveportigal
2016-12-14 21:59
that’s high praise, I didn’t vomit when you were talking.

rohanirvine
2016-12-14 21:59
Thanks so much for your time!

crystal
2016-12-14 21:59
@steveportigal yes that was the type of small vulnerability I had in mind. I was not intending to mean personal details of our own

chrisgeison
2016-12-14 21:59
@steveportigal: *Thank you!*

maadonna
2016-12-14 21:59
Thanks Steve!!

lukcha
2016-12-14 21:59
haha

chrisoliver
2016-12-14 21:59
Thanks!

cindy.mccracken
2016-12-14 21:59
That was great- thanks!

h_bookforest
2016-12-14 21:59
Thank you!

hawk
2016-12-14 21:59
Thanks to you all for joining us as well.

alex.lee
2016-12-14 21:59
Thank you

steveportigal
2016-12-14 22:00
thanks everyone

crystal
2016-12-14 22:00
Another great season. Thanks @hawk and @steveportigal !

hawk
2016-12-14 22:00
I’ll post a transcript of the session up on our website tomorrow

mallorychacon
2016-12-14 22:01
@steveportigal thank you!

mallorychacon
2016-12-14 23:10
Thank you @hawk!

hawk
2016-12-14 23:11
Any time. That was the last session for the year so we’ll see you all back after the holidays. :)

The post Transcript: Ask the UXperts: Learning from the comic, tragic & astonishing moments in user research — with Steve Portigal appeared first on UX Mastery.
