Amanda Stockwell – UX Mastery
https://uxmastery.com – The online learning community for human-centred designers

Choosing the Right UX Research Method
https://uxmastery.com/choosing-right-ux-research-method/
Published 26 January 2018

As more and more organisations become focused on creating great experiences, more teams are being tasked with conducting research to inform and validate user experience objectives.

UX research can be extremely helpful in crafting a product strategy and ensuring that the solutions built fit users’ needs, but it can be hard to know how to get started. This article will show you how to set your research objectives and choose the right method so that you can uncover the information you need.

When to do research

The first thing to know is that there is never a bad time to do research. While there are many models and complicated diagrams to describe how products get built, essentially, you’re always in one of three core phases: conceptualising something brand new, in the middle of designing and/or building something, or assessing something that’s already been built.

There’s plenty to learn in each of those phases. If you’re just starting out, you need to focus on understanding your potential users and their context and needs so that you can understand your best opportunities to serve them. In other words, you’re trying to figure out what problems to solve and for whom. This is often called generative or formative research.

Research can add value at any stage, whether that’s conceptualising, designing or refining.

Once you’re actively building something, you’ll shift your focus to analysing the solutions that you’re coming up with, and making sure that they address the needs of your users. You’ll want to assess both conceptual fit and the quality of specific interactions. We usually call this evaluative research.

When you have a live product or service, you’ll want to continue to assess how well you’re serving people’s needs, but you’ll also want to use research to discover how people change and how you can continue to provide value. At this point, you’ll be doing a mix of the generative work that generally happens in the conceptual phase and evaluative work.

There is no cut-and-dried guide of exactly what methods to employ when, but there should never be a time that you can’t find an open question to investigate.

Determine your specific research objectives

At any given time, your team might have dozens of open questions that you could explore. I recommend keeping a master list of outstanding open questions to keep track of possible research activities, but focusing on answering just one open question at a time. The core goal of a study will determine which method you ultimately use.

If you need help coming up with research goals, consider things like:

  • the stage of the project you’re in
  • what information you already know about your users, their context, and needs
  • what your business goals are
  • what solutions already exist or have been proposed
  • where you think there are existing issues

The questions might be large and very open, like “who are our users?” or more targeted things like “who uses feature x most?” or “what colour should this button be?” Those are all valid things to explore, but require totally different research methods, so it’s good to be explicit.

Once you’ve identified open questions, you and the team can prioritise which things would be riskiest to get wrong, and therefore what you should investigate first. This might be affected by what project phase you’re in or what is currently going on in the team. For instance, if you’re in the conceptual phase of a new app and don’t have a clear understanding of your potential users’ daily workflows yet, you’d want to prioritise that before assessing any particular solutions.

From your general list of open questions, specify individual objectives to investigate. For instance, rather than saying that you want to assess the usability of an entire onboarding workflow, you might break down the open questions into individual items, like, “Can visitors find the pricing page?” and “Do potential customers understand the pricing tiers?”

You can usually combine multiple goals into a single round of research, but only if the methods align. For instance, you could explore many different hypotheses about a proposed solution in a single usability test session. Know that you’ll need to do several rounds of different types of research to get everything answered and that is totally OK.

Looking at data types

After determining your research goal, it’s time to start looking at the kind of information you need to answer your questions.

There are two main types of data: quantitative and qualitative.

Quantitative data

Quantitative data measures specific things you can count, like how many times a link was clicked or what percentage of people completed a step. Quantitative data is unambiguous in that you can’t argue with what was measured. However, you need to understand the context to interpret the results.

Quantitative data helps us understand questions like: how much, how many and how often?

For instance, you could measure how frequently an item is purchased. The number of sales is fixed and unambiguous, but whether 100 sales is good or bad depends on a lot of things. Quantitative research helps us understand what’s happening, and answers questions like how much, how many and how often. It tends to need a large sample size so that you can feel confident about your results.

Common UX research methods that can provide quantitative data are surveys, A/B or multivariate tests, click tests, eye tracking studies, and card sorts.
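To get a feel for why quantitative studies need large samples, it helps to look at the maths behind a simple proportion estimate. The sketch below is purely illustrative (it is not something the article prescribes): it computes the minimum sample size needed to report a metric like a task completion rate within a given margin of error, using the standard formula for a proportion.

```python
import math

def sample_size(margin_of_error, z=1.96, p=0.5):
    """Minimum sample size to estimate a proportion (e.g. a task
    completion rate) within +/- margin_of_error.

    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative assumption, since it maximises p * (1 - p).
    """
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

# Reporting a completion rate within +/-5 percentage points
# at 95% confidence takes roughly 385 responses:
n = sample_size(0.05)
```

Tightening the margin to ±2 percentage points pushes the requirement to around 2,400 responses, which is why quantitative methods such as surveys and A/B tests are usually run at scale.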

Qualitative data

Qualitative data is basically every other sort of information that you can collect but not necessarily measure. These pieces of information tend to provide descriptions and contexts, and are often used to describe why things are happening.

Qualitative data needs to be interpreted by the researcher and the team and doesn’t have a precise, indisputable outcome. For instance, you might hear people talk about valuing certain traits and note that as a key takeaway, but you can’t numerically measure or compare different participants’ values. You don’t need to include nearly as many sessions or participants in a qualitative study.

Common UX research methods that can provide qualitative data are usability tests, interviews, diary studies, focus groups, and participatory design sessions.

Some methods can produce multiple types of data. For instance, in a usability study, you might measure things like how long it took someone to complete a task, which is quantitative data, but also make observations about what frustrated them, which is qualitative data. In general, quantitative data will help you understand what is going on, and qualitative data will give you more context about why things are happening and how to move forward or serve better.

Behavioural vs attitudinal data

There is also a distinction between the types of research where you observe people directly to see what they do, and the type where you ask for people’s opinions.

Any direct-observation method is known as behavioural research. Ethnographic studies, usability tests, A/B tests, and eye tracking are all examples of methods that measure actions. Behavioural research is often thought of as the holy grail of UX research, because we know that people are exceptionally bad at predicting and accurately representing their own behaviour. Direct observation can give you the most authentic sense of what people really do and where they get stuck.

By contrast, attitudinal research like surveys, interviews, and focus groups asks for self-reported information from participants. These methods can be helpful to understand stated beliefs, expectations, and perceptions. For instance, you might interview users and find that they all wish they could integrate your tool with another tool they use, which isn’t necessarily an insight you’d glean from observing them perform tasks in your tool.

It’s also common to both observe behaviour and ask for self-reported feedback within a single session, meaning that you can get both sorts of data, which is likely to be useful regardless of your open question.

Other considerations

Even after you’ve chosen a specific research method, there are a few more things you may need to consider when planning your study.

Where to conduct

It’s often ideal to perform research in the context where a person would normally use your product, so you can see how your product fits into their life and observe things that might affect their usage, like interruptions or specific conditions.

For instance, if you’re working on a traffic prediction application, it might be really important to have people test the app while on their commute at rush hour rather than sitting in a lab in the middle of the day. I recently did some work for employees of a cruise line, and there would have been no way to know how the app really behaved until we were out at sea with satellite internet and rolling waves!

Context for research is important. If you can, get as close as possible to a real scenario of when someone would use your product.

You might have the opportunity to bring someone to a lab setting, meet them in a neutral location, or even intercept them in a public setting, like a coffee shop.

You may also decide to conduct sessions remotely, meaning that you and the participant are not in the same location. This can be especially useful if you need to reach a broad set of users and don’t have a travel budget, or have an especially quick turnaround time.

There is no absolute right or wrong answer about where the sessions should occur, but it’s important to think through how the location might affect the quality of your research and adjust as much as you can.

Moderation

Regardless of where the session takes place, many methods are traditionally moderated, meaning that a researcher is present during the session to lead the conversation, set tasks, and dig deeper into interesting conversation points. You tend to get the richest, deepest data with moderated studies, but they can be time-consuming and require a good deal of practice to run effectively.

You can also collect data when you aren’t present, which is known as unmoderated research. There are traditional unmoderated methods like surveys, and variations of traditional methods, like usability tests, where you set tasks for users to perform on their own and ask them to record their screen and voice.

Unmoderated research takes a bit more careful planning because you need to be especially clear and conscious of asking neutral questions, but you can often conduct it faster, cheaper, and with a broader audience than traditionally moderated methods. Whenever you do unmoderated research, I strongly suggest doing a pilot round and getting feedback from teammates to ensure that instructions are clear.

Research methods

Once you’ve thought through what stage of the product you’re in, what your key research goals are, what kind of data you need to collect to answer your questions, and other considerations, you can pinpoint a method that will serve your needs. I’ll go through a list of common research methods and their most common usages.

Usability tests: consist of asking a participant to conduct common tasks within a system or prototype and share their thoughts as they do so. A researcher often observes and asks follow-up questions.

Common usages: Evaluating how well a solution works and identifying areas to improve.

UX interview: a conversation between a researcher and a participant, where the researcher is usually looking to dig deep into a particular topic. The participant can be a potential end user, a business stakeholder or a teammate.

Common usages: Learning basics of people’s needs, wants, areas of concern, pain points, motivations, and initial reactions.

Focus groups: similar to interviews, but occur with multiple participants and one researcher. Moderators need to be aware of potential group dynamics dominating the conversation, and these sessions tend to include more divergent and convergent activities to draw out each individual’s viewpoints.

Common usages: Similar to interviews in learning basics of people’s needs, wants, areas of concern, pain points, motivations, and initial reactions. May also be used to understand social dynamics of a group.

Surveys: lists of questions that can be used to gather any type of self-reported (attitudinal) data.

Common usages: Defining or verifying the scale of an attitude or outlook across a larger group.

Diary study: a longitudinal method that asks participants to document their activities, interactions or attitudes over a set period of time. For instance, you might ask someone to answer three questions about the apps they use while they commute every day.

Common usages: Understanding the details of how people use something in the context of their real life.

Card sorts: a way to help you see how people group and categorise information. You can either provide existing categories and have users sort the elements into those groupings, or participants can create their own.

Common usages: Help inform information architecture and navigation structures.

Tree tests: the opposite of card sorts, wherein you provide participants with a proposed structure and ask them to find individual elements within the structure.

Common usages: Help assess a proposed navigation and information architecture structure.

A/B testing: providing different solutions to audiences and measuring their actions to see which better meets your goals.

Common usages: Assess which of two solutions performs better.
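As an illustration of the kind of quantitative comparison an A/B test produces, here is a minimal two-proportion z-test in Python. This is a standard significance test rather than anything specific to the article, and the numbers in the example are made up for illustration.

```python
import math

def ab_test(conversions_a, n_a, conversions_b, n_b):
    """Two-sided z-test for whether variants A and B have different
    conversion rates. Returns (z, p_value)."""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    # Pooled rate under the null hypothesis that A and B perform equally
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Suppose 120 of 1,000 visitors clicked variant A and 150 of 1,000 clicked B
z, p = ab_test(120, 1000, 150, 1000)
```

With these illustrative numbers the difference sits right at the edge of conventional significance (p ≈ 0.05), which is exactly why A/B tests need the large sample sizes discussed earlier: small differences take a lot of traffic to distinguish from noise.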

Christian Rohrer and Susan Farrell also have great cheat sheets on the best times to employ different UX research methods.

Wrapping up

To get the most out of UX research, you need to consider your project stage, objectives, the type of data that will answer your questions, and where you want to conduct your research.

As with most things in UX, there is no one right answer for every situation, but after reading this article you’re well on your way to successfully conducting UX research.

Want to dive deeper into UX research methods? Try Amanda’s latest course, Recruiting and Screening UX Research Participants on Skillshare with 2 months’ free access.  

Getting Started with Popular Guerrilla UX Research Methods
https://uxmastery.com/popular-guerrilla-ux-research-methods/
Published 3 November 2017

In my last article, I talked about how you can “guerrilla-ise” traditional UX research methods to fit into a short timeline, and when it makes the most sense to use them.

This time, I’ll walk you through some of the most popular guerrilla UX research methods: live intercepts, remote and unmoderated studies, and using low fidelity prototypes.

I’ll cover pros, cons and tips to make sure you make the most of your guerrilla research sessions.

Conducting research in public

Often the go-to guerrilla technique is to skip the formal participant recruitment process and ask members of the public to take part in your research sessions. Live intercepts are often used as shortened versions of usability tests or interviews.

Getting started

Setting up is easy—all you need is a public space where you can start asking people for a few minutes to give you feedback. A cafe or shopping centre usually works well. 

This is a great way to get lots of feedback quickly, but approaching people takes a little courage and getting used to. 

I find it helps to put up a sign that publicises the incentive you’re offering, and if possible, identifying information like a company logo. This small bit of credibility makes people feel more comfortable.

Make sure you have a script prepared for approaching people. You don’t need to stick to it every time, but make sure you mention where you work or who your client is, what your goal is, their time commitment and their compensation.

Try something like:

Hi, I’m [firstname] and I’m working for [x company] today. We’re trying to get some feedback on [our new feature]. If you have about [x minutes] to chat, I can offer you a [gift card/incentive].

Be sure to be friendly, but not pushy. Give people the chance to opt out or come back later. Pro tip: I always take a piece of paper with time slots printed so that people can sign up for a later time.  

The location you choose has a major impact on how many people you talk to and the quality of your results. Here are some tips for picking a good spot:

  • Pick a public place where there will be a high volume of people and make sure you get permission to be there. Aim to be visible but not in the way. A table next to the entrance works well.
  • Try to pick a place that you think your target audience will be. For instance, if you’re interested in talking to lawyers, pick a coffee shop near a big law office.
  • Look for stable wi-fi and plentiful wall plugs.
  • Regardless of where you choose, stake out the location ahead of the research session so you can plan accordingly.

A few limitations

There’s no doubt that intercepting people in public is a great way to get a high volume of participants quickly. Talking to the general population, however, is best reserved for situations when you have a product or service that doesn’t require specific knowledge, contexts, or outlooks.

If you’re doing a usability test, you could argue that whatever you build should be easy enough for anyone to figure out, so you can still get feedback. Just be aware that you may miss out on valuable insights that are specific to your target audience.

Let’s say you’re working on a piece of tax software. The risk is that you end up talking to someone whose spouse handles all the finances, or miss a labelling error that only tax accountants would know to report.

To avoid this, I always recommend asking a few identifying questions at the beginning of each session so you can analyse results appropriately. You don’t always need to screen people out, but you can choose how to prioritise their feedback in the analysis stage.

Context also matters. If you usability test a rideshare app on a laptop in a coffee shop, but most people will use the app on their phones on a crowded street, you may get misleading feedback.

Watch for bias when user-testing in a cafe. Photo via Unsplash

You should also be aware that you may run into bias by intercepting all your participants from one location. Think about it: the people that are visiting an upscale coffee shop in a business centre on a weekday are likely to be pretty different than the people who are stopping at a gas station for coffee in the middle of the night. Again, try to choose your intercept location based on your target audience and consider going to a few locations to get variety.

Keep in mind that only a certain type of person is going to respond positively and take the time to give you feedback. Most people will be caught off guard, and may be suspicious or unsure what to expect. You won’t have much time to give participants context or build rapport, so be especially conscious of making them feel comfortable.

Some final tips:

  • Set expectations clearly. Tell participants right away how long you’ll talk to them and how you’ll compensate them for their time. Be clear about what questions you’ll ask or tasks you’ll present and what they need to do.
  • Pay extra attention to participant comfort. Give them the option to leave at any time and put extra emphasis on the fact that you’re there to gather feedback, not judge them or their abilities. Try to record the sessions rather than taking notes the whole time, so you can make eye contact and read body language.
  • Remember standard rules of research: don’t lead participants, get comfortable with silence, and ask questions that participants can easily answer. Be extra careful asking about sensitive topics such as health or money. In fact, I don’t recommend intercepting people if you need to talk about very sensitive topics.

Remote and unmoderated studies

Taking the researcher out of the session is another proven way to reduce the time and cost of research. This is achieved through running remote and unmoderated research sessions.

Getting started

Traditional research assumes that a researcher is directly conducting sessions with participants, or moderating the sessions. Unmoderated research just means that the participants respond without the researcher present. Common methods include diary studies, surveys or trying out predetermined tasks in a prototype.

The core benefit is that people can participate simultaneously so you can collect many responses in a short amount of time. It’s often easier to recruit too, because there are no geographic limitations and participants don’t have to be available at a specific time.

You plan unmoderated research much like you do moderated research: set your research goal, select an appropriate method to answer your open questions, determine participants, and craft your research plan. The difference in unmoderated sessions is that you need to be especially careful about setting expectations and providing clear directions, because you won’t be there during the session. Trial runs are especially important in unmoderated sessions to catch unclear wording and confusing tasks.

You can also conduct remote research, which means that you’re not physically in the same place as your participant. You can use video conferencing tools to see each other’s faces and share screens. Remote sessions are planned in a similar vein to in-person sessions, but you can often reach a broader set of people when there are no geographic limits.

A few limitations

Any time you conduct sessions remotely or choose unmoderated methods, you run the risk of missing out on observing context or reading body language. With unmoderated sessions, you can’t dig deeper when someone has an interesting piece of feedback. That’s still better than not collecting data, but you should take it into consideration when you’re analysing your data and drawing conclusions.

Low fidelity prototypes

If you want to invest less effort upfront, and iterate quickly, low fidelity prototypes are a good option.

In this scenario, you forego fully functional prototypes or live sites/applications and instead use digitally linked wireframes or static images.

You can even use paper prototypes, where you sketch a screen on paper and simulate the interaction by switching out which piece of paper is shown.

Getting started

Low fidelity prototypes, especially paper, are less time consuming to make than digital prototypes, which makes them inexpensive to produce and easy to iterate. This sort of rapid cycling is especially useful when you’re in the very early conceptual stages and trying to sort out gut reactions.

You run a usability test with a low fidelity prototype just like you would run any other usability test. You come up with tasks and scenarios that cover your key questions, recruit participants, and observe as people perform those tasks.

A few limitations

For this guerrilla technique, you have to be especially careful to ask participants to think aloud and not lead or bias them, because there can be a huge gap in their expectations and yours. For paper prototypes in particular, a moderator must be present to simulate the interactions. I recommend in-person sessions for any sort of test with low fidelity prototypes.

Keep in mind that you can get false feedback from low-fidelity wireframe testing. It can be difficult for participants to imagine what would really happen, and they may get stuck on particular elements or give falsely positive feedback based on what they imagine. Take this into consideration when analysing the results, and be sure that you conduct multiple rounds of iterative research and include high-fidelity prototypes or full beta tests in your long-term research plan.

Wrapping up

When in doubt about the results of any guerrilla research test, I recommend running another study to see if you get the same results.

You can execute the exact same test plan, or even try to answer the same question with a complementary method. If you arrive at similar conclusions, you can feel more confident, and if not, you’ll know that you need to keep digging. When you’re researching guerrilla style, you can always find more time to head back to the jungle for more sessions.

Take a look at my article linked below for tips on reducing scope, and the best times to use guerilla methods. Happy researching!


Going Guerrilla: How to Fit UX Research into Any Timeframe
https://uxmastery.com/guerrilla-ux-research/
Published 19 October 2017

As more and more companies realise the value of UX research, “guerrilla” methods have become a popular way to squeeze research into limited budgets and short timelines. Those of us working in agile sprints often have even less dedicated time for research.

When I say guerrilla research, I don’t mean go bananas or conduct jungle warfare research. Guerrilla research is really just a way to say that you’ve taken a regular UX research method and altered it to reduce time and cost.

To do so, you often end up reducing scope and/or rigour. The key to successful guerrilla research is to strike the right balance to hit time and budget goals, but still be rigorous enough to gather valuable feedback.

Read on for a framework for reducing any research method and an overview of the best time to use guerrilla tactics.

If you’re looking for practical advice on using guerrilla research methods, take a look at my second article: Getting Started with Popular Guerrilla UX Research Methods.

Crafting your guerrilla plan

You can “guerrilla-ise” any UX research method, and there’s almost never one single correct way to do so. That said, qualitative techniques like usability tests and interviews lend themselves especially well to guerrilla-isation.

The easiest way I’ve found to plan guerrilla research is to start by determining how you’d do the research if you had your desired time and budget. Then work backwards to find the elements you can toggle to make it work for your situation. The first place I look to cut is the scope of the research question.

Let’s say your team is working on a new healthcare application and wants to assess the usability of the entire onboarding process. That’s an excellent goal, but pretty broad. Perhaps you could focus your study just on the first few steps of the signup process, but not the follow-up tutorial, or vice versa.

Once you’ve narrowed down your key research goals, you can start looking at what sorts of methods will answer your questions. The process for choosing a research method is the same, regardless of whether you’re trying to go guerrilla or not. For a great summary of choosing a method, take a look at Christian Rohrer’s excellent summary on NNG’s blog or this UX planet article.

Besides narrowing the scope of your research goal, think about the details that make up a study. This includes questions such as:

  • What do you need to build or demonstrate?
  • How many sessions or participants do you need?
  • How will you recruit them?
  • What’s the context of the studies?

Then you can take a look at all those elements, identify where your biggest time and money costs are, and prioritise elements to shift.

Reducing scope

Let’s say, for example, that you determine the ideal way to test the onboarding flow of your new app is to conduct 10 one-hour usability sessions of the fully functional prototype. The tests will take place in a lab and you’ll have a participant-recruitment firm find participants that represent your main persona.

There are many ways you could shift to reduce time and costs in this example.

You could:

  • Run test sessions remotely instead of in a lab
  • Reduce the number of sessions overall
  • Run unmoderated studies
  • Build a simpler wireframe or paper prototype
  • Recruit participants on social media
  • Intercept people in a public location
  • Or a combination of these methods

To decide what to alter, look at what will have the biggest impact on time, budget, and validity of your results.

For example, if working with a recruiting firm will be time consuming and expensive, you’ll want to look for alternative ways to recruit. Intercepting people in public is what many of us envision when we think of guerrilla research. You could do that, or you could also find participants on social media or live-intercept them from a site or web app.

You may even decide to combine multiple guerrilla-ising techniques, such as conducting fewer sessions and doing so remotely, or showing a simple prototype to people you intercept.

Just remember, you don’t want to reduce time and effort so much that you bias your results. For instance, if you’re doing shorter sessions or recruiting informally, you may want to keep the same overall number of sessions so you have a reasonable sample size.

Best uses for guerrilla research

So, when is the best scenario to use guerrilla tactics in your research?

  • You have a general consumer-facing product which requires no previous experience or specialty knowledge OR you can easily recruit your target participants
  • You want to gather general first-impressions and see if people understand your product’s value
  • You want to see if people can perform very specific tasks without prior knowledge
  • You can get some value out of the sessions and the alternative is no research at all

And when should you avoid guerrilla methods?

  • When you’ll be researching sensitive topics such as health, money, sex, or relationships
  • When you need participants to have very specific domain knowledge
  • When the context in which someone will use your product will greatly impact their usage and you can’t talk to people in context
  • When you have the time or budget to do more rigorous research!

Guerrilla research is a great way to fit investigation into any timeframe or budget. One of its real beauties is that you can conduct multiple, iterative rounds of research to ensure you’re building the right things and doing so well.

If you have the luxury of conducting more rigorous research, take advantage, but know that guerrilla research is always a better option than no research at all.

Read the next article on getting started with common guerrilla techniques.

The post Going Guerrilla: How to Fit UX Research into Any Timeframe appeared first on UX Mastery.

]]>
https://uxmastery.com/guerrilla-ux-research/feed/ 0 61304
Is Freelancing Your Next UX Career Move? https://uxmastery.com/freelancing-ux-career/ https://uxmastery.com/freelancing-ux-career/#respond Tue, 04 Jul 2017 07:26:04 +0000 http://uxmastery.com/?p=59093 Freelancing is well-suited to the work of UX professionals, with many considering the move as a next career step. They usually say they’re looking for freedom, more money, the ability to work on more interesting problems or learn new things - or even just the chance to work in their pyjamas regularly.

Here are a few important tips to consider before you quit your day job.

The post Is Freelancing Your Next UX Career Move? appeared first on UX Mastery.

]]>
I’ve been working in UX for about a decade and freelanced on and off for about five years. Last year, I started consulting full time again, abandoning the 9-5 life and making my main source of employment a series of projects from different clients.

Since then, I’ve had countless conversations with other UX professionals who are considering moving to freelancing as their next career step.

They usually say they’re looking for freedom, more money, the ability to work on more interesting problems or learn new things – or even just the chance to work in their pyjamas regularly. Since I started consulting, I’ve expanded my focus into more strategy and product work across a wide variety of industries, and have met all kinds of interesting, smart colleagues.

While I wouldn’t change a thing about my situation, I’m always cautious about encouraging others to jump into the freelance world, because it’s definitely not for everyone. Here are some things I’ve learned along the way to consider before you quit your day job.

Freelancing pros and cons

Yes, it’s true that I spend most days in yoga pants, travel frequently, and work on some pretty cool projects. But consulting isn’t all sunshine and rainbows. There are many things that are great about consulting, some things that aren’t so great, and some things that just depend on the day.

Freelancing pro: setting up your own home office. Photo by Vadim Sherbakov on Unsplash.

Schedule: One of the things I love most about consulting is being in total control of my schedule. There’s no expectation that I’m at my desk from 9 am to 5 pm. Sometimes I get in a zone and finish an entire report in one very long day and take the next morning off to let ideas percolate. Of course I have meetings and deadlines, but I can usually finagle things to work well for my clients and myself.

On the flip side, juggling your schedule can be difficult. I’m quite disciplined about getting things done, but I’ve never quite figured out a way to create a consistent schedule. Sometimes I’ve had priority work on different projects collide at the same time or I’ve said yes to a few too many things and ended up working crazy hours to get everything done. Such is a consultant’s life.

Money: I absolutely find that I make more money freelancing than when I worked in-house, even in leadership roles. I’m still experimenting with how I bill, but I tend to use value-based pricing for entire projects rather than charge hourly. I often ask clients to pay me 50% of the total upfront and the rest upon project completion, which can make managing money tricky, especially when you’re used to a consistent paycheck.

You also have to know that you’ll never be able to collect money as though you’re billing 40 hours a week, every week. You have to account for supplies, tool costs, benefits, and set aside time for administrative tasks like sending invoices, business development, and, of course, downtime. No one gives you sick time or vacation days when you freelance.

Inevitably, I’ve also run into time periods where I don’t have anything billable booked. An open schedule can be scary, so I use this time to do things like reach out to colleagues or potential clients, write articles, research new tools, try to learn something new, or catch up on administrative work. One of the benefits of this downtime is the space to learn, exploring the vast array of available online courses or just experimenting with a new method or tool.

Work environment: I mostly work from home, which means I wear what I like, can pet my dog throughout the day, don’t contend with traffic, and for better or worse, have all day access to my kitchen. All great, but it also means that sometimes the only person I see in the flesh each day is my husband.

I have to make an extra concerted effort to hang out with colleagues, so I’ve become more active in my local meetups and groups (shout out to Ladies that UX Durham – love y’all!). I also found I need to be more social during the week, even if that means going to a fitness class instead of biking solo or talking on the phone while I shop.

What services will you provide?

I’ve been focused on research and strategy for most of my career. I won’t rehash the “Should I be a unicorn?” or “Do designers need to code?” debates, but I’ll admit that when I first started freelancing, I was worried I wasn’t going to be able to find enough work without doing visual work. Turns out, I was totally wrong. Phew!

Don’t worry, you don’t need to be a unicorn to be a freelance UXer.

You absolutely don’t need to be a unicorn or try to tackle projects that aren’t your speciality. But it helps to have a broad set of experience and at least one area of deep expertise you can market and use to define your services. You can match what you’re good at and what you like to define the kind of projects you target, the projects you’re OK with taking, and what you will certainly turn down.

Clearly defining your services and interests is important because it tells other people what to turn to you for. If I know someone has great interaction design skills and tonnes of experience with financial products, I’ll suggest them anytime I see a project like that. If someone tells me they’ll do anything that comes their way, they probably won’t come to mind for any projects I know about.

How will you find clients?

This is a question that I get asked time after time, and the answer is incredibly simple in concept but hard in practice: treat your clients and colleagues as you would users and provide them with a good experience working with you.

More specifically, you have to do good work and other people have to be willing to talk about it. This can either mean that your clients are pleased with your work and will re-hire you or tell other potential clients, or that your peers in UX like your work and can refer you when they need help or can’t take something on. It really is true that a huge amount of success in consulting is based on networking and who you know, but that only works to your advantage if the people you know have had a good experience working with you. Right now, every single one of my clients is someone I’ve previously worked with or who came through a good reference.

This is where, once again, it helps to have a clearly defined, slightly unique set of skills or interests. There are tonnes of researchers and tonnes of designers, but if you’re known as a researcher who loves qualitative work and medical products, people will think of you whenever they come across that kind of project. Just be sure that you don’t define yourself too narrowly.

You can also use job boards to identify potential projects or try recruiting agencies, but I haven’t found either as fruitful as having my name passed on from a previous contact. More on finding freelance UX work here.

Are you really suited for all that?

Even if all the potential pros sound amazing to you and you have the skills and network to pull off freelancing, take a moment to reflect on your personality and soft skills.

Are you detail-oriented, organised, and willing to juggle many different client requests and manage your own schedule? Are you a natural risk-taker who can cope well with slow periods or a lack of viable work? Are you assertive enough to negotiate terms for yourself? Do you mind working by yourself a lot?

There’s a lot to think about before leaving the security of a full-time job. I love freelancing, but it’s worth carefully considering the pros and cons and your skillset and personality before taking the leap. Best wishes for whichever path you choose! 

Do you have experience or tips on freelancing for UX professionals? Leave a comment on the blog or in the forums! 

The post Is Freelancing Your Next UX Career Move? appeared first on UX Mastery.

]]>
https://uxmastery.com/freelancing-ux-career/feed/ 0 59093
Pivot or Persevere? Find Out Using Lean Experiments https://uxmastery.com/pivot-or-persevere-find-out-using-lean-experiments/ https://uxmastery.com/pivot-or-persevere-find-out-using-lean-experiments/#respond Wed, 24 May 2017 13:43:38 +0000 http://uxmastery.com/?p=54333 The Lean Startup approach is gaining popularity in organisations of all sizes, which means teams must adapt their processes. More and more, UX professionals are being asked to take on Lean experiments - which are fantastic - but differ slightly from traditional UX research. This guide will help you get the most out of your experimentation cycles and understand whether you should pivot or persevere with your MVP.

The post Pivot or Persevere? Find Out Using Lean Experiments appeared first on UX Mastery.

]]>
The Lean Startup approach is gaining popularity in organisations of all sizes, which means teams must adapt their processes. More and more, UX professionals are being asked to take on Lean experiments – which are fantastic – but differ slightly from traditional UX research.

To recap, “Lean Startup” is a business approach that calls for rapid experimentation to reduce the risks of building something new. The framework has roots in the Lean Manufacturing methodology and mirrors the scientific method. It calls for very UX-friendly processes, such as collecting iterative feedback and focusing on empirical measurement of performance indicators.

One of the core principles is to iterate through a cycle known as Build-Measure-Learn, which includes building a minimum viable product (MVP) to test, measure what happens, and then decide whether to move forward with the suggested solution (persevere) or find another (pivot).

Simple in theory. But it can be challenging to figure out what MVP to build, how to interpret the data collected and what next steps should be after completing a lean experiment. These guidelines will help you get the most out of your experimentation cycles and understand whether you should pivot or persevere.

Consider the context

The most important part of data analysis starts before you’ve gathered any data. To help you decide what type of research to do, you first need to consider where you are in the progress of your product, what information you already have, and what the biggest, riskiest open questions are.

In the conceptual stages of a totally new business or feature idea, you first need to understand enough about your potential user base and their needs to make informed hypotheses about the problems they have and how you might be able to address them. Any idea for a new thing is an assumption, and doing some generative research will help you shape and prioritise your assumptions.

The Lean Startup approach advocates starting with a process called GOOB – Getting Out Of the Building – which looks a whole lot like a condensed version of traditional ethnography and interviews. The goal is to talk to a small number of people who you think fit your target audience and understand their current needs, experience gaps, pain points, and methods for solving existing problems related to your idea.

Run these interviews just like any other UX interview and use the data to create a list of assumptions about your target users, potential problems to solve, and ways you could address those problems.  Start with a period of exploration and learning before you build anything.

Prioritising what to explore

Your list of assumptions can serve as your backlog of work. Rather than creating a list of necessary features to build, treat each item in the list as a separate hypothesis to explore and either prove or disprove. Then, prioritise the hypotheses that are the riskiest, or that would have the biggest impact if your assumption is wrong. Assumptions about what the problem is and for which people should take priority over assumptions about how to solve those problems or build any features.

Typical assumptions might look something like this:

I believe [___] set of people are facing [___] challenge.

I believe [___] solution could help address [___] problem better than my users’ current workaround.

I believe [___] solution could generate money in [___] way.

For instance, let’s say that you’re trying to create a new application to help busy parents plan meals. You’ve interviewed a dozen busy parents and have some insight that says the two biggest issues they face are deciding what to cook and finding time to buy all the ingredients/groceries. You might have a hunch about which direction to go, but your first test should be centred around figuring out which of these issues is more compelling to your users.

Setting hypotheses

The next step is to craft a precise hypothesis that will make it very easy to tell whether you’ve proved or disproved your assumption.

I like to use the following framework for creating hypotheses:

If we [do/build/provide a solution],

Then [these people],

Will [desirable outcome].

We’ll know this is true when [actionable metric].

The do, build, provide section refers to the solution. This could be as high-level as deciding which type of app to build, or as specific as the type of interaction to develop for a particular interface.

These people should represent your assumed customer archetypes, derived from your initial interviews and other data.

The desirable outcome should be something that correlates to business success, such as sending a message or ordering an item. Keep in mind that it’s easy to come up with outcomes that look good, but don’t really tell you anything. These are called vanity metrics. For instance, if I want people to make a purchase on an ecommerce site, it’s not really that helpful to know how many people decided to follow us on Facebook. Instead, focus on identifying the pieces of information that help you make a decision and that give you a true indication of business success.

The actionable metric is whatever will tell you that your investment into building this item will be worth it. Actionable metrics can be a little tricky, especially early on, but I like to try to set these metrics as the barometers of the minimum amount of success you need to prove that the investment will be worthwhile. You can look at both perceived cost of investment and perceived value to gauge this.

Let’s say you work at an ecommerce company and you’re proposing a new feature that you hope will increase last-minute additions to a cart. You could ask the development team to estimate how much effort it would take to build out the feature, then work backward from that cost to see how much the average order size would have to increase to offset the costs.

If the team estimates something would take about 5 weeks and will cost $25,000, you’ll need the change to make at least that much money in that amount of time. So then let’s say you also know that the company usually has 1,000 sales a week and the average order size is $20. That means that right now, the company makes $20,000 a week. In order to offset the $25,000 estimated development dollars over 5 weeks, the change you make would have to bring in an extra $5,000 per week. This means that your average order size would have to go up $5 to $25. All the additional money earned after the offset is additional profit for the company.
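The breakeven arithmetic above is easy to get wrong in your head, so it can help to write it down. Here is a minimal sketch using the hypothetical figures from the example ($25,000 over 5 weeks, 1,000 sales per week, $20 average order):

```python
# Break-even sketch for the hypothetical feature investment above.
dev_cost = 25_000        # estimated development cost ($)
dev_weeks = 5            # weeks to build, used as the offset window
orders_per_week = 1_000  # current weekly sales
avg_order = 20.0         # current average order size ($)

extra_per_week = dev_cost / dev_weeks               # $5,000 extra needed per week
extra_per_order = extra_per_week / orders_per_week  # $5 extra needed per order
target_avg_order = avg_order + extra_per_order      # average order must reach $25

print(extra_per_week, extra_per_order, target_avg_order)
# 5000.0 5.0 25.0
```

Plugging in your own team’s estimates turns “good enough” from a gut feeling into a concrete target you can test against.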

That was all a lot of math, and you don’t always have that much information at your fingertips, especially when you’re very early on in the product development process. You might have to just make an educated guess about what sort of number would be “good enough.” The point is to try to pick a metric that will truly help inform you about whether or not you should invest in the new change.

Sometimes it’s easier to conceptualise this as a fail condition, or the point at which it wouldn’t be worth moving forward. In other words, you can frame it as: “If we don’t make at least x% more on each order after the change, we won’t implement the full version of the feature.” Then you can work backwards to craft a testable hypothesis.

Of course, this framework can be adjusted as needed, but you need to clearly define the exact question you’re exploring and what success looks like. If you can’t come up with a clear hypothesis statement, go back and re-evaluate your assumption and narrow it down so you can run a successful experiment.

Design your experiment

Once you have a clear single question to answer and hypothesis, deciding what sort of experiment to run should be fairly straightforward.

Let’s revisit the meal planning application example. Say that you’ve decided your riskiest assumption is which of the two core problems is more compelling to users.

A hypothesis might look something like this:

If we build an app that automatically generates 5 recipe ideas per week,

Then busy parents,

Will be interested in downloading this application.

We’ll know this is true when we present them with a variety of food-related apps and they choose the recipe generation app at least 15 percentage points more often, for example, than any other choice.

Now you can focus on designing a way to test which apps a user would be most interested in using. There is no one exact way to do this. You could create fake landing pages for each potential solution and see how many people sign up for each fake product, or create ads for the different apps and see which one generates most actions. You should focus on finding the smallest thing your team can build in order to test your hypothesis – the minimally viable product.

In this case, a good MVP might be a mockup of a page with blurbs of a few different fake applications you haven’t built yet. Then you could use a click-testing tool like UsabilityHub to ask participants to choose a single app to help them with meal planning and then monitor how many clicks each concept gets. This way, you don’t even need to launch live landing pages or ad campaigns – just create the page mock-up.

Frequently used lean experiment types/MVPs include:

  • Landing page tests
  • Smoke tests such as explainer video, shadow feature, or coming soon pages
  • Concierge tests
  • Wizard of Oz tests
  • Ad tests
  • Click tests

These are just a few suggestions, and there are many more experiments you can run depending on your context and what you’re trying to learn. Use these suggestions as starting places, not step-by-step directions, for figuring out the right experiment for your team.

Analysing your results

If you’ve set a clear and concise hypothesis and run a well-designed experiment, it should be clear to see if you’ve proved or disproved your hypothesis.

Looking at the meal planning app example again, let’s say you ran the click test with 1,000 participants. You included 4 app concepts in the test, and hypothesised that concept A would be the most compelling.

If Concept A receives 702 clicks, Concept B receives 98 clicks, Concept C receives 119 clicks, and Concept D receives 81 clicks, it’s very obvious that you proved your hypothesis. You can persevere, or move forward with concept A, and then focus on testing your next set of assumptions exploring that concept. Maybe now is the time to tackle an assumption about the app’s core feature set.

On the other hand, if Concept A receives 45 clicks, Concept B receives 262 clicks, Concept C receives 112 clicks, and Concept D receives 581 clicks, you obviously disproved your hypothesis. Concept A is clearly not the most compelling concept and you should pivot away from that idea.

In this case, you also have a clear indication of the direction of your pivot – choice D is a clear winner. You could set your new assumption that concept D is a compelling direction and run another experiment to verify this assumption, perhaps by running a similar test to compare it against just one other concept or by setting up a landing page test. Or you could do more customer interviews to find out why people found that concept so compelling.

But what if Concept A receives 351 clicks, Concept B receives 298 clicks, Concept C receives 227 clicks, and Concept D receives 124 clicks? There’s no clear winner or direction. Did you set up a bad test? Are none of your concepts compelling? Or all of them? What next?

The short answer is that you don’t know. But the great thing about lean experiments is that the system is designed such that your next step should be running more experiments. In failing to find a winning direction, you succeeded in learning that your original assumption was incorrect, and you didn’t need to invest much to figure that out. You now know that you need to pivot, you just may not be sure in which direction.
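One way to make the persevere/pivot call mechanical is to encode the hypothesis’s own success criterion – the leading concept must beat every other concept’s click share by at least 15 percentage points – as a small decision rule. This is a minimal sketch (the helper name is illustrative, and the 15-point margin is just the threshold from the earlier hypothesis, not a standard):

```python
def pivot_or_persevere(clicks, margin=0.15):
    """Return the winning concept if its click share beats every other
    concept's share by at least `margin`; return None if inconclusive."""
    total = sum(clicks.values())
    shares = {concept: count / total for concept, count in clicks.items()}
    leader = max(shares, key=shares.get)
    runner_up = max(v for k, v in shares.items() if k != leader)
    return leader if shares[leader] - runner_up >= margin else None

# The three scenarios from the article:
print(pivot_or_persevere({"A": 702, "B": 98, "C": 119, "D": 81}))   # A    -> persevere with A
print(pivot_or_persevere({"A": 45, "B": 262, "C": 112, "D": 581}))  # D    -> pivot towards D
print(pivot_or_persevere({"A": 351, "B": 298, "C": 227, "D": 124})) # None -> inconclusive, test again
```

Deciding the margin up front, before you see the data, is the point: it stops you from rationalising a muddy result into the answer you were hoping for.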

Which way to pivot?

If you know that you need to pivot but are unsure what direction to take, my first suggestion is to run another related experiment to verify your initial findings.

In the food example, you could try a similar test with just 3 options and see if the outcomes change, or try running landing pages for all 4 concepts. While you don’t want to be falsely optimistic, you also want to be sure that there wasn’t something about the way you ran your test or a fluke in the data that is giving you a false impression. Since lean experiments are intentionally quick and not robust, they can sometimes lack the rigour to give you true confidence. If you have a true finding, you should be able to replicate results with another test.

If you run another test and get similarly inconclusive data or truly have no idea what direction to go next after running an experiment, try stepping away from lean experimentation and go back to exploratory research methods.

A successful pivot can be any kind of change in business and product model, such as a complete reposition to a new product or service, a single feature becoming the focus of a product, a new target user group, a change in platform or channel, or a new kind of revenue or marketing model. A structured experiment is not going to teach you what direction to go, so you need to do some broader, qualitative data gathering.

I recommend running interviews with two subsets of people. First, talk to people who love your product/service and are most often taking the option that you want, such as purchasing frequently, and find out what they love about you and why. Then, if possible, talk to the people who are not taking desired actions, to try and find out why, or what they’re looking for instead. These types of interviews will be just like any other discovery interview, and you’ll be looking for the participants to guide you to new insights that can lead to your next set of assumptions to test.

Conclusion

Lean experiments are a great way to get any organisation learning from their customers and poised to make valuable changes. Getting used to the ins and outs of setting clear hypotheses and learning whether to pivot or persevere can take some time, but luckily those of us in UX already have the skill sets to do so successfully. Go forth and experiment!

The post Pivot or Persevere? Find Out Using Lean Experiments appeared first on UX Mastery.

]]>
https://uxmastery.com/pivot-or-persevere-find-out-using-lean-experiments/feed/ 0 54333
The Complete Guide to Gift-Giving this UXmas https://uxmastery.com/the-complete-guide-to-gift-giving-this-uxmas/ https://uxmastery.com/the-complete-guide-to-gift-giving-this-uxmas/#respond Mon, 05 Dec 2016 21:00:17 +0000 http://uxmastery.com/?p=49522 Ah, the holidays. A time to reflect, connect with friends and family, and agonise over the awkwardness of exchanging gifts with colleagues and clients.

Looking for gift ideas for fellow UXers, clients or colleagues? Read on…

The post The Complete Guide to Gift-Giving this UXmas appeared first on UX Mastery.

]]>
Ah, the holidays. A time to reflect, connect with friends and family, and agonise over the awkwardness of exchanging gifts with colleagues and clients.

Just like UX job descriptions, holiday gift-giving practices vary from country to country, region to region, and even office to office. Individual beliefs, cultural practices, and potential conflicts of interest aside, I usually find that a carefully chosen holiday gift helps the people you work with feel appreciated. I’ll always err on the side of buying a little something even if I’m not sure of the protocol.

Looking for gift ideas for fellow UXers, clients or colleagues? Read on…

For the fellow UXer in your life

Whiteboard sticky notes. Yah, you read that correctly. Two UX nerd tools in one. These little guys can stick anywhere with static cling and you can write and erase as many times as necessary. Er. Mah. Gerd!

Website Decks let you hold your website architecture in your hands (Source: UX Kits)

Know a colleague in the midst of a big website redesign, or tackling information architecture challenges? A Website Deck is just what they need.

A browser sketch pad, a precision-cut steel stencil and a pencil. Gift sorted. (Source: UI Stencils)

Ever hear a fellow UXer complain that they wish they could draw better? Or that they’d like to sketch, but really, they can’t even draw straight lines? Problem solved with these UI stencils.

The Pantone Artist and Writer’s Notebook has a different colour chip on each page. Fancy! (Source: Amazon)

How about a little sketching inspiration? This Pantone notebook has unlined pages and colour chips to add excitement. These sketchpads have browser outlines and grids for sketching and paper prototyping, and this Moleskine converts hand-drawn sketches into digital files. No. Excuses.

Y’all. The future is now – we even have 3D printing pens. Maybe this should be a gift for yourself. Or, ya know, your favourite UXmas contributor (hint, hint).

For the consultant or constant traveller, consider this foldable silicone keyboard. Bonus: it’s waterproof, so it’s immune to coffee spills.

Know an aspiring UX Unicorn? Or maybe you need to transform your team into a set of magical generalists. You can’t teach them research and design and coding and everything else UX-related instantly, but they can get a taste of the unicorn life with a handy headband.

The Build-On Brick Mug. It’s a coffee mug and construction set all in one! (Source: Thinkgeek)

Perhaps you know someone who is a proud specialist (a narwhal, if you’re familiar) and wants to show off their pride? In that case, I recommend this awesome Narwhal laptop sticker, pin, or cute mug.

For clients or colleagues who don’t realise how much they need UX

Move design conversations into the realm of facts and evidence with a set of UX Myths posters. (Source: Behance)

If there’s someone who you’d like to appreciate the full spectrum of user experience specialties, understand the importance of research, or who could use a beautiful reminder that, in fact, you are not your user, check out these posters of popular UX myths. You need to download and get them printed, so leave a little extra time – this one’s not for the last-minute shopper.

The Brand Deck is a tool that helps you figure out who you are. (Source: branding.cards)

Have a client who has a hard time describing their business goals in terms other than, “Make it pop?” Suggest the brand card exercise and give them these brand cards to get started. Order the NSFW deck at your own peril.

For a more general creative exercise, how about the oblique strategy cards? These cryptic cards were originally designed as a way to break musicians out of creative block. There are no real rules, other than that you are asked to view whatever you’re currently working on through the lens of the cards. It’s not for everyone, but can be a fun exercise with the right crowd.

Harness the power of checkboxes for effective communication with Knock Knock WTF Nifty Notes (Source: Amazon)

And if, after all those fun gifts, you still have clients who argue that research is a waste of time, maybe just write a note to your office pal on these WTF notes.

For the product team you need to convince to work with you or other random colleagues, clients, and associates

Handcrafted, 3D printed cookie cutters let you read your food before eating it. (Source: Etsy)

Don’t know what to get? There’s always snacks and booze. Fruitcake and prepackaged snack boxes are clichéd gifts for a reason, but make sure you personalise food and drinks so they don’t seem quite so cookie cutter. (Although these font cookie cutters would be pretty rad if you know of someone who likes to bake.)

Coffee lover? A coffee subscription or a coffee mug that doubles as a desk toy should do the trick.

If you’re really unsure, pick out a regional treat. I live in North Carolina, so my go-to gift is usually barbeque sauce. And if you’re looking at snacks, I’m a big fan of this small-batch popcorn, available in dozens of sweet and savoury flavours. Bonus: they’re from my tiny hometown.

In a pinch, you could always buy a bottle of quality wine or spirits. There are some exceptions, but most people appreciate a drink or two around the holidays, and if not they can easily regift.

I hope that helps ease the office holiday pain. Happy shopping!

Looking for more #UXmas cheer? We’re counting down the days to Christmas with a digital advent calendar. Join in the fun at uxmas.com or follow along on Twitter @merryuxmas for a daily UXmas gift.

The post The Complete Guide to Gift-Giving this UXmas appeared first on UX Mastery.

]]>
https://uxmastery.com/the-complete-guide-to-gift-giving-this-uxmas/feed/ 0 49522
Why people participate in UX research (and why the reasons matter) https://uxmastery.com/why-people-participate-in-ux-research/ https://uxmastery.com/why-people-participate-in-ux-research/#comments Mon, 31 Oct 2016 12:44:51 +0000 http://uxmastery.com/?p=48647 Finding and scheduling research participants is one of the biggest logistical challenges of UX research. Not to mention then getting those participants to fully engage in research activities. But what about the motivations behind why people take part. How does this affect research results? And what can you do about it?

The post Why people participate in UX research (and why the reasons matter) appeared first on UX Mastery.

]]>
Finding and scheduling suitable research participants is one of the biggest logistical challenges of UX research. Not to mention then getting those participants to fully engage in research activities. 

There have been many articles written about finding UX participants and ensuring they are at least representative of your users. But I’m yet to find much good discussion about the motivations for participants to take part in our research, and how that affects their participation and the research results. 

Understanding the underlying contexts, motivations, and biases people bring to a study helps you plan sessions and interpret results in the most neutral way possible.

There are many exceptions, but the most common ways to find UX research participants are to reach out to existing customers or leads, or to use the panels of UX tools like usertesting.com or of recruitment agencies. Even if you write a screener and recruit for a well-defined persona, each source attracts people with different motivations, which can lead to varied responses to research activities.

Let’s look at each main recruiting source and some of the pros, cons, and things to be aware of while crafting your research plans.

Existing Users

People who already have a relationship with your brand can’t help but bring their preexisting impression of the company – whether positive or negative – to research sessions. Their overarching perception of your brand will sway their impressions of the product you’re investigating.

This is called the halo effect. If you generally like a brand, you’ll be primed to like everything about it. If you dislike the brand, you’ll be primed to think more negatively about every aspect you see.

Let’s say, for example, that you’ve always wanted a BMW, and hold the company in high regard. You get brought in to test a new navigation system and have trouble entering your address.

If you’ve always wanted to drive one of these, you might have trouble giving unbiased feedback.

Your first thought may not be that the system has a usability problem. Without even realising it, you might blame yourself, thinking you made a mistake, or write it off as a quirk of the prototype.

The information in front of you doesn’t match your previous expectations (a phenomenon known as cognitive dissonance). So you assign the trouble elsewhere, downplay the importance of the issue, or focus your attention on the aspects of the experience that you like (what’s called confirmation bias). That means as a UX research participant, you’ve failed to give a lot of really important information without even knowing it.

A user’s experience with an overall brand also plays into their motivation to participate in a test. If a person frequently uses a product, they may have a vested interest in seeing the service improve and/or vouching for specific changes or improvements. If they like the product or have a good relationship with someone who works there, they may participate because they want to help out. On the other hand, if they’ve had negative experiences, they may look at a research session as a chance to vent or find an inside connection to get things changed.

Special note: If you work on enterprise tools and/or your users are internal, you’re likely to experience exaggerated effects of both the halo effect and confirmation bias, as well as battling politics and ulterior motives. You can’t avoid this, but it’s good to have a heads up.

Panel members

Participants who actively sign up for a research panel know they’ll be compensated for their time when they participate, and are more likely to view responding as a job.

The downside of panels is you don’t know as much about them – including if they’re just in it for the money. Image source

Many panels allow researchers to “rate” participants, so respondents know that if they give poor quality feedback, they could lose opportunities. The upside of this is that they are the most likely group to show up to sessions as scheduled and respond appropriately and consistently in longitudinal studies. Several studies have shown that monetary incentives increase participation rates.

The downside is that they may view their participation as only a job. They may not be invested in your product or may want to fudge their way into being seen as a representative user.

We’ve all heard of the professional user research participant, who will “frequently supplement their income by participating in user research… and say and do whatever it takes to get into a study.” Writing effective screeners can help prevent some of those participants from partaking, but even the most qualified panel respondent is more likely to be motivated by money over altruism or intrinsic interest in the product.

So how can you make the most of your user research?

Now that we’ve looked at some of the issues, let’s take a look at the steps you can take to get the best possible engagement and data from research sessions. We have tools at our disposal, regardless of the source of our users.

Offer compensation (in a way that participants want to receive it)

Remember that participating in a study is essentially a social exchange. People need to feel they at least come out even. Money, of course, is one of the easiest benefits to provide.

Studies show that monetary incentives, including receiving a fixed amount of cash, being entered into a lottery for a prize, and charitable donations on a participant’s behalf, can make respondents more likely to participate in research. Besides the obvious benefit of getting paid, compensating participants shows you value their time and input.

Furthermore, giving participants an incentive of any kind can help trigger the social construct known as the reciprocity principle: if you give something (anything) to someone, they will feel compelled to do something in return. This can be especially powerful for longitudinal studies. Anecdotally, I’ve found I get the best response rates when I give about a third of the incentive after the successful setup of a longitudinal study and the rest upon completion.

Get creative with cash incentives – try a lottery or donation to a charity.

When choosing compensation, be aware that different types of monetary incentives will be most effective for different types of studies and different types of people. People who have strong inclinations toward self-direction, learning new things, or risk-taking respond better to lottery-type incentives than fixed amounts. People who value close social relations and betterment of the group over oneself prefer money given to a charity in their honour.

So think about the type of characteristics your target persona has and consider whether you can shift (or at least experiment with shifting!) the type of incentive you offer. Think carefully about offering a discount to your service as motivation. This can sway people too far and they might feel uncomfortable saying anything negative.

Also be mindful of the amount of incentive you provide. You want to provide an amount that demonstrates you appropriately value their time without breaking the budget. For instance, I’ve paid doctors much more to participate in a study than general e-commerce shoppers and typically pay participants of in-person or ethnographic studies much more than respondents to remote sessions.

Help participants see the importance of their feedback

To tip the social exchange cost/benefit ratio even more, give people context about why their help is useful and what you’ll do with the information. People like to know the feedback they give isn’t just going into a corporate vacuum, never to be seen again.

You can do this simply by introducing the topic at the beginning of a session – something as simple as, “we’re talking about x today because we’ve noticed some issues and would like to make improvements.” Though be careful, because there are times that it makes sense not to give too much away at the beginning of a session.

I’ve also found that people love hearing about changes we’ve made based on their feedback, especially with long term customers or internal users. It’s not always possible to share, but if you can, highlight specific study periods and lessons learned in release notes or even press releases. Participants appreciate it, and are more likely to take part again, or encourage others to do the same.

Create expectations through group labels

This last one is a bit tricky, but several studies show that people are more likely to adopt behaviours based on external labels if they are relatively positive. One study showed that when researchers labelled a random group of people as politically active, they were 15% more likely to vote, and several studies have shown that people tend to like to belong to groups or follow social norms.

My educated guess is that labelling people sets an expectation they’ll behave a certain way. If they don’t follow through, they start to experience the same kind of cognitive dissonance as when you find an issue with a product you love. You can subtly shift language to let people know you expect them to follow through – for example, tell them they’re in the group most likely to respond.

Switch it up when you can

When you know how people can be swayed based on the way you recruit, you can take steps to minimise bias in your results. As you can see, different sources of users and incentives vary the amount and quality of participation. When possible, try to use different types of recruiting methods and experiment with compensation to maximise your results.

What are some of the ways you reduce bias from people taking part in UX research? Let us know in the comments!

The post Why people participate in UX research (and why the reasons matter) appeared first on UX Mastery.

]]>
https://uxmastery.com/why-people-participate-in-ux-research/feed/ 1 48647
How to write participant screeners for better UX research results https://uxmastery.com/how-to-write-screeners-for-better-ux-research-results/ https://uxmastery.com/how-to-write-screeners-for-better-ux-research-results/#comments Wed, 14 Sep 2016 09:44:42 +0000 http://uxmastery.com/?p=44834 One of the best ways to guarantee quality results from your user experience research is to recruit the right kind of people for your studies. But finding the right participants? That can be a frustrating logistical challenge. Participant screeners are a vital step in UX research design so you can filter through potential recruits and find your target users.

Amanda Stockwell shares her best tips to write screeners so you only recruit users who will provide valuable insights for your product.

The post How to write participant screeners for better UX research results appeared first on UX Mastery.

]]>
One of the best ways to guarantee quality results from your user experience research is to recruit the right kind of people for your studies.

While most websites and software products should be easy enough for anyone to use, the best feedback comes from actual or representative users. Finding the right participants, however, can be a frustrating logistical challenge.

Creating effective participant screeners really helps smooth out the process.

What’s a screener and why do I need one?

A screener is just what it sounds like – a list of questions intended to identify your target users and weed out those who aren’t suitable for your study.

The first step in creating an effective screener is to take a step back and identify the types of people who will be using your product and who you want feedback from. If you already have personas, great! Pick your target persona and start there. If you don’t, here’s my invitation to go ahead and create them.

Depending on resources and the type of work you’re doing, you may choose to either:

  • Use a screener in real time and talk directly to people, or
  • Set up a remote survey tool with survey questions.

Either way, here are my best practice tips on how to write participant screeners and find the right people for your research.

Question focus

Even if you don’t have formal personas, take the time to identify the top few behaviours, contexts, motivations and attitudes of the type of people you are designing for and want feedback from. Focus your screener questions on those elements.

Use personas or identify the top behaviours, contexts, motivations and attitudes.

Let’s say, for instance, that you’re creating a travel-booking application. The way a parent books a family vacation is different from the way an assistant books last-minute business trips.

You may be interested in hearing from both types of users, but make sure you have enough of each type of user and know that context before you move forward. You may even want to ask them different questions or observe them in different contexts. Key questions here relate to who they book travel for and for what purpose.

Notice that I didn’t mention demographics. A person’s age range and gender won’t tell you how they use your travel app. In fact, the same person could act as different persona types in different contexts.

Take a 35-year-old mother. She could spend months researching perfect vacation hotels and screening for deals at home, but book the first direct flight she sees for a last-minute work meeting. Her priorities and behaviour could vary greatly depending on her context.

You may also consider whether someone is an existing or potential user, their technical skills, or if they have experience with a competitor. This behaviour may not be directly related to persona characteristics, but provides valuable additional insight for your research analysis.

Finally, add questions that eliminate any conflicts of interest, such as accidentally testing an employee of a specific company or ‘professional usability testers’ who attempt to join as many studies as possible for compensation.

Question order

Ask elimination questions at the very beginning to avoid wasting people’s time.

If you know that you’re targeting last-minute business travellers, put the question that identifies that user type at the very beginning. Then, order the questions from highest to lowest priority. Don’t worry about whether they ‘flow together’, unless there is a specific follow-up question to a previous answer. That way, you can be efficient and ask as few questions as possible to eliminate inappropriate candidates.

Keep in mind that the most effective screeners are the shortest. There is no ideal number of questions, as the granularity of necessary participants depends on the study goals. That being said, don’t use screeners to gather general information that you could easily ascertain during the research process. You’ll want to ask again to verify anyway, and longer screeners make it more likely that a potential participant will drop out.
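To make the ordering concrete, here’s a minimal sketch in Python (the travel-booking questions and qualifying rules are invented for illustration) of a screener that asks its highest-priority elimination questions first and stops as soon as a candidate is disqualified:

```python
# Hypothetical screener: (question, qualifies) pairs in priority order, where
# `qualifies` returns True if the answer keeps the candidate in the study.
SCREENER = [
    ("Who do you usually book travel for?",
     lambda a: a in {"myself for business", "my boss or colleagues"}),
    ("How many trips have you booked in the last 3 months?",
     lambda a: a != "0"),
    ("Do you work for a travel-booking company?",
     lambda a: a == "no"),  # conflict-of-interest check
]

def screen(answers):
    """Ask questions in priority order; stop at the first disqualifying answer."""
    for (question, qualifies), answer in zip(SCREENER, answers):
        if not qualifies(answer):
            return False  # disqualified early -- no time wasted on later questions
    return True

# A leisure-only traveller is screened out after a single question.
print(screen(["my family on vacation"]))              # False
print(screen(["myself for business", "4", "no"]))     # True
```

Because the elimination questions come first, most unsuitable candidates exit after one or two questions rather than sitting through the whole list.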

Ask precise, easily-answered questions

The remaining best practices here are actually true for any survey questions, but I’m mentioning them because they’re especially important in screening participants.

Ask precise questions with clear answers. For instance, if you want to know how often someone uses an app, use numbers of hours instead of vague terms like, ‘sometimes’ or ‘frequently’.

Make the answers distinct so there’s no confusion. If you’re asking about age ranges, you could say that you are looking for 19 and under, 20-29, 30-39, and so on, rather than 0-20, 20-30, 30-40 etc. It seems simple, but the overlap can be very confusing and potentially lead you to incorrectly grouping people.

The non-confusing way to ask someone’s age – make sure there’s no overlap.

It’s also important to include ‘other’, ‘none of the above’, ‘I don’t know’ or ‘not applicable’ options. This way, candidates don’t feel compelled to pick the closest thing or something random and end up in your study when they should be disqualified or screened out.  

But don’t make it too easy

You never want to be leading or obvious about the kind of people you want to participate in your study.

By virtue of the fact that people are taking your screener, you know they want to take part in your study. And it’s human nature to try to conform to what you know someone is looking for. So don’t be too obvious about what you’re looking for in a participant.

Here’s an example. If you want to talk to someone who has purchased a phone in the last month, don’t ask them that! Instead, ask which of a long list of items they’ve purchased in the last month, with ‘phone’ as just one of many multiple choice options. That way, they won’t know which option you’re looking for.

You can also ask multiple-elimination and acceptance questions to obscure your desired answers. For instance, if you want to talk to people who spend more than 20 hours per week on something, the instinct is usually to create ranges with ’21 or more’ as the only answer that moves forward.

However, if you break up the options to something like: 0-5, 6-10, 11-15, 16-20, 21-25, 26-30, 31-35 and 36 or more, you have 4 answer options that get eliminated and 4 that move forward.

Multiple eliminations and acceptances make it harder to tell which is the ‘desired’ answer. I also avoid yes/no questions as much as possible, because it’s easier to obscure your true purpose with open-ended or multiple choice questions.
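As a sketch of the ranges described above (the 20-hour cutoff is the example’s assumption), half of the evenly sized options qualify and half don’t, so respondents can’t guess which answer the researcher wants:

```python
# Evenly sized hour-ranges shown to every candidate. Four options screen out
# and four move forward, obscuring the "desired" answer (more than 20 hours/week).
OPTIONS = ["0-5", "6-10", "11-15", "16-20", "21-25", "26-30", "31-35", "36 or more"]
QUALIFYING = {"21-25", "26-30", "31-35", "36 or more"}

def qualifies(answer):
    return answer in QUALIFYING

print(qualifies("16-20"))   # False -- screened out
print(qualifies("26-30"))   # True  -- invited to the study
```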

If possible, I like to add at least one open-ended question to a screener to see how a participant responds. Shy, quiet types are often the most uncomfortable in research settings and the hardest to glean insights from, so I like to look for someone who is willing to express their opinions, whether written or verbal.

Remember: this is your first impression

Finally, remember that a screener is often the first interaction a user has with the research process at your company or client, and you want to ensure it’s a positive one. 

Provide a warm, open introduction setting the context of the screener and the study. You don’t want people to assume that they’ll get to participate in a study just for being screened, but you do want them to feel comfortable answering questions and motivated to move through.

Provide context and clear expectations about next steps if they pass the screening: how scheduling works, how long sessions take and where they take place, what the compensation is, and so on. You’ll never be able to answer every question upfront or please everyone, but you can ensure that the highest number of candidates actually show up to your sessions if you give them information upfront.

There are many moving parts to consider when putting together a UX study to ensure it’s successful. Writing a screener is one of the first, vital steps. It takes practice to make clear, efficient screeners, but it’s necessary so you can be sure your target users are represented in your studies.

The post How to write participant screeners for better UX research results appeared first on UX Mastery.

]]>
https://uxmastery.com/how-to-write-screeners-for-better-ux-research-results/feed/ 2 44834
How to recruit users for UX research in an agile sprint https://uxmastery.com/recruits-users-ux-research-agile-sprint/ https://uxmastery.com/recruits-users-ux-research-agile-sprint/#respond Thu, 14 Jul 2016 00:30:14 +0000 http://uxmastery.com/?p=42860 Finding users for testing in a short sprint can be a daunting task. But just because you don’t have much time, doesn’t mean you have to skip the research. Amanda Stockwell explains how you can quickly find recruits.

The post How to recruit users for UX research in an agile sprint appeared first on UX Mastery.

]]>
One of the biggest challenges for companies incorporating UX practices into an Agile development process is the logistics of research. Because Agile works in short “sprints” spanning just a few weeks, finding and scheduling participants for research in such a short time can be a challenge. (If you’re not sure what Agile is all about, make sure you read my previous post on what Agile means for UX.)

For those of us who come from a traditional usability background, the idea of finding participants, scheduling sessions, performing research, and analyzing data in a single sprint can be daunting to say the least.

In truth, there are many methods for recruiting participants that can fit nearly any timeframe or budget.

Know who you need to speak to

First things first, remember that you need to begin the participant-finding process by defining what kind of users you need. Think about your target users and the 5 or so most important identifying elements, such as software usage, technical skill set, job responsibilities, and so on.

personas
Think about the 5 or so most important identifying elements of your target users.

Even if you don’t have formal personas (although I recommend them!) you must define a general set of criteria so that you’re most accurately validating (or invalidating!) your hypotheses. Who cares if tech-savvy dog lovers can’t use your product if your target is traditional cat fans? You also need to create screening questions that successfully eliminate anyone not in your target audience.  

Consider the type of research you’re going to do. If you need to do the research in person, you’ll have to focus heavily on geography and allot slightly more time for scheduling. However, many UX research methods now have remote options, and you can conduct research with users anywhere in the world.

Reach out to existing users

The next thing you need to consider in finding participants is what stage your product is in, and whether or not you have any existing users. If you’re working on an existing product and you already have users, invest a small amount of upfront time to create a go-to panel of research participants. Building a panel means you’ll always have a list of people who have expressed interest in being part of research. In my experience, I’ve been able to fill research sessions, get survey results, or whatever else I needed within a few hours of sending a panel invite.

Your panel can be as simple as a spreadsheet with a list of names and contact information, and you can add people in various ways: a signup form on your website or in an email, referrals from your sales and support teams, or promotion via social media channels. Consider asking potential panellists to answer a few targeted screening questions as they sign up so that you can quickly search for the type of user you need. Just be sure not to spam people with requests, and give them a way to opt out if they’re not interested.
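A go-to panel really can be that simple. As a sketch (the field names and screening attributes here are illustrative, not a prescribed schema), storing a couple of screening answers with each contact lets you pull a matching invite list in seconds:

```python
# A minimal research panel: one row per person, with answers to a few
# targeted screening questions captured at signup.
panel = [
    {"name": "Priya", "email": "priya@example.com", "opted_out": False,
     "books_business_travel": True,  "trips_per_month": 3},
    {"name": "Sam",   "email": "sam@example.com",   "opted_out": True,
     "books_business_travel": True,  "trips_per_month": 5},
    {"name": "Lee",   "email": "lee@example.com",   "opted_out": False,
     "books_business_travel": False, "trips_per_month": 0},
]

def invitees(panel, min_trips=1):
    """People we may contact for a business-travel study (never the opted-out)."""
    return [p["email"] for p in panel
            if not p["opted_out"]
            and p["books_business_travel"]
            and p["trips_per_month"] >= min_trips]

print(invitees(panel))   # ['priya@example.com']
```

The same filter works whether the “panel” lives in a spreadsheet export, a form tool, or a database; the point is that the opt-out flag and screening answers travel with each contact.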

The downside? These participants may be biased based on their current experience with your brand, whether they’re more engaged and positive than normal or have had a very bad experience, so keep that in mind.

You can also intercept real customers who are actively using or have just used your product. If you happen to work on a website or app, you can use tools like Ethn.io to capture live users and invite them to an immediate study. If they say yes, you could also invite them to be a part of your ongoing panel.

No users yet? Get creative

If you’re building something new that has no user base, recruiting participants can be a bit harder, but there are some tried and true methods. The first is simple: go where your target users are and get bold. Looking to talk to students? Hang out in a coffee shop at a local campus with a sign offering to buy coffee in exchange for some time.

Are your target users at your local cafe?

The next method draws on many different resources and can be helpful in any recruiting effort: use the power of social media, whether that’s tapping into a Facebook group, searching secondary contacts on LinkedIn, or targeting a hashtag on Twitter.

Post a small intro to your study and a link to a screener survey and watch the respondents trickle in. After that, you can manually monitor the participants to invite them or automatically send those who qualify to an appointment scheduling app such as pow wow.

Finally, some tools already have recruits lined up. As long as you provide screening criteria, you can get access to the sorts of participants you need. For instance, services like YouEye, dscout and usertesting.com have huge lists and you can typically get results from tests in less than a day.

Don’t skip your research!

Regardless of what kind of research you’re doing, who your participants are, and what stage your product is in, there are ways to recruit participants quickly so that you can incorporate UX research into your Agile process. Don’t let speed be an excuse to skip research or skimp on finding the right kind of participants!

If you want to know more about how to recruit participants, I recommend “Validating Product Ideas Through Lean User Research” by Tomer Sharon. Tomer provides a great overview of more techniques to find participants and some detailed descriptions of using social media to recruit.

Catch up on some of our recent posts on Agile

The post How to recruit users for UX research in an agile sprint appeared first on UX Mastery.

]]>
https://uxmastery.com/recruits-users-ux-research-agile-sprint/feed/ 0 42860
How to adapt UX research for an Agile environment https://uxmastery.com/how-to-adapt-ux-research-for-an-agile-environment/ https://uxmastery.com/how-to-adapt-ux-research-for-an-agile-environment/#respond Tue, 21 Jun 2016 05:10:25 +0000 http://uxmastery.com/?p=42587 Working as part of an Agile team means you don’t always get the time you’d like to carry out your research. But Amanda Stockwell shows how, with a little flexibility, high-quality UX research in an Agile environment is possible. This is the first in a series of posts that will discuss the impact of Agile software development on UX practices.

The post How to adapt UX research for an Agile environment appeared first on UX Mastery.

]]>
If you’re thinking that Agile development has almost completely taken over software development, you’d be correct. In fact, according to one 2015 survey, only 2% of companies still operate using purely traditional Waterfall practices. In short, Agile is everywhere.

While there are many benefits to Agile, it’s meant that those of us in the user experience field have had to examine and adapt our practices to stay in tune. While we’ll no doubt continue to face challenges working in this new world, a little creativity and flexibility can help keep your UX research on track.

What is Agile, anyway?

Before we get too much further, let’s first be clear about what we mean when we talk about “Agile.” Agile is an approach to developing software. Agile practices vary from company to company and even team to team, but there is a shared set of values and principles that prioritises rapid, continuous release of live code, collaboration across cross-functional teams and with users, and a commitment to responding to change.

Agile teams are cross-functional, and aim to create working code in very short cycles called sprints. There are no distinct stages for discovery research, requirements definition, design, development, or testing, as there are in more traditional development methods, often called Waterfall.

For those of us in user experience, this means we no longer have dedicated time to thoroughly explore users or perform extensive discovery research, and we’ll be figuring out the specifics of the design just in time for development to begin. That’s a little scary for some of us, but it also means we get to respond to the rapidly changing needs of the team and users, constantly uncovering and acting on opportunities to improve the experience.

What about Lean?

Lean is a set of business principles derived from the Lean manufacturing system. The principles centre on increasing efficiency and value, removing waste, and designing the systems and teams in place to maximise those efforts. While the principles are similar to those of Agile, Lean focuses on the holistic business and can be used in any industry, whereas Agile is specifically focused on software development.

Matt’s sketchnote of Jeff Gothelf’s UX Australia 2013 talk ‘Better Product Definition Through Design Thinking and Lean UX’

One more piece of jargon, just for fun

To make things even more confusing, there is also a specific new product development approach called the Lean Startup methodology. The Lean Startup centres on iterative experimentation cycles, using the lessons from each cycle to inform product development decisions.

Lean, Lean Startup and Agile are not mutually exclusive. Often, companies will embrace the ideas of including constant validated learning from the Lean Startup, approaches to reduce waste from Lean, and use the Agile development approach to organise their teams and work structure.

While there are certainly well-documented challenges in incorporating UX successfully into Agile practices, there are also tremendous benefits. Read on to hear about some of the specific considerations of adapting UX practices to Agile.

Determining research methods for Agile

There are many existing resources on the best ways to determine which UX research method you should employ to best answer your open questions. I especially like Christian Rohrer’s summary (outlined in the table below), which lays out a 3-dimensional decision-making framework and provides notes about the position in the product development cycle.

Product development phases:

Strategize
  • Goal: Inspire, explore and choose new directions and opportunities
  • Approach: Qualitative and quantitative
  • Typical methods: Field studies, diary studies, surveys, data mining, or analytics

Execute
  • Goal: Inform and optimize designs in order to reduce risk and improve usability
  • Approach: Mainly qualitative (formative)
  • Typical methods: Card sorting, field studies, participatory design, paper prototype and usability studies, desirability studies, customer emails

Assess
  • Goal: Measure product performance against itself or its competition
  • Approach: Mainly quantitative (summative)
  • Typical methods: Usability benchmarking, online assessments, surveys, A/B testing
Christian Rohrer’s summary of a 3-dimensional decision-making framework.

While his suggestions are spot on, when working in an Agile setting, there is no longer dedicated time to focus on research, and you can’t always spend time on the lengthier methods. Instead, you have to work research into the relatively short sprint cycles – sometimes as short as 2 weeks.

The considerations for choosing research methods in an Agile environment remain the same. You still need to:

  • Narrow down a specific question to answer and hypothesise about
  • Determine whether you’re looking for trends or reasons
  • Consider the most appropriate context for your research, and
  • Think through whether you need to look at behaviours or attitudes.

However, due to the limited timeframe in Agile, you often need to make a few tweaks to successfully integrate your research. Tactics like breaking down research questions into the smallest possible hypotheses and being willing to flex the rules of traditional research methods are a huge help in keeping up high quality UX in an Agile environment.

Smaller hypotheses

For instance, let’s say that you’re working on a new version of an editor for an existing online blogging platform, and you want to make sure that it’s easier to use than the original version.

In traditional waterfall development, you’d fully flesh out a new and improved version of the editor and its many features, then set forth to test the hypothesis that your new design will perform better than the last. You’d probably create a high-fidelity prototype and do a series of competitive usability tests comparing the two experiences, making sure to include several rounds with each target persona. The whole thing could take months.

In an Agile environment, you’d approach things quite differently. Let’s start with the hypothesis that the new design will “perform better”: you could easily break that into several smaller hypotheses, each centred around specific features or user groups.

For instance, you might write one hypothesis that goes something like, “If we implement drag-and-drop formatting, our non-technical users will find it 30% easier to lay out their blogs”, and a follow-up that goes something like, “If we provide image editing, users will value our service more and include at least 10% more imagery in their blogs.” You’d then prototype and test the first element, drag and drop, before you start working on the image editing.

Franken-methods and flexibility

In addition to investigating smaller components, you may need to get creative about the methods you use to ensure that you make the most of your research time and maximise results.

For instance, maybe you have a few hypotheses about different elements of the product that are in different phases. You want to test the usability of an already-prototyped interaction, and you also need to understand why an existing feature isn’t being used much. You’d typically run a usability test to answer the first question and interviews to answer the second, but instead you could add a few interview questions at the end of the usability test. Because the sessions are so narrowly focused on one specific interaction, you should have plenty of time to mesh the two.

I’ve also found that as timelines get shorter and shorter, we’ve been using more remote and unmoderated methods. There are well-documented downsides to not being face-to-face with a user, but if your choice is between skipping interviews and doing them over video conferencing, definitely choose the latter.

There are many other ways to flex methods, which takes some getting used to for those of us who come from rigorous research backgrounds. For instance, you might test a specific interaction’s usability with colleagues who aren’t familiar with the project instead of recruiting outside users, or stop worrying about whether your survey responses will be statistically significant. Instead of writing up a thorough findings report, you might hold a short debrief meeting where everyone shares their key takeaways, and document just those.

One trick that I find endlessly useful is to proactively schedule regular research sessions with users, especially if you can build up a panel of willing participants. The logistics of scheduling participants can be time-consuming, so setting up an ongoing cadence streamlines the process and removes the excuse that you won’t have time to find and schedule participants. With so many things in a state of perpetual flux, I guarantee you’ll always have something to investigate.

Remember, the core tenets of successful UX research remain the same regardless of the other considerations in your specific environment. Being creative and flexible about your methods doesn’t let you off the quality hook, but it does allow you to gain valuable insights in a way that makes the most sense for the whole team.

The post How to adapt UX research for an Agile environment appeared first on UX Mastery.

]]>