Kris Kringle: How Well do you Know your Colleagues?

Chris takes an entertaining look at just how badly wrong the office Kris Kringle can go, and uses it as a reminder that we can't deliver truly great experiences if we don't know the customer.

It’s that time of year again. The Christmas decorations are in the shops, the diaries are filling up with functions and you’re being cajoled into the office Kris Kringle.

Take a poll around the office and you won't be surprised to hear that this well-intentioned Christmas tradition is fraught with angst. The very thought of giving and receiving a cheap gift from someone you don't know particularly well is enough to turn even the most festive individual into a grumpy old Scrooge.

We asked around for stories of the worst or weirdest Kris Kringle gifts ever. There's the usual coffee mug, soap, or something totally inappropriate like the X-rated stubby holder.

Then there's the downright weird:

  • “An electric blue plastic bird on a clip”
  • “A garden gnome and I don’t even have a garden”
  • “A t-shirt with Santa riding on a Harley with the motto ‘you better watch out’ underneath”
  • “A 1993 calendar […and it wasn’t 1993]”
  • “A light for a bike and I don’t have a bike”
  • “A can of spaghetti”

You can’t blame the gift giver – how on earth can they give a great KK pressie when they hardly know the person that they are buying for? Think about when you’re on the giving end – that sense of complete confusion about what to buy for that colleague you barely have anything to do with.

  • Wine? (Turns out they have been sober for 8 months…)
  • A car magazine? (Apparently cars aren’t their thing…)
  • A new-fangled egg beater? (They are allergic to eggs…)

Even the things you think they might like could be totally off the mark.   

The best presents by far come from people who know you well and can pick the perfect thing for you – something you like, need or want.

As we head into the festive season, the KK disaster is a timely reminder that when making products for our customers, we must know that customer well, so that we can make a product that solves genuine problems – a product that is useful and adds value to someone's life. Contextual enquiries, surveys, workshops and usability tests are all great methods for getting to know your customers.

So, if you are a part of an office Kris Kringle this year, use it as an opportunity to conduct a little research on your colleagues to find a gift that is right for them.

But keep your expectations low and be prepared to receive an Xmas sweater with inappropriate reindeer on it.

[Image: bad Christmas sweaters]

How to Run an Unmoderated Remote Usability Test (URUT)

We published this article, in which Chris Gray explains how to run an unmoderated remote usability test (URUT), a while back, but we're so excited to be including a new animated video that we've decided to republish it.

Enjoy!

As UXers, we practise in exciting times.

Design is in demand, and the tech sector is at the forefront of business innovation. It is also a time when we have access to a huge number of tools and techniques that enable us to innovate and adapt our practice to a broad range of scenarios.

Usability testing is a cornerstone of UX practice: perfect for evaluating the designs we create, flexible enough to collect a range of information about customers, and easy to combine with other techniques. Usability testing is a technique in which representative participants undertake tasks on an interface or product. The tasks typically reflect the most common and important activities, and participants' behaviour is observed to identify any issues that inhibit task completion.

Usability testing is a super flexible technique that allows for the assessment of many aspects of an interface, including the broad product concept, interaction design, visual design, content, labels, calls-to-action, search and information architecture. It is a proven technique for evaluating products, and in some organisations is a pre-launch requirement. In-person usability testing does, however, have some downsides:

  • It's relatively time-consuming; a lab-based study is typically completed with between 5 and 12 participants. Assuming each session takes an hour, a single facilitator would need between one and three days to run the sessions.
  • Recruiting participants to attend the sessions takes time and effort; via a recruitment agency it takes at least a week to locate people for a round of testing.
  • Due to the time-intensive nature and cost of in-person usability testing, most studies are conducted with relatively small samples (i.e. fewer than 10). While a small sample is often adequate for exploring usability and iterating a product, some stakeholders have less confidence in these small sample sizes, often due to exposure to quantitative market research, where samples in excess of 500 people are common.
  • It takes place in an artificial environment. In-person tests are often run in a lab or a corporate setting that may not reflect real-world use of the product.

One way to overcome these downsides is unmoderated remote usability testing (URUT).

Let’s take a look at some of the basics of running URUTs.

What is URUT?

URUT is a technique that evaluates the usability of an interface or product; that is, the ease of use, efficiency and satisfaction customers have with the interface. It is similar to in-person usability testing; however, participants complete tasks in their own environment without a facilitator present. The tasks are pre-determined and are presented to the participant via an online testing platform.

There are two broad methods for URUT, distinguished by how the technology platform captures participant behaviour:

  • URUT using video recordings of participants interacting with interfaces. These studies are more qualitative in nature, with participants thinking aloud during the recording to provide insight.
  • URUT where behaviour is captured via click-stream data and the study is run more like a survey. These studies are more quantitative in nature because larger sample sizes are practical and the system automates the tracking of user behaviour.

Both methods are designed to evaluate the usability of a product, and both have strengths and weaknesses. Video-based sessions require more time to identify findings and lend themselves to smaller samples; however, by listening to participants and observing their behaviour, more information can be collected about the design. Click-stream methods allow for larger sample sizes and tend to be faster to complete due to the automation of data collection.

Note that some tools support both methods: click-stream for the full sample, with video collected for a subset of the sample so that specific aspects of the design can be explored in more detail. More on the tools below.

When to use URUT?

Common scenarios where URUT is valuable include:

  • Obtaining a large sample and/or a high degree of confidence is required: A small sample of in-person usability tests may be all that is required from a design perspective, but if your stakeholders are used to seeing large samples and buy-in with a small sample is difficult, using big numbers may be simpler than trying to convince them of the value of the small sample. Further, where a new design is critical for an organisation or will have a substantial impact, the confidence gained from a large-sample study can be valuable.
  • Where the audience is geographically dispersed or hard to access: The audience for some products is geographically spread and hard to reach without travelling great distances; imagine a health case-management system for remote communities in the Kimberley. Also consider time-poor senior executives: they may be able to complete a 15-minute online study late at night, at a time convenient to them, but not during the day or in a specific location.
  • Where speed is critical: Everyone in the digital industry will have worked on a project with tight timelines, or one running behind schedule. In today's Agile workplaces, getting usability testing done quickly may be the only option. A URUT study can be run in its entirety in a couple of days, whereas a typical in-person study takes a week or more.
  • Where a specific environment is critical: Some products are used in environments that cannot be replicated in a lab, or where the context of use is critical. For example, an app used outdoors in snowbound towns.
  • Where budgets are tight: Running six sessions with a video-recording technique, especially where the sample is fairly generic, can be inexpensive.
  • In cases where you need to compare two or more products or interfaces: URUT is perfect for benchmarking studies comparing either competitor products or different iterations of your own product. The ability to capture large sample sizes means that statistically significant differences between interfaces can be identified (see the sketch below).
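If you do want to claim a statistically significant difference between two designs, the arithmetic is straightforward. Below is a minimal Python sketch of a two-proportion z-test comparing task completion rates; the completion counts are invented for illustration, and any stats package will do the same job.

```python
from math import erf, sqrt

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Test whether two task completion rates differ (two-tailed)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled rate under the null hypothesis that both designs perform equally
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# e.g. 162/200 completions on the new design vs 138/200 on the old
z, p = two_proportion_z_test(162, 200, 138, 200)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real difference
```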

URUT tends to be less appropriate for more exploratory usability testing because it is not possible to change tasks midstream or ask impromptu questions. Click-stream tools provide lots of data on what is happening but less insight into why the behaviour is occurring. Video-based studies can be frustrating when there is a core question you would love to ask but hadn't planned for. For early-stage, low-fidelity prototypes, in-person usability testing tends to be preferable because the facilitator can give participants more context about the intended functionality of the interface.

How to run a URUT

Before you start testing, you need to fully understand why the research is being conducted. As with all UX research techniques, this comes back to defining the objectives of the study. All good research requires a clear understanding of:

  1. The objectives of the project.
  2. The research questions, which spell out how we will explore those objectives. For example:

Research objective: Evaluate the effectiveness of the booking process.
Research questions:
  • Do participants understand the field labels?
  • Do error messages support participants to progress?

Exploring these objectives and research questions with stakeholders at the outset will help with designing the study and provide a reference point for subsequent discussions. Spending the time up front to get this right will save time down the track and help ensure a successful study.

Audience

To run a URUT, it is important to identify who will complete the study. Ideally, the sample will be representative of the product's audience. There are a number of options for sourcing participants:

  1. Emailing the study to a database of existing customers. This assumes that you have customers.
  2. An intercept can be run on a website with existing customers. That is, a pop-up on your site invites people to participate in the study. An advantage of this approach is that the sample is likely to be representative.
  3. A panel is another option, especially when you don't have an existing customer base. A panel is a database of people who have indicated that they would like to participate in research. Panel databases can usually be segmented to target a specific audience; however, you typically pay for the convenience. Some URUT tools have integrated participant filtering, which can be used to improve the representativeness of the sample.
  4. Social media can be another means of sourcing participants, especially for organisations with an engaged following. With social media it is particularly important to check that the sample is representative of your audience.

Some form of incentive, such as a gift voucher prize, may be required to motivate participants to complete the study. Audiences that are more engaged with the organisation tend to require smaller incentives; less engaged audiences need more.

Tasks

It is crucial to get the tasks right for a URUT: it needs to be very clear to participants what is required of them. Provide enough detail for participants to complete the task on their own, and include any information they will need along the way. For example, if a task requires credit card details, you will need to provide fictitious card details.

Avoid adding extraneous information to a task, as it may confuse participants. Also avoid clues that tell the participant what to do; for example, don't include the wording of a call-to-action in the task, as this gives the task away.

Finally, ensure that the interface supports participants to actually complete the task, and to be aware that they have done so. In a prototype this may require adding specific content. An example task: "Imagine you have decided to stay in Cairns for the first week of September. Use this site to reserve accommodation and pay."
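To make the set-up concrete, here is a hypothetical sketch of how tasks might be defined for a click-stream style tool that detects success when the participant reaches a confirmation page. The field names, URLs and structure are illustrative rather than any particular platform's format; note the fictitious card details bundled with the task.

```python
# Hypothetical task definition for a click-stream style URUT platform that
# detects success when the participant reaches a given URL.
TASKS = [
    {
        "id": "book_accommodation",
        "instruction": (
            "Imagine you have decided to stay in Cairns for the first week "
            "of September. Use this site to reserve accommodation and pay."
        ),
        "success_url": "/booking/confirmation",
        # Fictitious details so participants can complete the payment step
        "card_details": {"number": "4111 1111 1111 1111", "expiry": "09/27"},
    },
]

def task_completed(visited_urls, task):
    """Click-stream success check: did the participant ever reach the
    confirmation page during the task?"""
    return task["success_url"] in visited_urls

print(task_completed(["/search", "/room/12", "/booking/confirmation"], TASKS[0]))
```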

Include questions

Survey questions should be included as part of the study:

  • Include closed questions after each individual task to measure ease of task completion. This will provide insight into which tasks are harder to complete than others. Including open-ended questions as well allows participants to describe their experience and any issues they encountered.
  • Questions can also be asked after the test as a whole, to allow an overall assessment of the experience. This could include metrics such as customer satisfaction with the product, Net Promoter Score and the System Usability Scale, which can be used to benchmark the product over time and against competitors (a scoring sketch follows this list). Again, open-ended questions should be used so participants can provide feedback and you can understand why issues are occurring.
  • Questions can also be included with the intention of profiling participants. These can be helpful to understand the audience and/or to check that the sample matches a known audience.
  • Finally, questions can be used to check whether participants have understood the content they found, which can be especially valuable on content sites. For example, if you were testing the Australian Tax Office website, the task could be to find the tax rate for a given salary, followed by a question asking what that rate is.
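As flagged above, here is a small Python sketch of the standard SUS scoring rules: ten items on a 1-5 scale, odd-numbered items positively worded, even-numbered items negatively worded, and the raw total scaled to 0-100. The example responses are made up.

```python
def sus_score(responses):
    """Score ten 1-5 Likert responses using the standard SUS formula."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,7,9 vs 2,4,6,8,10
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5  # scale the 0-40 raw total to 0-100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```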

Test assets

What are you actually testing, and how will the URUT tool and participants access the interface? Consider how you will set up the URUT tool and the prototype or interface being tested. The responsiveness of the interface can affect participants' experience of the product. Make sure participants don't need to do any set-up at their end; barriers to completing the study will reduce completion rates. Try to ensure that the interface can be accessed from any computer or device a participant may be using.

Piloting

Testing the study with either a subset of participants or in a preview mode allows issues with the prototype, technology, tasks or questions to be ironed out. Piloting protects against wasting a sample you are paying for, or using up a small, limited sample.

Tools

There are a number of different tools available, and more come onto the market all the time. Before running a study, it is recommended that you explore the different options. Tools that support video recordings of participants include:

Tools that track click-stream data include:

A tool like UserZoom collects both video and click-stream data.

Fieldwork

While the study is in the field, it is important to monitor the data and be available to help participants. Monitoring the data ensures that everything is working as planned and that you are receiving the data you need to meet your study objectives. Being available via email or phone helps manage the relationship with customers and provides help where it is required.

Analysis

Once you have collected your results, it is analysis time. Begin by looking at some overarching metrics, such as overall task completion and customer satisfaction; tools that measure click-stream, like UserZoom, calculate these automatically. This will give an overall feel for the effectiveness of the product. For video-based tools you will need to watch the sessions and note whether participants were able to complete each task.
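For completion rates it is worth reporting a confidence interval rather than a bare percentage, particularly at usability-study sample sizes. Here is a minimal Python sketch using the Wilson score interval; the counts are invented.

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a completion rate; better behaved than
    the naive normal approximation at small-to-medium sample sizes."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - margin, centre + margin

# e.g. 41 of 50 participants completed the booking task
low, high = wilson_interval(41, 50)
print(f"Completion rate 82%, 95% CI {low:.0%} to {high:.0%}")  # 69% to 90%
```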

With an overall feel for the product, look into the individual tasks and identify those causing issues. Next, you need to find out why. With video-based tools, watch the video of specific tasks and observe behaviour to identify the elements of the interface causing the issues. For click-stream services, focus on the pages visited during the task to identify where the issues occurred (i.e. which screens). Also review the open-ended feedback.

Tips for running URUT

Choose the testing platform after you have identified the objectives of the study. It is crucial to select a tool that is fit for purpose and will support your study objectives. Some platforms do not support specific technologies, such as Flash, and have limitations in the way they measure user behaviour. As an example, I recently worked on a study evaluating a single-page app. To measure user interaction we needed our developers to insert additional tracking code, because the tool tracked the URL, which did not change as users navigated between content.

Set clear expectations for participants. Obtaining useful data is dependent on participants understanding what is expected of them. Setting clear expectations up front (during recruitment and at the start of the survey) about what participants are required to do and why the study is being conducted will help ensure success.

Remember that participants won't receive any assistance during the study. It is crucial that tasks are clear and user-friendly, and that help is available. Consider how much assistance the URUT tool makes available to participants during the study.

Avoid bias. While not all bias can be avoided, it is important to remove as much as possible. Randomise the order of tasks, so that learning the interface during the study does not influence performance on later tasks (a sketch follows below). Task wording can also introduce bias; as discussed, pay attention to wording to ensure the tasks effectively test the product.
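Randomisation is easy to script, as the sketch below shows. It assumes you have a participant ID to seed with, so that each participant's order is reproducible when you analyse the results; the task names are made up.

```python
import random

def randomised_task_order(tasks, participant_id):
    """Return a per-participant task order so that learning effects
    average out across the sample."""
    order = tasks[:]  # copy, so the master list stays in a known order
    random.Random(participant_id).shuffle(order)
    return order

tasks = ["find opening hours", "book a room", "change the booking", "cancel"]
print(randomised_task_order(tasks, participant_id=42))
```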

Keep participants engaged. You want to avoid participants quitting your study; they are more likely to complete it if they feel their feedback is valuable, the tasks are interesting, and the study isn't too long.

Case study

A large corporate was about to implement a significant change to its site. Multiple rounds of in-person usability testing had been conducted and indicated that the new design would be a success. Due to the scale of the change, the organisation wanted a high degree of confidence that the new design would enhance the experience. We ran a study benchmarking task completion rates, perceived ease of use and advocacy on the live site, then repeated these measures on a prototype of the new design. By utilising larger sample sizes, we had tight confidence intervals on core metrics, which provided an accurate picture of the new design's performance compared with the old.

Wrap-up

URUT is a technique that can offer quick, inexpensive and robust usability testing. Of particular value is the ability to use the technique for benchmarking and context-sensitive studies. It is a great tool to have in your bag of research techniques and can be a great complement to in-person methods. Exploring the different tools on offer and experimenting with the technique is the best way to learn and develop expertise.

Make it clear what is expected of participants, keep your research objectives in mind, and avoid bias. Good luck!

Better User Research Through Surveys

Creating a great survey is like designing a great user experience: both become a waste of time and money if the audience is not at the centre of the process.

Chris Gray shows us in this whiteboard animation how to build the kind of survey that will collect the most valuable information from our users.

Online surveys are commonly used by marketers, product managers, strategists and others to gather feedback. You’ve probably participated in some of these surveys and I’m sure you’ve noticed that they’re often executed poorly.

Surveys are becoming an increasingly accepted tool for UX practitioners. Creating a great survey is like designing a great user experience: both are a waste of time and money if the audience, or user, is not at the centre of the process. Designing for your user leads to more useful and reliable information.

Let’s take a look at some of the basics of creating and running a useful online survey.

What is a survey?

A survey is a simple tool for gathering information. Surveys typically consist of a set of questions used to assess a participant's preferences, attitudes, characteristics and opinions on a given topic. As a research method, surveys allow us to count or quantify concepts: a sample, or subset, of the broader audience is surveyed, and the learnings can be applied to the broader population.

For example, we might have 100,000 unique users of a website in a given year. If we collect information from 2,000 of those users, we could confidently apply the information to the full 100,000.
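That claim is easy to sanity-check. Here is a small Python sketch of the worst-case margin of error for a proportion, with a finite population correction; the figures are the 2,000-of-100,000 example above.

```python
from math import sqrt

def margin_of_error(sample_size, population_size, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a proportion, with a finite
    population correction for sampling from a known-size audience."""
    se = sqrt(p * (1 - p) / sample_size)
    fpc = sqrt((population_size - sample_size) / (population_size - 1))
    return z * se * fpc

# 2,000 respondents drawn from 100,000 unique users
print(f"±{margin_of_error(2000, 100_000):.1%}")  # about ±2.2%
```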

When it comes to the digital space, we can use surveys for a variety of purposes including:

  • Gathering feedback on a live product or during a pilot;
  • Exploring the reasons people visit a website and assessing their experience of that visit (such as a True Intent survey);
  • Quantifying results from qualitative research activities such as contextual enquiry or interviews; and
  • Evaluating usability, such as the System Usability Scale.

Surveys can be an effective method of identifying:

  • Who your users are;
  • What your users want;
  • What they purchase;
  • Where they shop;
  • What they own; and
  • What they think of your brand or product.

Benefits

Surveys can benefit and inform the design process by:

  • Providing information to better understand end users and design better products;
  • Mitigating the risk of designing the wrong, or a poor, solution for users; and
  • Providing stakeholders with confidence that a design is, or will be, effective. Larger sample sizes, in comparison to qualitative research, often speak the language of business stakeholders. Whether we like it or not, when it comes to research there is often a perception that more is more.

Before starting

As with any UX research activity, an effective survey must start with a clear understanding of the needs of, and information required from, the project. To create an effective survey, both the business context and the project objectives must be clearly understood. The business context of the interface or product includes insight into why it exists and how it supports the business objectives. The project objectives cover why the survey is being conducted: for example, is the survey being run to understand the end user, inform the direction of a design, or assess a live website? The project objectives may inform the type of survey, the collection method and the robustness of the evidence required, which in turn can influence the ideal approach.

Furthermore, a set of research questions should be defined around the information that needs to be collected. These research questions can then be used as a framework for ensuring the required information is collected effectively. Defining the information up front is also a mechanism for keeping irrelevant questions from creeping into the activity.

The information required also frames the scope of the research. As a starting point for any project, the information to be collected needs to be agreed by all parties. Without this agreement the research becomes an exercise in guesswork; it is likely to miss the mark for stakeholders and be frustrating for all.

Creating an effective survey

Effective questions and good survey design are important for generating quality data and maximising completion rates. Poor questions result in poor feedback that cannot be relied upon. Dropout is the enemy of a robust sample: getting someone to agree to participate in an online survey is a win in the first place; it is unforgivable if they drop out because they are bored or frustrated.

The following is a guide to creating an effective and engaging survey:

  • Logical flow of questions. To make the questions easier and faster to answer, they should be grouped with like questions and ordered in a logical manner. Imagine answering questions about your attitude to boat refugees, then being asked about your experience of your favourite fast food chain. The transition can be jarring. Obviously there is a need to change topics; however, minimising any unnecessary shifts, particularly at inappropriate times, will result in a more effective survey.
  • Questions need to be easy to understand. Many surveys are completed without anyone on hand to clarify confusion, so it is important that questions can be readily understood without additional information. Of major concern is that ambiguous or difficult-to-understand questions can be answered incorrectly, which brings the data into question.
  • Provide questions appropriate for the audience. People can and will answer just about any question put in front of them. This doesn't mean that they are qualified to answer, or able to provide insightful feedback. A good way to check is to ask yourself, "Will my audience know the answer to that question?"
  • Avoid double negatives. Double negatives, particularly in combination with the available responses, can make answering questions difficult. Imagine the question:
[Image: example survey question containing a double negative, asking whether your manager is non-responsive]

For a participant, the response “No, my manager is not non-responsive” could be a challenging idea.

  • Avoid questions that contain two concepts. For example:
[Image: example question asking participants to rate their manager's leadership and communication skills in a single question]

You may think your manager’s leadership is great but their communication skills could be improved. In that case, how do you decide on an answer? This also gets challenging when analysing the findings. Which skill does the manager need to work on if the rating is poor? All questions should relate to one concept. If required, add an extra question to explore the other concept.

  • Use balanced rating scales. Use an equal number of positive and negative options. This comes down to probability: with four options the natural spread would be 25% per answer, so having more positive options than negative increases the chances of getting positive feedback. An example of this would be:

[Image: unbalanced rating scale]

A balanced rating scale is shown below:

[Image: balanced rating scale]

With a balanced rating scale there is a greater chance of the results reflecting a participant’s true beliefs.
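If a sceptical stakeholder needs convincing, a quick simulation makes the point: with three positive options out of four, even someone answering completely at random looks "positive" about three-quarters of the time. A throwaway Python sketch with illustrative scale labels:

```python
import random

# An unbalanced scale: three positive-leaning options, one negative
options = ["Excellent", "Very good", "Good", "Poor"]
positive = {"Excellent", "Very good", "Good"}

# Simulate 10,000 participants answering completely at random
answers = [random.choice(options) for _ in range(10_000)]
share = sum(a in positive for a in answers) / len(answers)
print(f"'Positive' share from random answering: {share:.0%}")  # ~75%
```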

  • Avoid answers that overlap. An example of an overlapping scale would be:

[Image: example age ranges that overlap]

Obviously, someone who is 24 years of age has two options to choose from. The same advice goes for concepts.

  • Use open-ended questions. These allow us to better understand what is happening. It is great to use multiple-choice questions to gather proportions and priorities, and then ask the more probing "Why?". A common use is to follow up a satisfaction question. For example, follow the question "Overall, how satisfied were you with your experience of the website?" with "Why?" as an open text field. This can provide great insight into what was driving the feedback (a simple first-pass analysis sketch follows this list).
  • Use writing-for-the-web techniques. Using elements such as bolding key words, avoiding unnecessary copy and using a conversational tone can go a long way to making your survey more engaging and easier for participants to read and understand. For example, a question about gender is simplified in the second option below by removing unnecessary copy:
[Image: two versions of a gender question, the second with unnecessary copy removed]

(Note: Read Jessica Enders’ article for an in-depth exploration of how, when and why you should—or shouldn’t—ask for someone’s sex or gender in a survey).

  • Keep it short. There is often a temptation when writing surveys to add more areas for exploration. The problem is that they can become painfully long. A better approach is to keep the survey succinct and run another in a month or two.
  • Avoid asking about behaviour. While there is nothing stopping you from asking about behaviour, there are better techniques for collecting this type of information, and they are more likely to translate into better design decisions. For example, when assessing the effectiveness of myki (Melbourne's poor public transport ticketing system), observing people buying tickets and travelling throughout the network would yield more accurate and useful feedback than asking people how they had used the system over the last week.
  • Include "don't know" options. There will be cases where participants legitimately don't have an answer. It is more helpful to know that your audience doesn't hold an opinion on a topic than to force them into an answer, which can distort the picture by overestimating the positive or the negative. The same goes for neutral options in rating scales.
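On analysing those open-ended "Why?" answers: a rough first pass is simply counting recurring terms to suggest themes. Here is a minimal Python sketch; the stop-word list and sample responses are made up, and it is a prompt for reading the verbatims rather than a replacement for them.

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "i", "it", "was", "to", "and", "of", "my", "were"}

def top_terms(responses, n=10):
    """Count recurring words in open-ended answers, minus very common ones."""
    words = []
    for response in responses:
        words += [w for w in re.findall(r"[a-z']+", response.lower())
                  if w not in STOP_WORDS]
    return Counter(words).most_common(n)

responses = [
    "Couldn't find the search box",
    "Search results were confusing",
    "The checkout kept timing out",
]
print(top_terms(responses))  # 'search' surfaces as a recurring theme
```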

Once the survey has been written

It is a good idea to test the survey before launching it to your full audience. Initially, this could be done by asking a colleague or someone from your organisation to pilot the survey. Don't give them too much background on the survey; only provide the information any potential participant would have. Give them clear direction on the type of feedback you are looking for, something along the lines of:

  • Are there any questions that didn’t make sense to you?
  • Are there any questions you couldn’t answer or were missing the answer you wanted to provide?

Once you are happy that the questions are clear and can be answered, launch the survey to a subset of your audience. When using a panel, go out to a subset of the total sample; when using an intercept survey (a pop-up on a live website), limit the proportion of visitors who see the invitation (a sketch of such a sampling gate follows). Once you have checked that the questions are being completed as expected, go out to the whole sample.
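If you are building the intercept gate yourself, limiting the proportion of visitors can be as simple as a random check. A hypothetical Python sketch; the 5% soft-launch rate is illustrative, and most intercept tools have this built in.

```python
import random

def should_show_survey(sampling_rate=0.05):
    """Invite only a soft-launch fraction of visitors (here 5%) to the
    survey; raise the rate once the pilot data looks clean."""
    return random.random() < sampling_rate

if should_show_survey():
    print("Invite this visitor to the survey")
```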

Tools

There are many tools available for scripting and running surveys, ranging from lightweight, inexpensive tools right through to specialist market research tools. The more comprehensive tools include greater functionality for logic and routing within the survey, as well as more powerful reporting.

For most UX applications, simpler survey tools such as those discussed in the next section offer adequate functionality. My advice is to keep surveys simple. A lot of time can be spent creating clever logic and routing within a survey, but the more complex the survey, the greater the amount of testing required (a seemingly exponential increase; see the sketch below). Often the benefit gained from the additional complexity does not justify the time taken to set it up.
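To see why testing effort grows so quickly, it helps to think of skip logic as a graph of questions: every branch adds another path that must be walked end to end before launch. A toy Python sketch with invented question names and routes:

```python
# Each question routes to the next based on the answer; None ends the survey.
ROUTES = {
    "q1_used_product": {"yes": "q2_satisfaction", "no": "q4_why_not"},
    "q2_satisfaction": {"low": "q3_why_low", "high": "q5_done"},
    "q3_why_low": {"*": "q5_done"},
    "q4_why_not": {"*": "q5_done"},
    "q5_done": {"*": None},
}

def count_paths(question="q1_used_product"):
    """Count distinct routes through the survey; each one is a sequence
    that has to be tested before launch."""
    if question is None:
        return 1
    return sum(count_paths(nxt) for nxt in ROUTES[question].values())

print(count_paths())  # 3 paths even in this tiny survey
```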

Below is a list of some of the survey tools on the market. It is not intended to be exhaustive; rather, it is a place to start if you are interested in writing and running a survey.

SurveyMonkey
  • Cost (monthly; an annual option is also offered): Free, $23, $75
  • Collection methods: weblink, email, Facebook, or embed on your site or blog; enhanced security (SSL)*
  • Question types: 15
  • Question logic: Yes*
  • Question piping: Yes*
  • Analysis: real-time results; charts and graphs; text analysis*; SPSS integration*; multiple custom reports*; filter & cross-tabulate responses by custom criteria*; download responses*; create & download custom charts*
  • Notes: Offers good value.

SurveyGizmo
  • Cost (monthly; an annual option is also offered): Free, $36, $123 & $218
  • Collection methods: weblink, email*, Facebook*, or embed on your site or blog*
  • Question types: 22*
  • Question logic: Yes*
  • Question piping: Yes*
  • Analysis: real-time results; charts and graphs; text analysis*; SPSS export*; multiple custom reports*; filter & cross-tabulate responses by custom criteria*; download responses*; create & download custom charts*; scheduled reports*; TURF reports*
  • Notes: Of these tools it probably offers the most advanced functionality, and this is reflected in the price of its higher-end versions.

Wufoo
  • Cost (monthly; an annual option is also offered): Free, $14.08, $29.08, $74.08 & $183.25
  • Collection methods: weblink, Facebook, or embed on your site or blog; enhanced security (SSL)*
  • Question types: 8
  • Question logic: Yes*
  • Question piping: Yes*
  • Analysis: real-time results; charts and graphs
  • Notes: Isn't a dedicated survey tool, so it can also be used for other applications. It only offers 10-point Likert scales, which may be inadequate for some.

*Only available in paid-for solutions.

Consider the following when choosing a survey tool:

  • If you work in a medium to large organisation, someone will already have access to a survey tool. Use it. It will save you money and the time spent trying to choose one. Try marketing, HR or market research teams.
  • If you plan to use a research panel for your sample, contact them and see which tools they can integrate with easily.
  • For all but the most basic of surveys, expect to pay something for the tool. The costs are fairly low; for example, SurveyMonkey and SurveyGizmo have $19 offerings that remove most restrictions and give access to much of the functionality required.

Wrap-up

Surveys can be a really useful UX tool for informing the design process. The key to a successful survey is establishing the objectives and the information required up front, then making sure the questions asked cover them. Keep at the forefront of your mind the importance of creating a good experience for participants by writing appropriate questions. A well-designed survey will produce the best results.

Keep it short, keep the participant in mind when writing the questions and engage with your audience—good luck!
