Best Practices for Agile UX Testing

While we all dream of launching the perfect product or feature, it’s only through testing and iterating that we can refine and improve the user experience. Luckily, Agile development creates opportunities to test at every stage. Read on for tips on how you can fit testing into the Agile cycle.

Every product manager’s dream is to launch a product or new feature that is perfect for their customer right out of the gate. But we know this is hardly ever the case.

It’s only through experience, and repeated user feedback gathered through user experience testing, that we learn to tweak our product’s features and interface according to our users’ conscious or unconscious demands.

Over the last two decades, agile, sprint-based development has enabled an efficient and effective feedback loop between product management, development teams and design teams. The next stage in the Agile revolution is to add the customer and the user to this cycle, through Agile user experience testing (Agile UX Testing).

Test small, test often

Conducting market research and usability testing used to be an expensive and lengthy process. That’s why it was typically performed only once, or maybe twice during the design and development of new products and features. Since UX testing was often done towards the end of the product development cycle, the feedback gained was often used more as a “validation” exercise than an “exploratory” exercise.

It’s now possible to set up a UX test script in 5 minutes, receive qualitative picture-in-picture responses (webcam view recording of respondents + screen recording + audio + quantitative data) within hours, and add the customer and the user to the agile software development feedback loop.

You can even run a user test for every sprint, and for every design and development iteration. This iterative process avoids the risk of putting together a complete prototype and realising far too late that there is a flaw in the design that should have been dealt with in the early development stages.

You don’t need to run these tests with large sample sets. It’s a rule of thumb within the usability industry that five participants going through a qualitative picture-in-picture recording session like that of Userlytics.com will allow you to uncover around 80% of the usability and user experience issues in a design.
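This heuristic is usually traced back to the problem-discovery model described by Nielsen and Landauer, in which each additional participant uncovers a fixed share of the remaining issues. Here is a minimal sketch of that model, assuming the commonly cited average detection rate of roughly 31% per participant (the real rate varies from study to study):

```python
def share_found(n, rate=0.31):
    """Expected share of usability problems uncovered by n participants,
    assuming each participant independently finds `rate` of all problems."""
    return 1 - (1 - rate) ** n

for n in (1, 3, 5, 10):
    print(f"{n:>2} participants -> ~{share_found(n):.0%} of problems found")
# 5 participants -> ~84%, roughly the "80% with five users" heuristic
```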

See who is testing your product

It’s important to always have your target persona front of mind. It helps you look at your product through the lens of your customer – as much as possible.

However, it’s impossible to fully immerse yourself as your own customer no matter how hard you try. If you can’t be them, watch them. The best feedback you can get on your product is from the users it’s meant for.

Online remote tests with only screen recording and audio don’t allow you to know if the person providing the feedback is truly your target persona. Including a webcam view of participants adds a whole new level of depth and detail to your qualitative UX testing insights. You can start to verify whether the person is actually your target persona, and better understand their contextual surroundings as they interact with your UI and answer questions. By being able to visually analyse the tester, you can track their real-time emotional and physiological reactions to the product.

It’s always better to actually see people use your product. 

When launching a test with Userlytics, you can use demographic filters and set screener questions to ensure their global participant panel provides you with a tester that fits your persona. They also track participant location via their IP, and review every result to reject or approve it through their QA process.

Getting feedback on your product is always great, but it’s even better when the feedback is from participants that match your target persona.

Moderated vs. unmoderated vs. hybrid

For lab-based user testing, moderated testing (a UX researcher guiding the respondent) used to be the norm. The advent of new technologies allowed UX researchers to conduct moderated sessions on a remote, online basis, using screen sharing platforms like GoToMeeting, Webex, Zoom, Skype, Hangouts and so on. The problem is scalability. UX researchers only have so much time to moderate sessions with target participants, let alone handle the project management, scheduling and logistics required to run a moderated usability testing process.

When Userlytics and its peers in the industry invented unmoderated usability testing, the economics, time requirements and scalability of user testing advanced by orders of magnitude. The drawback? A rigid non-personalisable test script.

In other words, a single UX researcher could manage tests of hundreds or even thousands of respondents, anywhere in the world, at a fraction of the time and cost required for moderated or in-lab UX testing. However, it was not possible to adapt the unmoderated test script according to the answers and actions of each respondent, as would be possible with moderated usability testing.

But that problem has now been solved, through conditional logic (“branching” or “skipping” logic). The most advanced UX testing platforms have a hybrid approach that marries the scalability, speed and economics of unmoderated user testing with the personalisation of moderated user testing by leveraging branching logic. When branching logic is applied to a given task or question in the test script, it redirects the tester to a new task or question depending on their response to the prior one. Through branching logic you can essentially replace the moderator’s function by creating a customised set of instructions for different testers depending on their actions, as the sketch below illustrates.

Branching logic creates more personalised tests.
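As a rough illustration (the task IDs and script structure below are hypothetical, not the format of any particular platform), a branched test script is essentially a lookup from each answer to the next task:

```python
# Hypothetical branched test script: each task maps possible answers to
# the next task ID; an empty mapping marks a terminal task.
script = {
    "t1": {"prompt": "Did you find the pricing page?",
           "branches": {"yes": "t2", "no": "t3"}},
    "t2": {"prompt": "Which pricing tier would you choose, and why?",
           "branches": {}},
    "t3": {"prompt": "Show us where you expected pricing to be.",
           "branches": {}},
}

def next_task(task_id, answer):
    """Return the next task ID given the tester's answer, or None if done."""
    return script[task_id]["branches"].get(answer)

print(next_task("t1", "no"))  # -> "t3": the script adapts with no moderator
```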

Efficient analysis

In an agile sprint where time is short, you need to get to your analysis quickly. Rapidly and economically launching a user experience test is only half of the equation; reviewing the results and making decisions in a timely manner is the other half. If you launch a test with 30-minute sessions for 10 participants, you and your colleagues have 5 hours of video to watch, analyse, annotate and pull insights from to inform product and UI optimisation. 100 participants would imply 50 hours. This time adds up quickly.

When you’re working in sprints, you need results quickly.

Your user testing platform should provide the tools you need to quickly review participant sessions, leverage searchable, time-stamped and hyperlinked audio transcriptions to locate the most interesting actions and comments, add shareable hyperlinked annotations, and create highlight reels.
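To see why time-stamped transcription matters, consider that locating a moment becomes a text search rather than a scrub through hours of video. Here is a small sketch of the idea (the transcript structure is invented for illustration):

```python
# Each segment pairs a timestamp in seconds with what the participant said.
transcript = [
    (12.0, "Okay, I'm looking for the pricing page."),
    (47.5, "Hmm, I expected pricing to be in the top menu."),
    (95.2, "This checkout form is confusing."),
]

def find_moments(segments, keyword):
    """Return (timestamp, text) pairs whose text mentions the keyword."""
    return [(t, s) for t, s in segments if keyword.lower() in s.lower()]

for t, s in find_moments(transcript, "pricing"):
    print(f"{int(t // 60)}:{int(t % 60):02d}  {s}")  # jump points for a reel
```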

Benchmark your product

Learning from how your users engage with your product is one way to ensure a positive customer experience. It’s also important, however, to see how your product compares to your competitors through benchmarking.

Benchmarking your prototype designs against each other, against existing production assets, and against your competition will allow you to identify additional opportunity areas where you can improve your usability design to create a superior customer experience.

The best way to do this is using pre-formatted System Usability Scale (SUS) questions with automatic calculation of the resulting score. You can also use comparison metrics such as net promoter score (NPS), time on task, and success/failure rate, which allow you to quantitatively measure your usability and user experience across different design iterations and against the competition, or best-practice websites and apps. As the saying goes: if you are not measuring, you are not managing.
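Both calculations are simple enough to automate. The standard SUS formula scores odd-numbered items as (response − 1) and even-numbered items as (5 − response), then multiplies the total by 2.5 to give a 0–100 score; NPS is the percentage of promoters (ratings of 9–10) minus the percentage of detractors (0–6). A minimal sketch:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 responses."""
    assert len(responses) == 10
    raw = sum(r - 1 if i % 2 == 0 else 5 - r  # items 1,3,5,... sit at even indices
              for i, r in enumerate(responses))
    return raw * 2.5  # scales the 0-40 raw total to 0-100

def nps(scores):
    """Net promoter score from 0-10 ratings: % promoters minus % detractors."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0
print(nps([10, 9, 8, 6, 10]))                     # -> 40.0
```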

TestFlight and its Google Play equivalent

Testing your product early in the process is essential. However, once your mobile app prototype reaches a high-fidelity stage and is on the App Store or Google Play, you need a process for testing the app prior to production.

Some platforms use testing apps, which may have the drawback of incompatibility between the tested app and the TestFlight app. Others, like Userlytics, use an approach that does not require any kind of SDK or testing app, avoiding these potential usability testing pitfalls.

Conclusion

Agile UX testing is an iterative, customer-centric approach: running many small-sample tests with your target persona at every stage of the product lifecycle allows you to optimise your product prior to launch. It’s like having your customers form a seamless whole with your sprint teams. This strategy reduces the time and money wasted in design and development, and results in much higher customer engagement and satisfaction scores.

If you want to learn more about agile UX, or how to launch an agile UX testing process, you can schedule a demo here.

We just wanted to let you know this is a sponsored post. Every now and then, we partner with people and companies doing awesome things.

Transcript: Ask the UXperts: Efficiently Organise and Utilise Your Research Findings — with Benjamin Humphrey

Benjamin Humphrey joined us on Slack to share practical solutions to help you use your findings effectively. Here is the full transcript in case you missed it.

Efficiently organising research findings so that we can effectively use them to their greatest benefit is often a pain point. Luckily help is at hand, in the form of Benjamin Humphrey.

Benjamin is co-founder of Dovetail, a new product that helps teams understand their customers through analysis of user feedback and qualitative research.

We were lucky to have the opportunity to pick Benjamin’s brain in our Slack channel yesterday. It was one of the busiest sessions we’ve hosted but he managed like a trooper.

If you’re interested in seeing what we discussed, or you want to revisit your own questions, here is a full transcript of the chat.

Transcript

hawk
2018-03-07 23:04
The formal intro:

hawk
2018-03-07 23:04
Benjamin is a co-founder of Dovetail, a new product that helps teams understand their customers through organization and analysis of user feedback and qualitative research. Dovetail is kind of like Google Docs meets Trello, designed specifically for researchers and product managers.

claudia.realegeno
2018-03-07 23:04
Do you find it easier to structure primarily by participant, by event, or some other method?

hawk
2018-03-07 23:04
Prior to starting Dovetail, Benjamin was a lead designer at Atlassian working on JIRA Agile, the growth team, and Atlassian’s cloud platform. He led design initiatives to bring consistency and modernity to Atlassian’s cloud offerings and was heavily involved in shaping Atlassian’s new design language, “ADG 3”, and their new product Stride.

Benjamin is a multi-disciplinary designer working across research, user experience, interface design, and frontend development.

hawk
2018-03-07 23:05
Thanks heaps for your time today @benjamin – we appreciate it.

hawk
2018-03-07 23:05
Can you give us some history and a brief intro on the topic?

hawk
2018-03-07 23:05
Then we’ll get into questions.

benjamin
2018-03-07 23:05
Hey everyone!

benjamin
2018-03-07 23:05
Thanks for joining :slightly_smiling_face:

krisduran
2018-03-07 23:05
Thank you @benjamin for doing this today and sharing your experience!

benjamin
2018-03-07 23:06
As @hawk mentioned I’m a product designer, ex-Atlassian, and now founder / CEO of a SaaS startup focused on building a great product for teams to manage customer feedback & user research.

taraleeyork
2018-03-07 23:06
Hi everyone

benjamin
2018-03-07 23:06
I’d love to talk about anything to do with research, product design, and generally just building great products since that’s my passion.

benjamin
2018-03-07 23:06
To give you a few ideas for topics: advocating research inside a data-driven organization, the relationship between designers / researchers / PMs, collecting, storing, organizing, and analyzing data, sharing knowledge and getting buy-in with stakeholders, escaping the daily grind and setting long term visions, design / research team org structure, and more.

kaselway
2018-03-07 23:06
Well! There’s 7k people here so it’s a bit of chaos!

hawk
2018-03-07 23:07
Cool. Are you ready for questions?

benjamin
2018-03-07 23:07
Specifically the topic is about research data organization / sharing – but I’m also happy to expand beyond that if you have more general questions for me about design or research :slightly_smiling_face:

benjamin
2018-03-07 23:07
@hawk yep!

hawk
2018-03-07 23:07
Ok team, shoot…

hawk
2018-03-07 23:07
From @rachelreveley What can you do when you use various tools to create different deliverables such as Google Slides, Axure, Foundation etc?

maadonna
2018-03-07 23:08
How do you avoid re-researching the same things over and over? i.e. how do you make old research information available to start with, and only researching what you don’t know (I have never seen a team do this well – everyone just seems to re-research)

benjamin
2018-03-07 23:08
Hmm. What do you mean by “what can you do”? As in, how can you consolidate everything into a single deliverable / outcome?

taraleeyork
2018-03-07 23:08
What do you do when a client/employer tells you they don’t have a budget for research?

frankenvision
2018-03-07 23:09
Q: What inspired you to create Dovetail?

benjamin
2018-03-07 23:09
I think one of the problems I’ve seen in research is that there isn’t really a ‘standardised’ set of tools that researchers use. Unlike designers, which have Sketch / Photoshop / InVision emerging as the platform. Researchers still have a really disparate collection of digital and physical tools

benjamin
2018-03-07 23:09
They also tend not to talk to one another

rachelreveley
2018-03-07 23:09
Yes. I find that I end up with lots of very different pieces and have to somehow link them together

rvaelle
2018-03-07 23:10
Any tips on being efficient in organizing and analyzing data? Not getting overwhelmed with data. :fearful:

benjamin
2018-03-07 23:10
@rachelreveley Right. I don’t feel like I have a great solution for you, to be honest. I think the variety in process / methods / and tuning the output to the stakeholders means that the number and type of tools you’ll use varies so much between projects

isha
2018-03-07 23:10
Wow – that’s a lot!

benjamin
2018-03-07 23:11
In the past everything tends to end up in a document or slideshow

benjamin
2018-03-07 23:11
which is not ideal, imo

benjamin
2018-03-07 23:11
part of the issue is that the raw data is disconnected from the output

rachelreveley
2018-03-07 23:11
They do. the closest to a solution so far is Basecamp but I’m not a huge fan.

benjamin
2018-03-07 23:12
Until there’s something that can suck in a bunch of data in different formats and let you manipulate that, analyze it, distill it, then spit it out as a great output for stakeholders, I think you’re a bit stuck with what we have today

benjamin
2018-03-07 23:12
At Atlassian we talked a lot about the “IDE” for people

benjamin
2018-03-07 23:12
Using that metaphor of developer IDE’s who have lots of powerful features

benjamin
2018-03-07 23:12
What’s the IDE for designers? PMs? Researchers?

benjamin
2018-03-07 23:13
I don’t think there’s a strong story yet for the latter

benjamin
2018-03-07 23:13
But you can see software emerging for the first two

benjamin
2018-03-07 23:13
Anyway, I’ll move on!

jamie
2018-03-07 23:13
What do you find is the best way to present your findings not only to stakeholders but to team members both in design and tech streams?

danielle
2018-03-07 23:13
What’s IDE?

james.g.jenner
2018-03-07 23:13
IDE = Integrated Development Environment.

benjamin
2018-03-07 23:14
@claudia.realegeno I’m architecting an in-house database to store research findings and struggling with how to incorporate tagging capabilities and account for events where there were multiple attendees. How do you handle these challenges in a world of normalized databases? Do you find it easier to structure primarily by participant, by event, or some other method?

guido
2018-03-07 23:14
Intentionally Difficult Employees

guido
2018-03-07 23:14
oh

guido
2018-03-07 23:14
well, almost got it

benjamin
2018-03-07 23:14
@claudia.realegeno Your first part of the question might be a bit complicated for me to answer here. But the second part I can have a crack at. I think it really depends whether a) you’re doing a research project, with an end date, or b) you’re embedded in a team and you’re doing ongoing research.

benjamin
2018-03-07 23:15
Also if you’re doing strategic / tactical research

claudia.realegeno
2018-03-07 23:15
ongoing research

benjamin
2018-03-07 23:15
For instance, if you have a specific goal or outcome in mind

benjamin
2018-03-07 23:15
Right

benjamin
2018-03-07 23:15
So, user testing sessions, interviews, etc?

bkesshav
2018-03-07 23:15
Is there any tool that use AI and machine learning to highlight key findings and recommend areas to focus as pain points?

claudia.realegeno
2018-03-07 23:15
sometimes we have a clear measurable goal, sometimes it’s more qualitative

claudia.realegeno
2018-03-07 23:16
We’d like the flexibility for both, and even just grabbing ad-hoc statements

benjamin
2018-03-07 23:16
I think the general idea you want to get to then, with ongoing research, is building up a bit of a library of themes that you’re observing over time, beyond the specific individual events

benjamin
2018-03-07 23:16
At Atlassian, researchers are embedded inside product teams

claudia.realegeno
2018-03-07 23:16
yes, exactly!

benjamin
2018-03-07 23:16
So across a bunch of different methods, they’re forming these patterns / themes over time, somewhat regardless of the actual method they used to discover those insights

benjamin
2018-03-07 23:17
Generally they’ll write up some stuff, maybe on a cadence, or perhaps have an ongoing short meeting, to then present the outcome of the events as evidence to support a more macro theme

benjamin
2018-03-07 23:18
So I would say, for ongoing research, you probably want to structure by theme as you go (you won’t start out with themes at the beginning) and then use the specific events as evidence

krisduran
2018-03-07 23:18
Do you have a recommendation on how to present data when talking with stakeholders?

benjamin
2018-03-07 23:18
@maadonna How do you avoid re-researching the same things over and over? i.e. how do you make old research information available to start with, and only researching what you don’t know (I have never seen a team do this well – everyone just seems to re-research)

benjamin
2018-03-07 23:18
Heh

benjamin
2018-03-07 23:18
This is like the biggest struggle that the Atlassian researchers had when I left

benjamin
2018-03-07 23:19
I think everyone struggles with this, especially growing companies where you have new people joining all the time

benjamin
2018-03-07 23:19
IMO the problem comes down to bad tooling for storing research insights

benjamin
2018-03-07 23:19
Too much reliance on the “tribal knowledge” of long-time employees, who would say something like, “hang on, didn’t we do this a while ago?” but you wouldn’t know that without them jumping in

jamie
2018-03-07 23:20
can you speak a bit about different methods you use to synthesize and document qualitative data

benjamin
2018-03-07 23:20
Part of the challenge is that the type of data you touch with research is so varied that no system handles it all perfectly. One product that works great for storing emails from customers or interview notes might not work for video. Another which is great for video might not work for tweets or survey results.

maadonna
2018-03-07 23:20
I’d be interested in hearing how anyone does this :slightly_smiling_face:

dorothee
2018-03-07 23:20
What do you do when you’re asked to provide a UX budget estimate for an upcoming product release, but you only have a very high-level idea of what the release theme is going to be?

benjamin
2018-03-07 23:21
@maadonna At Atlassian we had some success with organising things into “FAQ” style pages by product

benjamin
2018-03-07 23:21
Where you kind of start with the question and that links off to the research

krisduran
2018-03-07 23:21
Q: Do you find storytelling a key part of presenting data to non-research folks?

benjamin
2018-03-07 23:21
So if you had a question like, fairly generic, “What do people do in their first 5 days of using JIRA?” that might then link to some research on onboarding

benjamin
2018-03-07 23:21
But there are so many problems with this

benjamin
2018-03-07 23:21
It requires constant maintenance

benjamin
2018-03-07 23:21
It gets out of date

benjamin
2018-03-07 23:22
It also requires people to use the same formatting so you can compare apples to apples

krisduran
2018-03-07 23:22
Q: When do you know you’ve got enough data and need to pull back out of the rabbit hole?

benjamin
2018-03-07 23:22
Data repositories are kind of a way to solve it

benjamin
2018-03-07 23:22
But

benjamin
2018-03-07 23:23
The data itself is also quite messy in its original form so the repository ends up being tucked away out of view from stakeholders because it’s a total mess.

benjamin
2018-03-07 23:23
You really need some way to say, “hey, here’s my raw data, and it’s really messy, but I can take excerpts out of that and add them into something that’s more bite-sized and shareable.”

frankenvision
2018-03-07 23:23
Q: What do you do with results of your research when you realized you’ve headed in the wrong direction on a project?

bkesshav
2018-03-07 23:24
Q: Is there any tool that use AI and machine learning to highlight key findings from research and recommend areas to focus as pain points?

benjamin
2018-03-07 23:24
So yeah, I think, in larger companies, it’s a tooling problem. I think it’s probably only really a problem in larger companies anyway, because in a smaller organisation, you’ll have less researchers / designers who probably talk more and can hold more in their heads.

benjamin
2018-03-07 23:24
Heh

benjamin
2018-03-07 23:24
Popular topic

benjamin
2018-03-07 23:24
Okay, next one

benjamin
2018-03-07 23:24
@taraleeyork What do you do when a client/employer tells you they don’t have a budget for research?

benjamin
2018-03-07 23:24
Hmm. My co-founder sitting next to me says “offer them a trial”

benjamin
2018-03-07 23:24
Haha

benjamin
2018-03-07 23:24
No, I think, it really depends

benjamin
2018-03-07 23:25
If you’re really passionate about research for this project

benjamin
2018-03-07 23:25
Then I think you’ll want to find some way to do it sneakily on the fly

benjamin
2018-03-07 23:25
Even a few structured customer interviews, recorded, can be proof of the value of research

aquazie
2018-03-07 23:25
agreed on sneaking in, if needed

benjamin
2018-03-07 23:26
So for a couple of hundred dollars, you should be able to recruit maybe three people for 30 minute interviews

benjamin
2018-03-07 23:26
Then it’s just saying “the proof is in the pudding” right

benjamin
2018-03-07 23:26
We used this tactic A LOT at Atlassian

benjamin
2018-03-07 23:26
Especially a couple of years ago when research was starting to mature

benjamin
2018-03-07 23:27
Atlassian has gone through a stage of no designers → convincing the value of design → no researchers → convincing the value of research

benjamin
2018-03-07 23:27
And a lot of that was simply doing it, even if there wasn’t budget for it

benjamin
2018-03-07 23:27
Not the best answer, but yeah, that’s just the reality of organisational politics I guess

benjamin
2018-03-07 23:28
@frankenvision Q: What inspired you to create Dovetail?

benjamin
2018-03-07 23:28
I actually wrote a blog series on the beginnings of Dovetail

frankenvision
2018-03-07 23:28
Thanks @benjamin I will check it out

benjamin
2018-03-07 23:28
So for the full story I guess read that, but the abridged version is that I noticed a distinct lack of decent software for researchers when I worked at Atlassian

benjamin
2018-03-07 23:28
Research software, quite frankly, sucks

taraleeyork
2018-03-07 23:29
Thanks for the answer @benjamin

benjamin
2018-03-07 23:29
Ironically it’s often poorly designed and hella expensive

tyler
2018-03-07 23:29
Q: What are your views on prioritizing Quantitative Data over Qualitative User interviews for a consumer product?

benjamin
2018-03-07 23:29
It’s also a huge opportunity because it’s so far reaching

benjamin
2018-03-07 23:30
We think about the key tent pegs of research – collection, organization, analysis, and sharing

benjamin
2018-03-07 23:30
In each of those, you have a variety of tools

benjamin
2018-03-07 23:30
Survey software, data repositories, QDA tools, collab tools

benjamin
2018-03-07 23:30
Nobody has really flipped those verticals into one horizontal, integrated path

benjamin
2018-03-07 23:31
So that’s kind of the realization I had

benjamin
2018-03-07 23:31
@rvaelle Any tips on being efficient on organizing and analyzing data? Not getting overwhelmed with data.

cindy.mccracken
2018-03-07 23:31
Are you able to take study notes in Dovetail? Observers too?

benjamin
2018-03-07 23:31
Hmm. Being quite ruthless in what you keep around.

benjamin
2018-03-07 23:31
For instance, take a user testing session.

benjamin
2018-03-07 23:32
You might have 30 min of video there, but how much of that is setting up, introductions, technical issues, etc.

benjamin
2018-03-07 23:32
So maybe cut your user testing videos into a “highlight reel” and you’ll have less noise in your data

benjamin
2018-03-07 23:32
Also, I like the whole “insight as a tweet” thing

benjamin
2018-03-07 23:32
I’ve seen a lot of researchers write these really long internal blog posts or presentations

benjamin
2018-03-07 23:32
And they’re really ineffective IMO

benjamin
2018-03-07 23:33
The most successful approach I’ve seen is simply showing stakeholders actual quotes from customers or video from user testing.

benjamin
2018-03-07 23:33
For instance, at Atlassian, instead of creating research reports, I used to buy popcorn for our team and invite everyone (PM, developers, QA) along to watch pre-recorded user testing videos. After each one we’d discuss them together and take a few quick notes. Everyone knew what the problems were and the next steps. No need for a presentation or a report.

benjamin
2018-03-07 23:33
Let the data speak for itself

cindy.mccracken
2018-03-07 23:33
In a couple companies where I’ve worked, the best way to make sure research is kept top of mind is writing stories for the backlogs. Then they get prioritized with the rest of the work.

benjamin
2018-03-07 23:34
@jamie What do you find is the best way to present your findings not only to stakeholders but to team members both in design and tech streams?

benjamin
2018-03-07 23:34
Nice segue there

benjamin
2018-03-07 23:34
I can rattle off another couple of examples of techniques I used at Atlassian

benjamin
2018-03-07 23:34
I had lots of success bringing developers along with me on contextual inquiries or having them sit in on interviews. Assign them a role like photographer or note-taker. They love it and they can experience customer pain first hand.

benjamin
2018-03-07 23:35
Another technique I used at Atlassian was to set up a HipChat room and connect it to Twitter using IFTTT. All it did was show all the tweets mentioning @JIRA on Twitter, and spoiler, most of them were not happy tweets.

benjamin
2018-03-07 23:35
This brought customer pain in front of the team in the tools they use every day. We even put incoming user feedback on wallboard televisions alongside the developer’s build status.

benjamin
2018-03-07 23:35
I think the most effective researchers are the ones that simply act as a messenger for the data / evidence from the customer / users in the research

benjamin
2018-03-07 23:35
In some ways you’re kind of like a director of a movie

benjamin
2018-03-07 23:36
You have all of these clips on the cutting room floor

benjamin
2018-03-07 23:36
You need to take those and edit them into what you’re going to show, fit it into 1.5 hours

benjamin
2018-03-07 23:36
(hopefully a lot less than that)

frankenvision
2018-03-07 23:36
Q: How do you sort through pain points once you find them? Do you put them in a severity chart and vote on them with your team?

hawk
2018-03-07 23:37
FYI We have 10 questions queued up which will likely take us to the end of the session

benjamin
2018-03-07 23:37
Time is flying!

tyler
2018-03-07 23:37
I create a sortable excel sheet

benjamin
2018-03-07 23:37
@bkesshav Is there any tool that use AI and machine learning to highlight key findings and recommend areas to focus as pain points?

benjamin
2018-03-07 23:37
I don’t think there is any software that can do what researchers do today

benjamin
2018-03-07 23:38
There’s lots of ML that can *help* you get insights

benjamin
2018-03-07 23:38
For example, we just shipped automatic sentiment analysis yesterday

benjamin
2018-03-07 23:38
This is kind of helpful for parsing large amounts of data

benjamin
2018-03-07 23:38
It gives you a bit of a starting point to work from, everything strongly negative is in one place

benjamin
2018-03-07 23:39
Unless you have an enormous data set (which most companies do not), ML will not be able to uncover key findings / distill insights etc from a variety of raw data

benjamin
2018-03-07 23:39
I think eventually we might get to “black box research” but empathy and context are so important for research

davidbaird
2018-03-07 23:39
parsing is an interesting term :slightly_smiling_face:. Therein lies the appropriate degree of ‘filtering’

benjamin
2018-03-07 23:39
So I think computers can absolutely aid researchers

benjamin
2018-03-07 23:40
And there is not enough of that today IMO

cindy.mccracken
2018-03-07 23:40
I like this idea, but you’d need to capture those next steps somewhere, right?

benjamin
2018-03-07 23:40
But I don’t think researchers need to worry about being replaced by ML / AI

benjamin
2018-03-07 23:40
@krisduran Do you have a recommendation on how to present data when talking with stakeholders?

benjamin
2018-03-07 23:41
Somewhat covered above – keep it simple, brief, present the raw data / evidence where possible, stay away from long presentations. In Dovetail, the idea is that the raw data is stored alongside your insights, and then that can be shared with stakeholders to collaborate on. So then they can just click around and explore the insights, and dive into the raw data if necessary. It removes the disconnect between what’s in Powerpoint vs. what’s in your spreadsheet or Dropbox.

benjamin
2018-03-07 23:41
Another technique that I’ll quickly mention is to involve them throughout the process

benjamin
2018-03-07 23:41
This isn’t always feasible

benjamin
2018-03-07 23:41
But if it is possible, (same goes for design), it’s great if you can have your team involved in collection / analysis etc.

benjamin
2018-03-07 23:42
Again at Atlassian we tried to do this where possible

benjamin
2018-03-07 23:42
Turns out a developer is going to be much more likely to be excited about a new feature if she’s been involved in the design process from the start

benjamin
2018-03-07 23:42
@dorothee What do you do when you’re asked to provide a UX budget estimate for an upcoming product release, but you only have a very high-level idea of what the release theme is going to be?

frankenvision
2018-03-07 23:43
Q: How many researchers did you work with at Atlassian?

benjamin
2018-03-07 23:43
Tell them estimation is hard and add 50% ?

benjamin
2018-03-07 23:43
I’m not sure, to be honest!

benjamin
2018-03-07 23:43
That’s what developers do to me all the time, so maybe it should go the other way too :joy:

benjamin
2018-03-07 23:43
@krisduran Q: Do you find storytelling a key part of presenting data to non-research folks?

benjamin
2018-03-07 23:43
Yep, absolutely!

benjamin
2018-03-07 23:44
At Atlassian, every year, the design / research / writing team come together from around the world in Sydney and have a week together

benjamin
2018-03-07 23:44
I’ll find the video, hang on

bkesshav
2018-03-07 23:44
I didn’t ask if AI can replace researchers, but whether technology like AI can infer and create insights from the research outcomes.

Most time is spent looking into the raw data and research findings. Can technology use the data to speed up the process of analysing and drawing insights?

benjamin
2018-03-07 23:45
Anyway, the theme from a couple of years back was storytelling

benjamin
2018-03-07 23:45
I think it’s a critical skill for designers and researchers, and PMs. Everyone, really.

benjamin
2018-03-07 23:45
You need to take people on a journey, build empathy with characters (often the users), and propose a solution

benjamin
2018-03-07 23:45
It’s somewhat like making a film. Pixar are very good at this. Channel Pixar in your research!

benjamin
2018-03-07 23:46
@bkesshav Right. My answer would be not right now, but in a few years, possible. At the moment the ML / natural language stuff is mostly helpful for broadly categorising large sets of data.

benjamin
2018-03-07 23:46
To get true insights you need a human touch to understand the context and the goal of the research

benjamin
2018-03-07 23:46
@krisduran Q: When do you know you’ve got enough data and need to pull back out of the rabbit hole?

benjamin
2018-03-07 23:47
Good question. When you start seeing the same things over and over.

benjamin
2018-03-07 23:47
In theory, the obvious themes will emerge quite quickly during your research.

benjamin
2018-03-07 23:48
It also depends a lot on how rigorous you want to be

benjamin
2018-03-07 23:48
Often, with research, you’re not looking for statistical significance

benjamin
2018-03-07 23:48
There’s usually no need for that level of certainty

benjamin
2018-03-07 23:48
Research is very helpful as a quick, lean, and directional approach a lot of the time

benjamin
2018-03-07 23:48
I’d recommend Erika Hall’s book Just Enough Research

benjamin
2018-03-07 23:48
Which is entirely devoted to this topic

benjamin
2018-03-07 23:49
@frankenvision Q: What do you do with results of your research when you realize you’ve headed in the wrong direction on a project?

benjamin
2018-03-07 23:49
If the data is valuable, keep it, and maybe write a brief summary of what you learned, even if it’s not relevant for the project.

benjamin
2018-03-07 23:49
Again depends on whether you’re embedded, doing ongoing research, or whether you’re working on a once-off project

benjamin
2018-03-07 23:50
If it’s completely worthless and will be in the future, then chuck it. Don’t fall into the sunk cost fallacy.

benjamin
2018-03-07 23:50
@tyler What are your views on prioritizing Quantitative Data over Qualitative User interviews for a consumer product?

benjamin
2018-03-07 23:50
Spicy question!

frankenvision
2018-03-07 23:50
thanks

benjamin
2018-03-07 23:50
I don’t think there’s any need to prioritize one over another

benjamin
2018-03-07 23:50
They’re very different

benjamin
2018-03-07 23:51
A huge myth in software development is that these two things compete against one another

benjamin
2018-03-07 23:51
That couldn’t be further from the truth

benjamin
2018-03-07 23:51
Quant can tell you *what* users are doing, but qual can tell you *why*

benjamin
2018-03-07 23:51
I wrote a wee piece on this: https://dovetailapp.com/guides/qual-quant

benjamin
2018-03-07 23:52
There’s a whole topic here, in itself, which is using qual and quant data in software development

benjamin
2018-03-07 23:52
humans love certainty

benjamin
2018-03-07 23:52
people think quantitative data brings certainty

benjamin
2018-03-07 23:53
but often, it’s really misleading / open to interpretation

hawk
2018-03-07 23:53
You’re rocking this @benjamin

benjamin
2018-03-07 23:53
There’s been a huge trend the past few years

hawk
2018-03-07 23:53
We have 2 questions left and we’ll call it a wrap

benjamin
2018-03-07 23:53
Companies think quantitative data has become a “solution” for a lot of people, a silver bullet

benjamin
2018-03-07 23:53
Partly because it’s been much more accessible

benjamin
2018-03-07 23:53
Before we had Mixpanel, GA, etc.

benjamin
2018-03-07 23:54
We had to talk to users, talk to customers

benjamin
2018-03-07 23:54
These tools made quant much easier to access, and since humans love certainty, they seemed to provide it

benjamin
2018-03-07 23:54
As someone who worked on growth / analytics at Atlassian, I can assure you that analytics are often anything but certain

benjamin
2018-03-07 23:55
There’s a bit of a renaissance happening now I think

benjamin
2018-03-07 23:55
A few years back, the 4th or 5th hire in your startup would be a data analytics / growth person

benjamin
2018-03-07 23:55
Now I’m seeing more and more Dovetail customers who are startups with researchers as that hire

benjamin
2018-03-07 23:55
@cindy.mccracken Are you able to take study notes in Dovetail? Observers too?

benjamin
2018-03-07 23:55
Yep. Not 100% sure what you mean by observers, but it has a real time collab editor, like Google Docs.

benjamin
2018-03-07 23:56
@frankenvision Q: How do you sort through pain points once you find them? Do you put them in a severity chart and vote on them with your team?

cindy.mccracken
2018-03-07 23:56
Yeah, that’s what I mean.

benjamin
2018-03-07 23:57
@frankenvision Yeah, sort of. It kind of depends on the team. With a newer team, you’ll need more structure, so probably some card sorting or meetings to prioritise what to work on. If the team is smaller, or more established, then you’ll probably have more trust, so maybe the researcher can just suggest an ordered list of pain points to work through.

benjamin
2018-03-07 23:58
I can show you a screenshot of our customer feedback board on Dovetail

benjamin
2018-03-07 23:58
The tags, that is

benjamin
2018-03-07 23:58
[screenshot: the Dovetail customer feedback board]
benjamin
2018-03-07 23:59
This is basically how we manage our pain points / customer feedback

benjamin
2018-03-07 23:59
So everything is tagged, then we use the board to group the tags into product areas or existing vs. new feature

benjamin
2018-03-07 23:59
Then rank them

benjamin
2018-03-07 23:59
So something similar to that is probably a good way to sort / organize your pain points – either on a post-it note board, or Trello, or Dovetail if you want to try that

benjamin
2018-03-08 00:00
That was the last question, I think!

hawk
2018-03-08 00:00
Nice!

benjamin
2018-03-08 00:00
I can stick around for a few more minutes, if anyone has anything pressing

hawk
2018-03-08 00:00
That was pretty full on but you killed it.

benjamin
2018-03-08 00:00
Or maybe a follow up from anything I said?

frankenvision
2018-03-08 00:00
That was a great session, thanks

hawk
2018-03-08 00:00
Much appreciated.

__end transcript__

Choosing the Right UX Research Method

As more and more organisations become focused on creating great experiences, more teams are being tasked with conducting research to inform and validate user experience objectives.

UX research can be extremely helpful in crafting a product strategy and ensuring that the solutions built fit users’ needs, but it can be hard to know how to get started. This article will show you how to set your research objectives and choose the right method so that you can uncover the information you need.

When to do research

The first thing to know is that there is never a bad time to do research. While there are many models and complicated diagrams to describe how products get built, essentially, you’re always in one of three core phases: conceptualising something brand new, in the middle of designing and/or building something, or assessing something that’s already been built.

There’s plenty to learn in each of those phases. If you’re just starting out, you need to focus on understanding your potential users and their context and needs so that you can understand your best opportunities to serve them. In other words, you’re trying to figure out what problems to solve and for whom. This is often called generative or formative research.

Research can add value at any stage, whether that’s conceptualising, designing or refining.

Once you’re actively building something, you’ll shift your focus to analysing the solutions that you’re coming up with, and making sure that they address the needs of your users. You’ll want to assess both conceptual fit and the quality of specific interactions. We usually call this evaluative research.

When you have a live product or service, you’ll want to continue to assess how well you’re serving people’s needs, but you’ll also want to use research to discover how people change and how you can continue to provide value. At this point, you’ll be doing a mix of the generative type of work that is generally in the conceptual phase and evaluative work.

There is no cut-and-dried guide of exactly what methods to employ when, but there should never be a time that you can’t find an open question to investigate.

Determine your specific research objectives

At any given time, your team might have dozens of open questions that you could explore. I recommend keeping a master list of outstanding open questions to keep track of possible research activities, but focusing on answering just one open question at a time. The core goal of a study will determine which method you ultimately use.

If you need help coming up with research goals, consider things like:

  • the stage of the project you’re in
  • what information you already know about your users, their context, and needs
  • what your business goals are
  • what solutions already exist or have been proposed
  • or where you think there are existing issues.

The questions might be large and very open, like “who are our users?” or more targeted things like “who uses feature x most?” or “what colour should this button be?” Those are all valid things to explore, but require totally different research methods, so it’s good to be explicit.

Once you’ve identified open questions, you and the team can prioritise which things would be riskiest to get wrong, and therefore, what you should investigate first. This might be impacted by what project phase you’re in or what is currently going on in the team. For instance, if you’re in the conceptual phase of a new app and don’t have a clear understanding of your potential user’s daily workflows yet, you’d want to prioritise that before assessing any particular solutions.

From your general list of open questions, specify individual objectives to investigate. For instance, rather than saying that you want to assess the usability of an entire onboarding workflow, you might break down the open questions into individual items, like, “Can visitors find the pricing page?” and “Do potential customers understand the pricing tiers?”

You can usually combine multiple goals into a single round of research, but only if the methods align. For instance, you could explore many different hypotheses about a proposed solution in a single usability test session. Know that you’ll need to do several rounds of different types of research to get everything answered and that is totally OK.

Looking at data types

After determining your research goal, it’s time to start looking at the kind of information you need to answer your questions.

There are two main types of data: quantitative and qualitative.

Quantitative data

Quantitative data consists of specific counts collected, like how many times a link was clicked or what percentage of people completed a step. Quantitative data is unambiguous in that you can’t argue with what was measured. However, you need to understand the context to interpret the results.

Quantitative data helps us understand questions like: how much, how many and how often?

For instance, you could measure how frequently an item is purchased. The number of sales is unchangeable and unambiguous, but whether 100 sales is good or bad depends on a lot of things. Quantitative research helps us understand what’s happening and questions like: how much, how many, how often. It tends to need a large sample size so that you can feel confident about your results.
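To make “feel confident” concrete, you can attach a margin of error to a measured proportion such as a task completion rate, and watch it shrink as the sample grows. Here is a rough sketch using the normal approximation (a simplification; exact methods behave better for very small samples or extreme rates):

```python
import math

def completion_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a completion rate."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - margin), min(1.0, p + margin)

for n in (10, 100, 1000):
    low, high = completion_ci(int(0.7 * n), n)
    print(f"n={n:>4}: 70% completion, 95% CI {low:.0%}-{high:.0%}")
# The interval narrows from ~42-98% at n=10 to ~67-73% at n=1000.
```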

Common UX research methods that can provide quantitative data are surveys, A/B or multivariate tests, click tests, eye tracking studies, and card sorts.

Qualitative data

Qualitative data is basically every other sort of information that you can collect but not necessarily measure. These pieces of information tend to provide descriptions and contexts, and are often used to describe why things are happening.

Qualitative data needs to be interpreted by the researcher and the team and doesn’t have a precise, indisputable outcome. For instance, you might hear people talk about valuing certain traits and note that as a key takeaway, but you can’t numerically measure or compare different participants’ values. You don’t need to include nearly as many sessions or participants in a qualitative study.

Common UX research methods that can provide qualitative data are usability tests, interviews, diary studies, focus groups, and participatory design sessions.

Some methods can produce multiple types of data. For instance, in a usability study, you might measure things like how long it took someone to complete a task, which is quantitative data, but also make observations about what frustrated them, which is qualitative data. In general, quantitative data will help you understand what is going on, and qualitative data will give you more context about why things are happening and how to move forward or serve better.

Behavioural vs attitudinal data

There is also a distinction between the types of research where you observe people directly to see what they do, and the type where you ask for people’s opinions.

Any direct-observation method is known as behavioural research. Ethnographic studies, usability tests, A/B tests, and eye tracking are all examples of methods that measure actions. Behavioural research is often thought of as the holy grail in UX research, because we know that people are exceptionally bad at predicting and accurately representing their own behaviour. Direct observation can give you the most authentic sense of what people really do and where they get stuck.

By contrast, attitudinal research like surveys, interviews, and focus groups asks for self-reported information from participants. These methods can be helpful to understand stated beliefs, expectations, and perceptions. For instance, you might interview users and find that they all wish they could integrate your tool with another tool they use, which isn’t necessarily an insight you’d glean from observing them perform tasks in your tool.

It’s also common to both observe behaviour and ask for self-reported feedback within a single session, meaning that you can get both sorts of data, which is likely to be useful regardless of your open question.

Other considerations

Even after you’ve chosen a specific research method, there are a few more things you may need to consider when planning your research methods.

Where to conduct

It’s often ideal to be able to perform research in the context of how a person normally would use your product, so you can see how your product fits into their life and observe things that might affect their usage, like interruptions or specific conditions.

For instance, if you’re working on a traffic prediction application, it might be really important to have people test the app while on their commute at rush hour rather than sitting in a lab in the middle of the day. I recently did some work for employees of a cruise line, and there would have been no way to know how the app really behaved until we were out at sea with satellite internet and rolling waves!

Context for research is important. If you can, get as close as possible to a real scenario of when someone would use your product.

You might have the opportunity to bring someone to a lab setting, meet them in a neutral location, or even intercept them in a public setting, like a coffee shop.

You may also decide to conduct sessions remotely, meaning that you and the participant are not in the same location. This can be especially useful if you need to reach a broad set of users, don’t have a travel budget, or have an especially quick turnaround time.

There is no absolute right or wrong answer about where the sessions should occur, but it’s important to think through how the location might affect the quality of your research and adjust as much as you can.

Moderation

Regardless of where the session takes place, many methods are traditionally moderated, meaning that a researcher is present during the session to lead the conversation, set tasks, and dig deeper into interesting conversation points. You tend to get the richest, deepest data with moderated studies, but they can be time-consuming and require a good deal of practice to do effectively.

You can also collect data when you aren’t present, which is known as unmoderated research. There are traditional unmoderated methods like surveys, and variations of traditional methods, like usability tests, where you set tasks for users to perform on their own and ask them to record their screen and voice.

Unmoderated research takes a bit more careful planning because you need to be especially clear and conscious of asking neutral questions, but you can often conduct it faster, cheaper, and with a broader audience than traditionally moderated methods. Whenever you do unmoderated research, I strongly suggest doing a pilot round and getting feedback from teammates to ensure that instructions are clear.

Research methods

Once you’ve thought through what stage of the product you’re in, what your key research goals are, what kind of data you need to collect to answer your questions, and other considerations, you can pinpoint a method that will serve your needs. I’ll go through a list of common research methods and their most common usages.

Usability tests: consist of asking a participant to conduct common tasks within a system or prototype and share their thoughts as they do so. A researcher often observes and asks follow-up questions.

Common usages: Evaluating how well a solution works and identifying areas to improve.

UX interview: a conversation between a researcher and a participant, where the researcher is usually looking to dig deep into a particular topic. The participant can be a potential end user, a business stakeholder or a teammate.

Common usages: Learning basics of people’s needs, wants, areas of concern, pain points, motivations, and initial reactions.

Focus groups: similar to interviews, but occur with multiple participants and one researcher. Moderators need to be aware of potential group dynamics dominating the conversation, and these sessions tend to include more divergent and convergent activities to draw out each individual’s viewpoints.

Common usages: Similar to interviews in learning basics of people’s needs, wants, areas of concern, pain points, motivations, and initial reactions. May also be used to understand social dynamics of a group.

Surveys: lists of questions that can be used to gather any type of attitudinal data.

Common usages: Attempting to define or verify the scale of an attitude or outlook among a larger group.

Diary study: a longitudinal method that asks participants to document their activities, interactions or attitudes over a set period of time. For instance, you might ask someone to answer three questions about the apps they use while they commute every day.

Common usages: Understanding the details of how people use something in the context of their real life.

Card sorts: a way to help you see how people group and categorise information. You can either provide existing categories and have users sort the elements into those groupings, or participants can create their own.

Common usages: Help inform information architecture and navigation structures.
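A common first step when analysing open card sort results is a co-occurrence count: how often each pair of cards ended up in the same group across participants. A small sketch (the card labels and sort data are invented for illustration):

```python
from collections import Counter
from itertools import combinations

# Each participant's sort is a list of groups of card labels (hypothetical).
sorts = [
    [{"pricing", "billing"}, {"profile", "settings"}],
    [{"pricing", "billing", "settings"}, {"profile"}],
    [{"pricing", "billing"}, {"profile", "settings"}],
]

pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

for pair, count in pair_counts.most_common():
    print(f"{pair}: grouped together by {count} of {len(sorts)} participants")
# ('billing', 'pricing') appears in all three sorts -> a strong candidate
# for a single navigation category.
```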

Tree tests: the opposite of card sorts, wherein you provide participants with a proposed structure and ask them to find individual elements within the structure.

Common usages: Help assess a proposed navigation and information architecture structure.

A/B testing: Providing different solutions to audiences and measuring their actions to see which better hits your goals.

Common usages: Assess which of two solutions performs better.
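Because A/B tests compare two measured proportions, the analysis usually comes down to asking whether the observed difference could plausibly be chance. A minimal sketch of a two-proportion z-test (the traffic and conversion numbers are made up):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"z = {z:.2f}")  # |z| above ~1.96 is significant at the usual 5% level
```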

Christian Rohrer and Susan Farrell also have great cheat sheets of best times to employ different UX research methods.

Wrapping up

To get the most out of UX research, you need to consider your project stage, objectives, the type of data that will answer your questions, and where you want to conduct your research.

As with most things in UX, there is no one right answer for every situation, but after reading this article you’re well on your way to successfully conducting UX research.

Want to dive deeper into UX research methods? Try Amanda’s latest course, Recruiting and Screening UX Research Participants on Skillshare with 2 months’ free access.  

Don’t Just Satisfy Your Users, Love Them

A while ago, I was driving into work listening to the Design Story Podcast, when I heard Mauro Porcini, Chief Design Officer at PepsiCo, talking about not just satisfying our users, but loving them.

This really resonated with me because I’ve been thinking of a way to explain the importance of going beyond just having empathy for users—especially because designers often talk about empathy but then proclaim that they are here to solve your (the user’s) problems.

Having just started a new role, I’m working on creating design principles with my team as a way to align and communicate our fundamental team beliefs. The idea of ‘loving’ users was one of the principles we instantly agreed upon.

When you think about the people you love, you want the very best for them. You want to make things delightful and keep them magical. There is great joy in spending time with those you love, the relationship involves an element of surprise, and sharing experiences to build understanding is key. Love has a far greater emotional connection than empathy; as designers, we can leverage this way of thinking to provide more immersive, engaging experiences for our users.

As we spend time with users, observing them with intent can help us identify their pain points, goals and desired outcomes. Taking time to know them and build relationships uncovers their unarticulated needs. Understanding the reasoning why, beyond just knowing the what, provides an opportunity to truly delight users—more than fulfilling a single need—and involving them throughout the process cultivates a strong, authentic relationship.

Designers still need to be grounded in the business and avoid any impression that they spend more time advocating for the user than learning and understanding the goals of the business. Loving the user also means being transparent about business constraints—it means making users aware of business realities that may prevent some of their needs from being solved, or even prevent them from appearing on the roadmap altogether. It’s up to us to explain how solving specific user needs and providing an emotional experience will translate to exceeding business goals.

In the podcast, Mauro said: “As designers, if we make the people we design for feel the love, then we will receive the love back, and our business will benefit from this big-time.” As we look ahead to 2018, I challenge you to find new ways to keep the magic alive for your users, so they feel the love.

The post Don’t Just Satisfy Your Users, Love Them appeared first on UX Mastery.

]]>
https://uxmastery.com/dont-just-satisfy-users-love/feed/ 0 63449
Getting Started with Popular Guerrilla UX Research Methods
In my last article, I talked about how you can “guerrilla-ise” traditional UX research methods to fit into a short timeline, and when it makes the most sense to use them.

This time, I’ll walk you through some of the most popular guerrilla UX research methods: live intercepts, remote and unmoderated studies, and using low fidelity prototypes.

I’ll cover pros, cons and tips to make sure you make the most of your guerrilla research sessions.

Conducting research in public

Often the go-to guerrilla technique is to skip the formal participant recruitment process and ask members of the public to take part in your research sessions. Live intercepts are often used as shortened versions of usability tests or interviews.

Getting started

Setting up is easy—all you need is a public space where you can start asking people for a few minutes to give you feedback. A cafe or shopping centre usually works well. 

This is a great way to get lots of feedback quickly, but approaching people takes a little courage and getting used to. 

I find it helps to put up a sign that publicises the incentive you’re offering, and if possible, identifying information like a company logo. This small bit of credibility makes people feel more comfortable.

Make sure you have a script prepared for approaching people. You don’t need to stick to it every time, but make sure you mention where you work or who your client is, what your goal is, what their time commitment will be, and what compensation you’re offering.

Try something like:

Hi, I’m [firstname] and I’m working for [x company] today. We’re trying to get some feedback on [our new feature]. If you have about [x minutes] to chat, I can offer you a [gift card/incentive].

Be sure to be friendly, but not pushy. Give people the chance to opt out or come back later. Pro tip: I always take a piece of paper with time slots printed so that people can sign up for a later time.  

The location you choose has a major impact on how many people you talk to and the quality of your results. Here are some tips for picking a good spot:

  • Pick a public place where there will be a high volume of people and make sure you get permission to be there. Aim to be visible but not in the way. A table next to the entrance works well.
  • Try to pick a place that you think your target audience will be. For instance, if you’re interested in talking to lawyers, pick a coffee shop near a big law office.
  • Look for stable wi-fi and plentiful wall plugs.
  • Regardless of where you choose, stake out the location ahead of the research session so you can plan accordingly.

A few limitations

There’s no doubt that intercepting people in public is a great way to get a high volume of participants quickly. Talking to the general population, however, is best reserved for situations when you have a product or service that doesn’t require specific knowledge, contexts, or outlooks.

If you’re doing a usability test, you could argue that whatever you build should be easy enough for anyone to figure out, so you can still get feedback. Just be aware that you may miss out on valuable insights that are specific to your target audience.

Let’s say you’re working on a piece of tax software. A risk is that you end up talking to someone who has a spouse that handles all the finances, or miss finding a labelling error that only tax accountants would know to report.

To avoid this, I always recommend asking a few identifying questions at the beginning of each session so you can analyse results appropriately. You don’t always need to screen people out, but you can choose how to prioritise their feedback in the analysis stage.

Context also matters. If you usability test a rideshare app on a laptop in a coffee shop, but most people will use the app on their phones on a crowded street, you may get misleading feedback.

Watch for bias when user-testing in a cafe. Photo via Unsplash

You should also be aware that you may run into bias by intercepting all your participants from one location. Think about it: the people that are visiting an upscale coffee shop in a business centre on a weekday are likely to be pretty different than the people who are stopping at a gas station for coffee in the middle of the night. Again, try to choose your intercept location based on your target audience and consider going to a few locations to get variety.

Keep in mind that only a certain type of person is going to respond positively and take the time to give you feedback. Most people will be caught off guard, and may be suspicious or unsure what to expect. You won’t have much time to give participants context or build rapport, so be especially conscious of making them feel comfortable.

Some final tips:

  • Set expectations clearly. Tell participants right away how long you’ll talk to them and how you’ll compensate them for their time. Be clear about what questions you’ll ask or tasks you’ll present and what they need to do.
  • Pay extra attention to participant comfort. Give them the option to leave at any time and put extra emphasis on the fact that you’re there to gather feedback, not judge them or their abilities. Try to record the sessions so you don’t have to take notes the whole time, and can make eye contact and read body language.
  • Remember standard rules of research: don’t lead participants, get comfortable with silence, and ask questions that participants can easily answer. Be extra careful asking about sensitive topics such as health or money. In fact, I don’t recommend intercepting people if you need to talk about very sensitive topics.

Remote and unmoderated studies

Taking the researcher out of the session is another proven way to reduce the time and cost of research. This is achieved through running remote and unmoderated research sessions.

Getting started

Traditional research assumes that a researcher is directly conducting sessions with participants, or moderating the sessions. Unmoderated research just means that the participants respond without the researcher present. Common methods include diary studies, surveys or trying out predetermined tasks in a prototype.

The core benefit is that people can participate simultaneously so you can collect many responses in a short amount of time. It’s often easier to recruit too, because there are no geographic limitations and participants don’t have to be available at a specific time.

You plan unmoderated research much like you do moderated research: set your research goal, select an appropriate method to answer your open questions, determine participants, and craft your research plan. The difference in unmoderated sessions is that you need to be especially careful about setting expectations and providing clear directions, because you won’t be there during the session. Trial runs are especially important in unmoderated sessions to catch unclear wording and confusing tasks.

You can also conduct remote research, which means that you’re not physically in the same place as your participant. You can use video conferencing tools to see each other’s faces and share screens. Remote sessions are planned in a similar vein to in-person sessions, but you can often reach a broader set of people when there are no geographic limits.

A few limitations

Any time you conduct sessions remotely or choose unmoderated methods, you run the risk of missing out on observing context or reading body language. With unmoderated sessions, you can’t dig deeper when someone gives an interesting piece of feedback. That’s still better than not collecting data, but you should take it into consideration when you’re analysing your data and drawing conclusions.

Low fidelity prototypes

If you want to invest less effort upfront, and iterate quickly, low fidelity prototypes are a good option.

In this scenario, you forgo fully functional prototypes or live sites/applications and instead use digitally linked wireframes or static images.

You can even use paper prototypes, where you sketch a screen on paper and simulate the interaction by switching out which piece of paper is shown.

Getting started

Low fidelity prototypes, especially paper, are less time consuming to make than digital prototypes, which makes them inexpensive to produce and easy to iterate. This sort of rapid cycling is especially useful when you’re in the very early conceptual stages and trying to sort out gut reactions.

You run a usability test with a low fidelity prototype just like you would run any other usability test. You come up with tasks and scenarios that cover your key questions, recruit participants, and observe as people perform those tasks.

A few limitations

For this guerrilla technique, you have to be especially careful to ask participants to think aloud and not lead or bias them, because there can be a huge gap in their expectations and yours. For paper prototypes in particular, a moderator must be present to simulate the interactions. I recommend in-person sessions for any sort of test with low fidelity prototypes.

Keep in mind that you can get false feedback from low-fidelity wireframe testing. It can be difficult for participants to imagine what would really happen, and they may get stuck on particular elements or give falsely positive feedback based on what they imagine. Take this into consideration when analysing the results, and be sure that you conduct multiple rounds of iterative research and include high-fidelity prototypes or full beta tests in your long-term research plan.

Wrapping up

When in doubt about the results of any guerrilla research test, I recommend running another study to see if you get the same results.

You can execute the exact same test plan, or even try to answer the same question with a complementary method. If you arrive at similar conclusions, you can feel more confident, and if not, you’ll know that you need to keep digging. When you’re researching guerrilla style, you can always find more time to head back to the jungle for more sessions.

Take a look at my article linked below for tips on reducing scope, and the best times to use guerrilla methods. Happy researching!

Further reading

Going Guerrilla: How to Fit UX Research into Any Timeframe – https://uxmastery.com/guerrilla-ux-research/

Going Guerrilla: How to Fit UX Research into Any Timeframe
As more and more companies realise the value of UX research, “guerrilla” methods have become a popular way to squeeze research into limited budgets and short timelines. Those of us working in agile sprints often have even less dedicated time for research.

When I say guerrilla research, I don’t mean go bananas or conduct jungle warfare research. Guerrilla research is really just a way to say that you’ve taken a regular UX research method and altered it to reduce time and cost.

To do so, you often end up reducing scope and/or rigour. The key to successful guerrilla research is to strike the right balance to hit time and budget goals, but still be rigorous enough to gather valuable feedback.

Read on for a framework for reducing any research method and an overview of the best time to use guerrilla tactics.

If you’re looking for practical advice on using guerilla research methods, take a look at my second article: Getting Started with Popular Guerrilla UX Research Methods

Crafting your guerilla plan

You can “guerrilla-ise” any UX research method, and there’s almost never one single correct way to do so. That said, qualitative techniques like usability tests and interviews lend themselves especially well to guerrilla-isation.

The easiest way I’ve found to plan guerrilla research is to start by determining how you’d do the research if you had desired time and budget. Then work backwards to find the elements you can toggle to make it work for the situation. The first place I look to cut is scope of the research question.

Let’s say your team is working on a new healthcare application and wants to assess the usability of the entire onboarding process. That’s an excellent goal, but pretty broad. Perhaps you could focus your study just on the first few steps of the signup process, but not the follow-up tutorial, or vice versa.

Once you’ve narrowed down your key research goals, you can start looking at what sorts of methods will answer your questions. The process for choosing a research method is the same, regardless of whether you’re trying to go guerrilla or not. For a great summary of choosing a method, take a look at Christian Rohrer’s excellent summary on NNG’s blog or this UX planet article.

Besides narrowing the scope of your research goal, think about the details that make up a study. This includes questions such as:

  • What do you need to build or demonstrate?
  • How many sessions or participants do you need?
  • How will you recruit them?
  • What’s the context of the studies?

Then you can take a look at all those elements, identify where your biggest time and money costs are, and prioritise elements to shift.

Reducing scope

Let’s say, for example, that you determine the ideal way to test the onboarding flow of your new app is to conduct 10 one-hour usability sessions of the fully functional prototype. The tests will take place in a lab and you’ll have a participant-recruitment firm find participants that represent your main persona.

There are many ways you could shift to reduce time and costs in this example.

You could:

  • Run test sessions remotely instead of in a lab
  • Reduce the number of sessions overall
  • Run unmoderated studies
  • Build a simpler wireframe or paper prototype
  • Recruit participants on social media
  • Intercept people in a public location
  • Or a combination of these methods

To decide what to alter, look at what will have the biggest impact on time, budget, and validity of your results.

For example, if working with a recruiting firm will be time consuming and expensive, you’ll want to look for alternative ways to recruit. Intercepting people in public is what many of us envision when we think of guerrilla research. You could do that, or you could also find participants on social media or live-intercept them from a site or web app.

You may even decide to combine multiple guerrilla-ising techniques, such as conducting fewer sessions and doing so remotely, or showing a simple prototype to people who you intercept.

Just remember, you don’t want to reduce time and effort so much that you bias your results. For instance, if you’re doing shorter sessions or recruiting informally, you may want to keep the same overall number of sessions so you have a reasonable sample size.

Best uses for guerrilla research

So, when is the best scenario to use guerrilla tactics in your research?

  • You have a general consumer-facing product which requires no previous experience or specialty knowledge OR you can easily recruit your target participants
  • You want to gather general first-impressions and see if people understand your product’s value
  • You want to see if people can perform very specific tasks without prior knowledge
  • You can get some value out of the sessions and the alternative is no research at all

And when should you avoid guerrilla methods?

  • When you’ll be researching sensitive topics such as health, money, sex, or relationships
  • When you need participants to have very specific domain knowledge
  • When the context in which someone will use your product will greatly impact their usage and you can’t talk to people in context
  • When you have the time or budget to do more rigorous research!

Guerrilla research is a great way to fit investigation into any timeframe or budget. One of its real beauties is that you can conduct multiple, iterative rounds of research to ensure you’re building the right things and doing so well.

If you have the luxury of conducting more rigorous research, take advantage, but know that guerrilla research is always a better option than no research at all.

Read the next article on getting started with common guerrilla techniques.

The Space Between Iterations

Andy Vitale, UX Design Principal at 3M, talks us through his iterative approach to research.

The most important decisions made about any product often take place between iterations. You could argue that the timeframe between identifying key research findings and understanding what the next iteration will be is the most crucial to the future success of the product.

There are many activities that take place during this phase and even before it begins – not just by research or design teams, but by stakeholders, developers and customers as well. Clear communication and collaboration are the primary drivers for gaining overall alignment among decision makers as quickly as possible.

At 3M, we’re fortunate to have access to many customers, allowing us to take the iterative approach to research outlined in this article. Depending on the project, timeline and business realities, coordinating customer visits and travel usually takes place over the course of several weeks.

How researchers and designers collaborate

While initial research sessions are observational and focused on contextual inquiry, that doesn’t mean design is on hold. It’s beneficial to have at least one designer and one stakeholder from the business attend the research sessions, preferably on-site, so there’s shared learning. Team members debrief after each visit and input findings into a shared document so that it’s accessible to everyone on the team. This prevents the team from forgetting key points, and avoids confusion between what users actually said and what team members thought they heard.

Start sorting and analysing research as soon as you can.

Researchers need to identify trends while staying mindful of the time between research visits, so it’s important that they start to analyse and organise findings while designers explore potential solutions via sketches, moodboards and other design activities. Throughout this quick sketching process, the team should involve stakeholders and subject matter experts to ensure the accuracy of what is presented.

For us, this sketching process can sometimes happen in a hotel lobby, producing a sketch that we’ll share with customers. Customers appreciate the low fidelity of the sketches because it allows them to be involved in early validation and provides them with the opportunity to offer feedback. The accuracy of the content is important so that users aren’t distracted by missing data, and can focus on the intended functionality of the concept sketches.

This iteration cycle typically continues throughout the research phase, with the fidelity of designs increasing as research is analysed, revealing further insights on the behavioural trends of users.

Keeping teams aligned

Communication between designers and the cross-functional teams of stakeholders and developers is essential throughout the process to ensure the decisions made align with business goals, technical capabilities, and customer needs. Once there is alignment, it’s time to conduct more formal user testing (which we often do remotely), with the customers we visited. This user testing should also be iterative, with the prototype increasing in robustness each week.

This should all be part of an agile or design sprint process but sometimes, depending on the complexity of the problems to be solved and the bandwidth of the team, there may not be a designer embedded within individual scrum teams. If this is the case, and the team is focused on validating larger solutions as opposed to smaller features, it’s best to facilitate a workshop with the development teams and product owners to plan the agile implementation of the new design. As specific pieces of functionality are validated throughout the process the design team works with developers to prioritise and support their efforts.

Since software is iterative, the cycle continues. Once the features are launched and the results are measured, it’s time to assess the business and user needs and begin the process of working towards the next release iteration.

How to Turn UX Research Into Results
We’ve all known researchers who “throw their results over the fence” and hope their recommendations will get implemented, with little result. Talk about futility! Luckily, with a little preparation, it’s a straightforward process to turn your research insights into real results.

To move from your research findings to product changes, you should set yourself two main goals.

First, to effectively communicate your findings to help your audience process them and focus on next steps.

Second, to follow through by proactively working with stakeholders to decide which issues will be addressed and by whom, injecting yourself into the design process whenever possible. This follow-through is critical to your success.

Let’s look at an end-to-end process for embracing these two main goals.

Effectively communicating your findings

Finding focus

When you have important study results, it’s exciting to share the results with your team and stakeholders. Most likely, you’ll be presenting a lot of information, which means it could take them a while to process it and figure out how to proceed. If your audience gets lost in details, there’s a high risk they’ll tune out.

The more you can help them focus and stay engaged, the more likely you are to get results. You might even consider having a designer or product owner work with you on the presentation to help ensure your results are presented effectively – especially if your associates were involved in the research process.

Engaging with your colleagues and stakeholders

You should plan to present your results in person – whether it’s a casual or formal setting – rather than simply writing up a report and sending it around. This way, your co-workers are more likely to absorb and address your findings.

You could present formally to your company’s leadership team if the research will inform a key business decision. Or gather around a computer with your agile teammates to share results that inform specific design iterations. Either way, if you’re presenting – especially if you allow for questions and discussion – you’re engaging with your audience. Your points are getting across and design decisions will be informed.

Why presentations matter

Here are a few ways your presentation can help your team focus on what to do with the findings:

  • Prioritise your findings (Critical, High, Medium, Low). This helps your audience focus on what’s most important and sequence what should be done first, second and so on. An issue that causes someone to fail at an important task, for example, would be rated as critical. On the other hand, a cosmetic issue or a spelling mistake would be considered minor. Take both the severity and frequency of each issue into consideration when rating them, and remember to define your rating scale. Usability.gov has a good example. Other options are to use a three-question process diagram, a UX integration matrix (great for agile), or the simple but effective MoSCoW method (see the sketch after this list).
  • Develop empathy by sharing stories. We love to hear stories, and admire those among us who can tell the best ones. In the sterile, fact-filled workplace, stories can inspire, illuminate and help us empathise with those we’re designing for. Share the journeys your participants experienced, the challenges they need to overcome. Use a sprinkling of drama to illustrate the stakes involved; understanding the implications will help moderate the conversations and support UX decisions moving forward.
  • Illustrate consequences and benefits. Your leadership team will be interested if they know they will lose money, customers, or both if they don’t address certain design issues. Be as concrete as you can, using numbers from analytics and online studies when possible to make points. For example, you might be able to use analytics to show users getting to a key page, and then dropping off. This is even more effective if you can show via an online study that one version of a button, for example, is effective all the time, whereas the other one is not understood.
  • Provide design recommendations. Try to strike a balance between too vague and too prescriptive. You want your recommendations to be specific and offer guidance about how an interaction should be designed, without actually designing it. For example, you could say “consider changing the link label to match users’ expectations” or “consider making the next step in the process more obvious from this screen.” These are specific enough to give direction and serve as a jumping off point for designers.
  • Suggest next steps. It can help stakeholders to see this in writing, especially if they’re not used to working with a UX team. For example:
    • Meet to review and prioritise the findings.
    • Schedule the work to be done.
    • Assign the work to designers.
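
To make the severity-and-frequency idea concrete, here’s a minimal sketch in Python. The scale, weightings and cutoffs are illustrative assumptions of mine, not an industry standard, so calibrate them with your own team.

```python
# A minimal sketch: combine severity and frequency into a priority label.
# The 4-point scale and the cutoffs below are illustrative assumptions.

SEVERITY = {"cosmetic": 1, "minor": 2, "major": 3, "blocker": 4}

def priority(severity: str, frequency: float) -> str:
    """severity: a key in SEVERITY; frequency: share of participants affected (0-1)."""
    score = SEVERITY[severity] * (1 + frequency)  # frequent issues get boosted
    if score >= 6:
        return "Critical"
    if score >= 4:
        return "High"
    if score >= 2.5:
        return "Medium"
    return "Low"

findings = [
    ("Sign-up fails at step 3", "blocker", 0.8),    # 4 of 5 participants affected
    ("Label confuses tax accountants", "major", 0.4),
    ("Logo slightly misaligned", "cosmetic", 1.0),
]

for name, severity, frequency in findings:
    print(f"{priority(severity, frequency):8} {name}")
```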

Presentations are an important first step, but your job as a researcher doesn’t end there. Consider your presentation an introduction to the issues that were found, and a jumping-off point for informing design plans.

The proactive follow through

You’ve communicated the issues. Now it’s time to dig in and get results.

Getting your priorities straight

Start by scheduling a discussion with your product manager – and possibly a representative each from the development and design teams – to prioritise the results, and put them on the product roadmap. It can be useful to take your user research findings – especially from a larger study – and group them together into themes, or projects.

Next, rate the projects on a grid with two axes. For example:

  • how much of a problem it is for customers could display vertically; and
  • how much effort it would be to design or redesign it (small, medium and large) could display horizontally.

Placing cards or sticky notes that represent the projects along these axes helps you see which work would yield the most value.

Then compare this mapping to what’s currently on the product roadmap and determine where your latest projects fit into the overall plans. Consider that it often makes more sense to fix what’s broken in the existing product – especially if there are big problems – than to work on building new features. Conducting this and additional planning efforts together will ensure everyone is on the same page.

Working with your design team

Once it’s time for design work, participate in workshops and other design activities to represent the product’s users and ensure their needs are understood. In addition to contributing to the activities at hand, your role is to keep users’ goals and design issues top of mind.

Since the focus of the workshop – or any design activity – early on is solving design problems, it could be useful to post the design problems and/or goals around the room, along with user quotes and stories. A few copies of complete study findings in the room, plus any persona descriptions, are useful references. The workshop to address design problems could be handled several ways – storyboarding solutions, drawing and discussing mockups, brainstorming. But the goal is to agree on problems you’re trying to solve, and come up with possible solutions to solve them.

As the design team comes up with solutions, remember to iteratively test them with users. It’s useful for designers to get regular feedback to determine whether they’re improving their designs, and to get answers to new design questions that arise throughout the process. All of this helps designers understand users and their issues and concerns.

Achieving your end game

One key to getting your results implemented is simply remembering to consider stakeholders’ goals and big picture success throughout the research and design process. The best way to do this is to include them in the research planning – and in the research observations – to make sure you’re addressing their concerns all along. When presenting, explain how the results you are suggesting will help them meet their design and business goals.

Always remember that as the researcher you hold knowledge about your users that others don’t. Representing them from the presentation through the next design iteration is one key to your product’s success.

How do you make sure your hard-won research insights makes it through to design? Leave a comment or share in our forums.

Catch up with more of our latest posts on UX research:

Pivot or Persevere? Find Out Using Lean Experiments
The Lean Startup approach is gaining popularity in organisations of all sizes, which means teams must adapt their processes. More and more, UX professionals are being asked to take on Lean experiments – which are fantastic – but differ slightly from traditional UX research.

To recap, “Lean Startup” is a business approach that calls for rapid experimentation to reduce the risks of building something new. The framework has roots in the Lean Manufacturing methodology and mirrors the scientific method. It calls for very UX-friendly processes, such as collecting iterative feedback and focusing on empirical measurement of performance indicators.

One of the core principles is to iterate through a cycle known as Build-Measure-Learn, which includes building a minimum viable product (MVP) to test, measure what happens, and then decide whether to move forward with the suggested solution (persevere) or find another (pivot).

Simple in theory. But it can be challenging to figure out what MVP to build, how to interpret the data collected and what next steps should be after completing a lean experiment. These guidelines will help you get the most out of your experimentation cycles and understand whether you should pivot or persevere.

Consider the context

The most important part of data analysis starts before you’ve gathered any data. To help you decide what type of research to do, you first need to consider where you are in the progress of your product, what information you already have, and what the biggest, riskiest open questions are.

In the conceptual stages of a totally new business or feature idea, you first need to understand enough about your potential user base and their needs to make informed hypotheses about the problems they have and how you might be able to address them. Any idea for a new thing is an assumption, and doing some generative research will help you shape and prioritise your assumptions.

The Lean Startup approach advocates starting with a process called GOOB – Getting Out Of the Building – which looks a whole lot like a condensed version of traditional ethnography and interviews. The goal is to talk to a small number of people who you think fit your target audience and understand their current needs, experience gaps, pain points, and methods for solving existing problems related to your idea.

Run these interviews just like any other UX interview and use the data to create a list of assumptions about your target users, potential problems to solve, and ways you could address those problems. Start with a period of exploration and learning before you build anything.

Prioritising what to explore

Your list of assumptions can serve as your backlog of work. Rather than creating a list of necessary features to build, treat each item in the list as a separate hypothesis to explore and either prove or disprove. Then, prioritise the hypotheses that are the riskiest, or that would have the biggest impact if your assumption is wrong. Assumptions about what the problem is and who has it should take priority over assumptions about how to solve the problem or what features to build.

Typical assumptions might look something like this:

I believe [___] set of people are facing [___] challenge.

I believe [___] solution could help address [___] problem better than my users’ current workaround.

I believe [___] solution could generate money in [___] way.

For instance, let’s say that you’re trying to create a new application to help busy parents plan meals. You’ve interviewed a dozen busy parents and have some insight that says the two biggest issues they face are deciding what to cook and finding time to buy all the ingredients/groceries.You might have a hunch about which direction to go, but your first test should be centred around figuring out which of these issues is more compelling to your users.

Setting hypotheses

The next step is to craft a precise hypothesis that will make it very easy to tell whether you’ve proved or disproved your assumption.

I like to use the following framework for creating hypotheses:

If we [do, build or provide ___],
Then [these people],
Will [desirable outcome].
We’ll know this is true when we see [actionable metric].

The do, build, provide section refers to the solution. This could be as high-level as deciding which type of app to build, or as specific as the type of interaction to develop for a particular interface.

These people should represent your assumed customer archetypes, derived from your initial interviews and other data.

The desirable outcome should be something that correlates to business success, such as sending a message or ordering an item. Keep in mind that it’s easy to come up with outcomes that look good, but don’t really tell you anything. These are called vanity metrics. For instance, if I want people to make a purchase on an ecommerce site, it’s not really that helpful to know how many people decided to follow us on Facebook. Instead, focus on identifying the pieces of information that help you make a decision and that give you a true indication of business success.

The actionable metric is whatever will tell you that your investment into building this item will be worth it. Actionable metrics can be a little tricky, especially early on, but I like to try to set these metrics as the barometers of the minimum amount of success you need to prove that the investment will be worthwhile. You can look at both perceived cost of investment and perceived value to gauge this.
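
If it helps to keep experiments honest, here’s a minimal sketch in Python that captures the framework as a structured record whose success condition must be stated up front. The class and field names are my own assumptions, not part of the Lean Startup canon.

```python
# A minimal sketch: the hypothesis framework as a structured record.
# Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    solution: str   # what we do / build / provide
    audience: str   # the assumed customer archetype
    outcome: str    # the desirable, business-relevant outcome
    metric: str     # the actionable metric that proves it

    def statement(self) -> str:
        return (f"If we {self.solution}, then {self.audience} will "
                f"{self.outcome}. We'll know this is true when "
                f"{self.metric}.")

h = Hypothesis(
    solution="build an app that automatically generates 5 recipe ideas per week",
    audience="busy parents",
    outcome="be interested in downloading this application",
    metric="they choose it at least 15 percentage points more often than "
           "any other concept in a click test",
)
print(h.statement())
```

Writing the metric down as a required field is the point: if you can’t fill it in, you don’t yet have a testable hypothesis.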

Let’s say you work at an ecommerce company and you’re proposing a new feature that you hope will increase last-minute additions to a cart. You could ask the development team to estimate how much effort it would take to build out the feature, then work backward from that cost to see how much the average order size would have to increase to offset the costs.

If the team estimates something would take about 5 weeks and will cost $25,000, you’ll need the change to make at least that much money in that amount of time. So then let’s say you also know that the company usually has 1,000 sales a week and the average order size is $20. That means that right now, the company makes $20,000 a week. In order to offset the $25,000 estimated development dollars over 5 weeks, the change you make would have to bring in an extra $5,000 per week. This means that your average order size would have to go up $5 to $25. All the additional money earned after the offset is additional profit for the company.

That was all a lot of math, and you don’t always have that much information at your fingertips, especially when you’re very early in the product development process. You might have to make an educated guess about what sort of number would be “good enough.” The point is to pick a metric that will truly help inform whether or not you should invest in the new change.

Sometimes it’s easier to conceptualise this as a fail condition, or point at which it wouldn’t be worth moving forward. In other words, you can frame it as: “if we don’t make at least x% more on each order after, we won’t implement the full version of the feature.” Then you can work backwards to craft a testable hypothesis.

Of course, this framework can be adjusted as needed, but you need to clearly define the exact question you’re exploring and what success looks like. If you can’t come up with a clear hypothesis statement, go back and re-evaluate your assumption and narrow it down so you can run a successful experiment.

Design your experiment

Once you have a clear single question to answer and hypothesis, deciding what sort of experiment to run should be fairly straightforward.

Let’s revisit the meal planning application example. Say that you’ve decided your riskiest assumption is which of the two core problems is more compelling to users.

A hypothesis might look something like this:

If we build an app that automatically generates 5 recipe ideas per week,

Then busy parents,

Will be interested in downloading this application.

We’ll know this is true when we present them with a variety of food-related apps and they choose the recipe generation app at least 15 percentage points more often, for example, than any other choice.

Now you can focus on designing a way to test which apps a user would be most interested in using. There is no one exact way to do this. You could create fake landing pages for each potential solution and see how many people sign up for each fake product, or create ads for the different apps and see which one generates the most actions. You should focus on finding the smallest thing your team can build in order to test your hypothesis – the minimum viable product.

In this case, a good MVP might be a mockup of a page with blurbs for a few different fake applications you haven’t built yet. Then you could use a click-test tool like UsabilityHub to ask participants to choose a single app to help them with meal planning, and then monitor how many clicks each concept gets. This way, you don’t even need to launch live landing pages or ad campaigns, just create the page mock-up.

Frequently used lean experiment types/MVPs include:

  • Landing page tests
  • Smoke tests such as explainer video, shadow feature, or coming soon pages
  • Concierge tests
  • Wizard of Oz tests
  • Ad tests
  • Click tests

These are just a few suggestions, and there are many more experiments you can run depending on your context and what you’re trying to learn. Use these suggestions as starting places, not step-by-step directions, for figuring out the right experiment for your team.

Analysing your results

If you’ve set a clear and concise hypothesis and run a well-designed experiment, it should be clear to see if you’ve proved or disproved your hypothesis.

Looking at the meal planning app example again, let’s say you ran the click test with 1,000 participants. You included 4 app concepts in the test, and hypothesised that Concept A would be the most compelling.

If Concept A receives 702 clicks, Concept B receives 98 clicks, Concept C receives 119 clicks, and Concept D receives 81 clicks, it’s very obvious that you proved your hypothesis. You can persevere, or move forward with Concept A, and then move on to testing your next set of assumptions about that concept. Maybe now is the time to tackle an assumption about the app’s core feature set.

On the other hand, if Concept A receives 45 clicks, Concept B receives 262 clicks, Concept C receives 112 clicks, and Concept D receives 581 clicks, you obviously disproved your hypothesis. Concept A is clearly not the most compelling concept and you should pivot away from that idea.

In this case, you also have a clear indication of the direction of your pivot – choice D is a clear winner. You could set your new assumption that concept D is a compelling direction and run another experiment to verify this assumption, perhaps by running a similar test to compare it against just one other concept or by setting up a landing page test. Or you could do more customer interviews to find out why people found that concept so compelling.

But what if Concept A receives 351 clicks, Concept B receives 298 clicks, Concept C receives 227 clicks, and Concept D receives 124 clicks? There’s no clear winner or direction. Did you set up a bad test? Are none of your concepts compelling? Or all of them? What next?

The short answer is that you don’t know. But the great thing about lean experiments is that the system is designed such that your next step should be running more experiments. In failing to find a winning direction, you succeeded in learning that your original assumption was incorrect, and you didn’t need to invest much to figure that out. You now know that you need to pivot, you just may not be sure in which direction.
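
Here’s a minimal sketch in Python of that persevere/pivot/inconclusive decision rule. The 15-percentage-point margin comes from the hypothesis set earlier; the function itself is an illustration of mine, not a standard formula.

```python
# A minimal sketch: turn raw click counts into a persevere/pivot decision.
# The 15-point margin comes from the hypothesis stated earlier.

def verdict(clicks: dict, favoured: str, margin_pts: float = 15.0) -> str:
    total = sum(clicks.values())
    shares = {c: 100 * n / total for c, n in clicks.items()}  # percentages
    rivals = {c: s for c, s in shares.items() if c != favoured}
    best_rival = max(rivals, key=rivals.get)
    if shares[favoured] - rivals[best_rival] >= margin_pts:
        return f"persevere with {favoured}"
    if rivals[best_rival] - shares[favoured] >= margin_pts:
        return f"pivot towards {best_rival}"
    return "inconclusive: run another experiment"

print(verdict({"A": 702, "B": 98, "C": 119, "D": 81}, "A"))    # persevere with A
print(verdict({"A": 45, "B": 262, "C": 112, "D": 581}, "A"))   # pivot towards D
print(verdict({"A": 351, "B": 298, "C": 227, "D": 124}, "A"))  # inconclusive
```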

Which way to pivot?

If you know that you need to pivot but are unsure what direction to take, my first suggestion is to run another related experiment to verify your initial findings.

In the food example, you could try a similar test with just 3 options and see if the outcomes change, or try running landing pages for all 4 concepts. While you don’t want to be falsely optimistic, you also want to be sure that there wasn’t something about the way you ran your test or a fluke in the data that is giving you a false impression. Since lean experiments are intentionally quick and not robust, they can sometimes lack the rigour to give you true confidence. If you have a true finding, you should be able to replicate results with another test.

If you run another test and get similarly inconclusive data or truly have no idea what direction to go next after running an experiment, try stepping away from lean experimentation and go back to exploratory research methods.

A successful pivot can be any kind of change in business and product model, such as a complete reposition to a new product or service, a single feature becoming the focus of a product, a new target user group, a change in platform or channel, or a new kind of revenue or marketing model. A structured experiment is not going to teach you what direction to go, so you need to do some broader, qualitative data gathering.

I recommend running interviews with two subsets of people. First, talk to people who love your product/service and are most often taking the option that you want, such as purchasing frequently, and find out what they love about you and why. Then, if possible, talk to the people who are not taking desired actions, to try and find out why, or what they’re looking for instead. These types of interviews will be just like any other discovery interview, and you’ll be looking for the participants to guide you to new insights that can lead to your next set of assumptions to test.

Conclusion

Lean experiments are a great way to get any organisation learning from their customers and poised to make valuable changes. Getting used to the ins and outs of setting clear hypotheses and learning whether to pivot or persevere can take some time, but luckily those of us in UX already have the skill sets to do so successfully. Go forth and experiment!

Making an Impact with UX Research Insights
You’ve completed your in-depth interviews, your contextual inquiry or your usability testing. What comes next? As UX practitioners know, when it comes to research, field work is only a fraction of the story.

How do you learn from mountains of data, and then make sure your insights create a tangible impact in shaping your product’s design?

We couldn’t think of anyone more qualified to ask than the prolific Steve Portigal, user researcher extraordinaire. From analysis and synthesis through to framing your findings, Steve walks us through a few post-research considerations to keep top of mind for your next research project.

What tips do you have for converting insights from research into action?

It’s a lot of work. According to Cooper’s Jenea Hayes, it’s roughly two hours of analysis and synthesis for every hour of research. I get grumpy when people talk about coming back from a research setting with insights. Insights are the product of analysis and synthesis of multiple sessions. It may just me being semantic-pedantic, but there’s something off-putting about the perfunctory way people describe: “Oh I come back from the session and I write up my insights and there you go.”

I see two different stages in making sense of research. Step one is to collate all the debrief notes, the hallway conversations, the shower thoughts you’ve had following the experience of doing the research. It’s a necessary first step and it’s heavily skewed by what sticks in your mind. It produces some initial thoughts that you can share to take the temperature of the group.

The next step is to go back to the data (videos, transcripts, artefacts, whatever you have) and look at it fresh. You’ll always find that something different happened from what you remember, and that’s where the deeper learning comes from. It’s a big investment of time, and maybe not every research question merits it. But if you don’t go back to the data (and a lot of teams won’t, citing time pressure), you are leaving a lot of good stuff on the cutting room floor.

I’m also a big fan of keeping the activity of sense making (what is going on with people?) separate from the activity of actions (what should we about it?). You want to avoid jumping to a solution for as long as possible in the process, so that your solutions reflect as deep an understanding of the problem as possible. Set up a “parking lot” where you can dump solutions as they’ll come up anyway. Depending on your research question, work your way to a key set of conclusions about people’s behaviour. Based on those conclusions, explore a range of possible solutions.

In your analysis, how do you decide what’s important?

Take time at the beginning of the research to frame the problem. Where did this research originate? What hypotheses – often implicit ones – do stakeholders have? What business decisions will be made as a result of this research?

What research reveals doesn’t always fit into the structure that is handed to you ahead of time, so knowing what those expectations are can help you with both analysis and communication. Some things are important to understand because they’re part of the brief. But other things are going to emerge as important because as you spend time with your conclusions you realise “Oh this is the thing!”

I had a colleague who would ask, as we were getting near to the end of the process, but still wallowing in a big mess “Okay, if we had to present this right now, what would you say?” This is a great technique for helping you stop looking intently at the trees and step back to see the forest.

How do you make sure research data takes priority over stakeholders’ opinions?

So many aspects of the research process are better thought of as, well, a process. Talking to stakeholders about their questions – and their assumptions about the answers – is a great way to start. In that kickoff stage, explain the process. Share stories and anecdotes from the field. Invite them to participate in analysis and synthesis. Their time is limited, but there are many lightweight ways to give them a taste of the research process as it proceeds.

You don’t want the results to be a grand reveal, but rather an evolution, so that they can evolve their thinking along with it. If you’re challenging closely held beliefs (or “opinions”), make a case: “I know we expected to learn X, but in fact, we found something different.” Separate what you learned about people from what should be done about it so that you can respond to pushback appropriately.

What are some common mistakes you see that stop research from staying front and centre during the design process?

To summarise a few of the points I’ve made above, some of the common mistakes I see are:

  • Not including stakeholders in early problem-framing conversations
  • Not including a broader team in fieldwork and analysis
  • Delivering research framed as “how to change the product” rather than “what we learned about people” and “how to act on what we learned to impact the product”
  • Researchers not having visibility into subsequent decisions
  • Failing to deliver a range of types of research conclusions

How do you make sure your recommendations make it through to the next design iteration?

It’s challenging to ensure that research travels through any design or development process intact. Ideally, you’re involved as the work goes forward, sitting in meetings and design reviews to keep connecting it back to the output of the research, but think about the different aspects of the research that might take hold to help inform future decisions.

Is it stories about real people and their wants and needs? Is it a model or framework that helps structure a number of different types of users or behaviours? Is it a set of design principles? Or is it the specific recommendations? Often it’s a combination of several of these.  

About Steve Portigal


Steve is the Principal at Portigal Consulting LLC – a consultancy that helps companies discover and act on new insights about their customers and themselves. He is the author of Interviewing Users: How to Uncover Compelling Insights and recently Doorbells, Danger, and Dead Batteries: User Research War Stories. In addition to being an in-demand presenter and workshop leader, he writes on the topics of culture, design, innovation and interviewing users, and hosts the Dollars to Donuts podcast. He’s an enthusiastic traveller and an avid photographer with a Museum of Foreign Groceries in his home.
