Matt Fenwick – UX Mastery
https://uxmastery.com
The online learning community for human-centred designers

What do we do with all this mess? Enter content modelling
https://uxmastery.com/content-modeling/
Thu, 28 Sep 2017


Matt Fenwick explains how content modelling brings order to the chaos.

The post What do we do with all this mess? Enter content modelling appeared first on UX Mastery.

We know that content is important, but we’re just not sure what to do with it.

As UX professionals, the tools we know – wireframes, customer journeys and site maps – are fine for a high-level vision.

But what happens when you have to get stuck into the guts of the content? When you’re wrangling a website with hundreds of pages, each one with text that runs down your monitor, across the table and onto the floor?

When you’re wrangling reams of page-level content, it can be tempting to leave it in the too-hard basket. To design the website and leave it for the client to write the actual content.

What we’re doing here, though, is only designing part of the experience. We leave a vacuum that’s filled by ad hoc, inconsistent content, lorem ipsum, or worse.

Enter content modelling. It’s a discipline that began in the heady days of database design, but has been picked up and championed by content strategists around the world, including Rachel Lovinger.

In 2012, she published an article in A List Apart which set the stage. Here’s Rachel Lovinger’s definition:

A content model documents all the different types of content you will have for a given project. It contains detailed definitions of each content type’s elements and their relationships to each other.

We can break this up into three parts:

  1. Understanding what types of content you have
  2. Analysing the parts each type is made up of
  3. Defining how those parts relate to each other

What makes content modelling so different to other ways of working? That difference comes into focus when we compare content modelling to the conventional way of wrangling content.

The badlands of content

Conventional approaches to content are consistent – but only to a point. Let’s take a look at how that plays out in an actual website.

Imagine you’re a dementia researcher, based at the University of Sydney. You’re looking for money to fund the next round of your research, so you come to this grant page:


You wonder if there are other grants out there that you could apply for. That brings you to this grant page:


The same type of page, but two totally different content structures. You see that key dates are highlighted on one page, but not on the other.

Why is this a problem? Well, it’s the same problem that arises wherever there is an inconsistent user experience. The user can’t learn a pattern that they can apply to interpret new instances of that experience: look at one grants page and, from that, work out how to read other grants pages.

The downside of this inconsistency was borne out by user testing. Medical researchers told us they had to find a workaround: actually printing off grant pages, lining them up side-by-side and highlighting parts. That was the only way they could make sense of the content.

Content as blob

What’s interesting is that on one level, the experience actually is consistent. You can imagine these two grant pages being written using the same wireframe. They share a colour palette. It’s just that the consistency doesn’t go far enough – to the level where the user is actually interacting with the content.

The content itself is like plasticine: it has been shaped into a given form, but there’s no internal structure. It’s just a massive undifferentiated blob. You can’t break it down into components. You can’t do anything with the parts.

Content as lego

Another way we can think of a content model is like a lego set. The product has a shape and format – let’s say ‘rainbow princess castle’.

But it also has components that we can work with:

  1. Give them names (‘window’ or ‘turret’)
  2. Describe their attributes (‘round’ or ‘turquoise’)
  3. Combine them in different ways to create new formats (let’s say ‘ninja dragon cave’)

Now here’s part of a content model that we’ve created for the NHMRC.

  1. We’ve described a particular format, in this case a grants page.
  2. We’ve also specified some components that will exist within that page. For example, we have a block that talks about key dates.
  3. We’ve described the attributes of each component. In this case, that means listing the essential dates to mention. Other attributes could include whether that content is essential or optional.
  4. We’ve defined relationships between components: what brings us to this particular page, and some outbound paths: where they’re going to go next.
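A model like the one above can be sketched as structured data. Here’s a rough Python sketch: the component names, guidance text and relationships are illustrative placeholders, not the actual NHMRC model.

```python
# A toy content model for a "grant" page type. All names and guidance
# strings below are invented for illustration.
grant_page_model = {
    "type": "grant",
    "components": [
        {"name": "title", "required": True,
         "guidance": "Plain-language grant name."},
        {"name": "key_dates", "required": True,
         "guidance": "List the opening and closing dates for applications."},
        {"name": "eligibility", "required": False,
         "guidance": "Who can apply, in plain language."},
    ],
    # Relationships: where users arrive from, and where they go next.
    "inbound": ["grants_index", "site_search"],
    "outbound": ["application_form", "contact_us"],
}

def missing_required(model, draft):
    """Return the names of required components absent from a draft page."""
    return [c["name"] for c in model["components"]
            if c["required"] and c["name"] not in draft]

draft = {"title": "Dementia Research Grant"}
print(missing_required(grant_page_model, draft))  # → ['key_dates']
```

Even a sketch this small gives you something a wireframe can’t: a checkable list of what every grant page must contain.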

All this is great for making sense of a mess, but where it really takes off is what content modelling lets us do.

Content modelling at your service

Here are just a few ways you can put content modelling to work.

1. Analysing existing content

When you dive into the guts of page content, you get a much better sense of the site you’re dealing with. If you run a surface-level content audit, you might categorise each page by topic. One topic.

When I get into content modelling on a government site, I’ll often find a mega page with six or seven moving parts that really need to be their own pages. That’s good to know for defining a better information architecture.

2. Communicating with designers

Content models inform design but don’t dictate it. The model provides an agreed list of the elements that are needed – and shows hierarchy and proximity. But how that actually looks is still up to the designers.

Because content models are presentation neutral, they’re also device neutral. Content models help you plan content that needs to flow between different devices (print – mobile – web) – as Sarah Wachter-Boettcher describes in her spectacular book, Content Everywhere.

3. Guiding authors

Content models put parameters around the content right when authors need it most – actually writing the content. You can specify exactly what elements a page needs. With the help of a tool like GatherContent, you can provide in-line guidance to authors on how to write a good headline, or intro paragraph.

Most big organisations have a writing style guide. And most staff in those organisations ignore it. It’s not because they don’t care about content, it’s because the advice isn’t integrated into their workflow.

Baking that advice into the content model, right where authors work, takes the burden off the authors – and everybody who has to QA the work.

4. Planning your CMS

A detailed content model can feed right into the specifications for a new content management system. Angus Gordon’s article is a good entry point here.

When you’re choosing a CMS, content modelling will help answer questions including:

  • Which components are included
  • Whether each component is required
  • Whether alternative values are allowed (such as different content for geographical locations).
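As a sketch of how those questions might be captured in a specification, here’s a hypothetical component with a location-varied value. The field name, locations and values are all invented for illustration.

```python
# Hypothetical spec for one component in the model. Names are invented.
component_spec = {
    "name": "opening_hours",
    "required": True,
    "allow_variants": True,       # alternative values are permitted...
    "variant_key": "location",    # ...varied by geographical location
}

# Authored values for that component, one per location.
values = {
    "sydney": "9am-5pm AEST",
    "perth": "9am-5pm AWST",
}

def resolve(spec, values, location, default="sydney"):
    """Return the value for a location, falling back to a default."""
    if spec["allow_variants"]:
        return values.get(location, values[default])
    return values

print(resolve(component_spec, values, "perth"))     # 9am-5pm AWST
print(resolve(component_spec, values, "brisbane"))  # falls back to sydney
```

If your model answers these questions up front, evaluating candidate CMSs becomes a matter of checking each one against the spec.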

Wrap-up

As UX practitioners, we know that people don’t come to websites to appreciate pretty colours. But nor do they come to appreciate a slick UI, or an intuitive navigation. They come to get something done. That almost always means using information, and that information is content.

Wherever content starts to gain scale and complexity, content models are useful as a practical way to rein it in. To create content that integrates into your overall UX vision, and has the consistency you need for a complete experience.

Next time, we’ll run through how to actually create a content model.


Doing Strategy to People
https://uxmastery.com/doing-strategy-to-people/
Tue, 28 Mar 2017

The post Doing Strategy to People appeared first on UX Mastery.


“Strategy is the conversations between deliverables” – Kristina Halvorson

As consultants, we know there’s a right way to do websites. This belief often comes from a good place: We care about good design. We want to see it work.

But there’s a downside — we can get a touch judgy. We highlight everything that’s wrong with an organisation’s website (chaotic, redundant, and irrelevant), and feel duty-bound to point it all out. After all, that’s what we’re getting paid to do, right?

The risk is that we end up ‘doing strategy’ to our clients. They already know (or at least fear) that their website is a disaster area. Rather than helping our clients, pointing out these flaws tends to leave clients in a crumpled heap. The client may swallow the medicine, but chances are they’ll never want to come near us again. Kristina Halvorson, speaking at CS Forum 2016, had some very simple advice:

“Show clients how they can be even more awesome.”

Better collaboration through not being that guy

How can I apply her advice to my own work? Well, I need to start reframing my advice so I’m presenting opportunities more than naming faults, and stop being so quick on the draw with my opinions. That’s a work in progress, but there is a head start to being more constructive: changing our language. Also at CS Forum, Hilary Marsh shared two simple phrases that can turn the dynamic around:

  • “You’re right”
  • “Let me show you how.”

This theme also emerged in Michael Metts’ presentation. “Strategy does not belong to us,” he said. “Instead, our message to clients is, ‘Let me come alongside you, see what you do, and find ways of doing it together’.”

In this vein, Kristina Halvorson suggested we shift our thinking from deliverables with rules to principles. A rule sits outside and judges, externally compelling you to do things that someone else deems right. A principle, by contrast, is internally motivating. It’s developed with the client, and can be as simple as a few guiding lights.

Talking strategy talk

Doing strategy better isn’t just about reframing our interpersonal relationships. It’s also about getting clear on how our work ties into business goals. As a free-roaming consultant/contractor, when I see a client with a clear brief and budget, it’s tempting to just pick up the project and run with it.

The danger here is that our work embeds siloed thinking. Nothing is transformed. Instead, we should interrogate the brief. We need to understand what’s happening on either side of our work.

How can we understand that broader context? The client’s ultimate objectives are unlikely to be ‘have a spectacular website’, or even necessarily ‘meet the needs of the user’. Every objective needs to be tied to a business driver, ratcheting up till we reach the organisation’s reason for existing. 

When it comes to business drivers, we should talk about them in whatever language the client uses. Those of us with technical backgrounds can go to the opposite extreme and overuse jargon – our jargon. We trot out ‘XML’ and ‘schema.org’, because that’s smart talk. The same goes for designers, coders, and researchers – it’s easy to slip into our own jargon.

For me, as a trained writer, that means letting go of what I think I know, and not automatically deleting jargon. What if that’s the language that the audience is most comfortable with? If the client thinks in terms of ROI, we talk ROI. We make them feel comfortable and that we ‘get’ them, which builds up trust.

Outputs and incentives

As online communications people, we need to pay close attention to how we’re measured. At CS Forum, Max Johns examined incentives. Are content people’s reporting and reward systems geared around outputs such as blog posts published or reports laid out? By continuing to be measured as manufacturers, are we being excluded from vital conversations on strategy?

Reflecting on how to apply this, I don’t think content people can simply uncouple ourselves from output metrics — at least not straight away. A lot of managers, particularly those in the public service, are too used to a completion paradigm: success is measured by delivering a set of discrete products.

One idea, sparked by Max’s presentation, is that rather than just saying “Yes” to output metrics, we say “Yes, and…”. We produce that content piece, but also ask questions about effectiveness. “How do we know this stuff is working?” “What organisational objectives are we pursuing?”

We can even reframe what we call ourselves. When Rahel Anne Bailie, Chief Knowledge Officer for Scroll, worked for enterprise-level clients, she didn’t always call herself a content strategist. Sometimes, she was “a management consultant specialising in content turnarounds”.

We can be less attached to the divisions and micro-niches in our fields of expertise — and more attuned to the language the client uses at 3.00am, when they’re thinking about the problem.

How do you keep your clients and stakeholders on the same page? Share your tips in the comments, or over in the forums.


Readability Tests: Magic Formula Or Recipe For Disaster?
https://uxmastery.com/readability-tests-magic-formula-or-recipe-for-disaster/
Wed, 11 Sep 2013

Automated readability tests give an indication of whether your text is easy to understand. But just how useful are they? Content strategist Matt Fenwick explores the pros, the cons, and reveals how he uses them in his business.

The post Readability Tests: Magic Formula Or Recipe For Disaster? appeared first on UX Mastery.

Readability tests promise so much. Just take a sample of your text, go to a free online tool, paste the text in, and out comes a number showing how easy your text is to understand.

If only it were that simple.

Readability tests have copped a lot of flak. Critics say that they’re far too simplistic to accurately predict how an actual reader will respond to your text.

I completely agree. And I use readability tests all the time.

I’m going to run through the limitations of these tests. Then I’ll show how, if you keep these limitations in mind, the tests can be immensely useful.

What are readability tests?

Readability tests all share the same basic approach. They count the number of syllables, words or sentences in your text, then find the ratio between these elements. This formula generates a number, which you compare against a standard to determine how readable your text is.

I’ll go through one test to show you how they work.

The Gunning Fog Index aims to show how many years of formal education a reader would need to understand a text. The Index takes the average number of words per sentence, adds the percentage of complex words (words with three or more syllables), then multiplies the result by 0.4.

If the number is 12, that means someone would need 12 years of formal education to understand the text: they finished high school.
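The calculation above is easy to script. Here’s a rough Python sketch; the syllable counter is a naive vowel-group heuristic, so treat its output as indicative only.

```python
import re

def syllables(word):
    """Naive syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def gunning_fog(text):
    """Gunning Fog Index: 0.4 * (avg words per sentence + % complex words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if syllables(w) >= 3]
    avg_sentence_length = len(words) / len(sentences)
    pct_complex = 100 * len(complex_words) / len(words)
    return 0.4 * (avg_sentence_length + pct_complex)

print(round(gunning_fog("The cat sat on the mat. It was happy."), 1))  # 1.8
```

Real implementations use dictionary-based syllable counts and careful sentence splitting, which is why scores from different tools rarely agree exactly.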

There are many other tests out there. The Flesch-Kincaid Index is also popular—partly because it’s built into Microsoft Word.

Criticisms of readability tests

As the old saying goes, if something seems too good to be true, then it probably is. Critics say that you can’t use maths to predict how easy a text is to understand. A common criticism is that the number of syllables doesn’t always predict readability.

Here are two examples.

  1. “Do you accede” v “Do you agree?”

    Both words have the same number of syllables. But because we use ‘agree’ far more often, the second sentence is easier to understand.

  2. “I stared at the television” v “I stared at the quincunx.”

    Here, ‘television’ actually has more syllables, and so would make a text harder to understand according to most readability formulas. But ‘television’ is understood by everyone over the age of two, while not many people need a Latin word for an arrangement of five objects (file ‘quincunx’ under ‘may be useful someday’).

A further criticism is that the readability tests don’t tell you if a sentence makes sense. Take this example:

“This tree is jam.”

The words are simple, so the sentence would score well on a readability test. But it makes no sense whatsoever.

Why I still love readability tests

I use readability tests when I want a quick indication of how readable a chunk of text is—or when a numerical measure will appeal to stakeholders. Even taking the criticisms above on board, we can say that readability tests have some predictive value: if a readability test shows that content has problems, this will often be true.

There’s a reason why convoluted writing is ingrained: professional people are used to writing this way. Simply hearing a writer’s opinion is often not enough to convince them to alter the habits of years—decades, even. And it’s hard to have strategic conversations with senior executives about the state of communication in their organisation by combing through each sentence in a document.

That’s where readability tests come in. Because they generate numbers, you can aggregate data. For example, when I tested a 2,000-page website recently, I could say: “Your target audience will find it difficult to understand 80% of your pages.” This then feeds into decisions about how much work is needed to bring the content up to scratch.
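The aggregation itself is trivial to script. In this sketch the page URLs, scores and the grade-9 threshold are all made up for illustration.

```python
# Per-page Gunning Fog scores (illustrative values only).
page_scores = {
    "/grants": 14.2,
    "/about": 8.1,
    "/apply": 16.7,
    "/contact": 7.3,
    "/news": 12.9,
}

THRESHOLD = 9  # highest grade level the target audience reads comfortably

hard_pages = [url for url, score in page_scores.items() if score > THRESHOLD]
pct_hard = 100 * len(hard_pages) / len(page_scores)
print(f"{pct_hard:.0f}% of pages are likely too hard for the target audience")
# → 60% of pages are likely too hard for the target audience
```

A single percentage like this is far easier to put in front of a senior executive than a marked-up document.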

I’ve also used these tests with small-business owners as a starting point for conversations about how they can make their web content clearer, and more concrete.

I would only ever use readability tests as a diagnostic for content as a whole. The tools aren’t sophisticated enough for sentence-by-sentence analysis—or to act as a checkpoint when clearing a document.

The gold standard will always be testing the content with actual users. But if you need a rough picture, then readability tests are a useful tool for your kit.


