
My Grandfather’s Travel Logs and Other Repetitive Tasks

My grandfather, James, was a meticulous recordkeeper. He kept handwritten journals detailing everything from his doctor visits to the daily fluctuations of stocks he owned. I only discovered this part of his life seven years after his death, when my family's basement flooded on Christmas Eve in 2011 and we found his journals while cleaning up the damage. His travel records impressed me the most. He documented every trip he ever took, including dates, countries and cities visited, methods of travel, and people he traveled with. In total, he left the United States 99 times, visited 80 countries, and spent 1,223 days at sea on 48 ships.

A section of the handwritten travel log kept by the author's grandfather
A section of the travel log.

I was only twenty-four when he died, so I hadn't yet realized that I'd inherited many of his record-keeping, journaling, and collecting habits. And I had never had the chance to ask him many questions about his travels (like why he went to Venezuela twelve times or what he was doing in Syria and Beirut in the 1950s). So, in an effort to discover more about him, I decided to make an infographic of his travel logs.

Today, we take for granted that we can check stocks on our phones or go online and view records from doctor visits. The kinds of repetitive tasks my grandfather did might seem excessive, especially to young web developers and designers who've never had to do them. But my grandfather had no recording method besides pencil and paper for most of his life, so this was a normal and especially vital part of his daily routine.

A photograph of a ship called SS Amor, taken by the author's grandfather in the West Indies in 1939.
SS Amor in the West Indies. Taken by the author's grandfather in 1939.
A photograph of the New York City skyline, taken by the author's grandfather, probably in the 1930s.
New York City. Taken by the author's grandfather, probably in the 1930s.

Whether you're processing Sass, minifying, or using Autoprefixer, you're using tools to perform mundane and repetitive tasks that people previously had to do by hand, albeit in a different medium.

But what do you do when you're faced with a problem that can't be solved with a plugin, like my grandfather's travel data? If you're a designer, what's the best way to structure unconventional data so you can just focus on designing?

My idea for the travel web app was to graph each country based on the number of my grandfather's visits. As the country he visited the most (twenty-two times), Bermuda would have a graph bar stretching 100 percent across the screen, while a country he visited eleven times (St. Thomas, for example) would stretch roughly 50 percent across, the proportions adjusted slightly to fit the name and visits. I also wanted each graph bar to be the country's main flag color.

The big issue to start was that some of the data was on paper and some was already transcribed into a text file. I could have written the HTML and CSS by hand, but I wanted to have the option to display the data in different ways. I needed a JSON file.

I tediously transcribed the remaining travel data into a tab-separated text file for the countries. I added the name, number of visits, and flag color:

honduras	1	#0051ba
syria	1	#E20000
venezuela	16	#fcd116
enewetak	2	rgb(0,56,147)

For the ships, I added the date and name:

1941    SS Granada
1944    USS Alimosa
1945    USS Alcoa Patriot

Manually creating a JSON file would have taken forever, so I used JavaScript to iterate through the text files and create two separate JSON files (one for countries and one for ships), which I would later merge.

First, I used Node's readFileSync() and trim() to remove any whitespace at the end of the file so as to avoid an empty item in the results:

const fs = require('fs');

let countriesData = fs.readFileSync('countries.txt', 'utf8')
	.trim();

This returned the contents of the countries.txt file and stored it in a variable called countriesData. At that point, I logged the variable to the console, which showed that the data was lumped together into one giant string with a bunch of tabs (\t) and newlines (\n):

"angaur\t2\t#56a83c\nantigua\t5\t#ce1126\nargentina\t2\trgb(117,170,219)\naruba\t10\trgb(0,114,198)\nbahamas\t3\trgb(0,173,198)\nbarbados\t6\trgb(255,198,30)\nbermuda\t22\trgb(0,40,104)\nbonaire\t1\trgb(37,40,135)\nguyana\t2\trgb(0,158,73)\nhonduras\t1\trgb(0,81,186)\nvirgin Islands\t2\trgb(0,40,104)\nbrazil\t3\trgb(30,181,58)\nburma\t1\trgb(254,203,0)\ncanary Islands\t1\trgb(7,104,169)\ncanal Zone\t7\trgb(11,14,98)\ncarriacou\t1\trgb(239,42,12)\n ..."

Next, I split the string at the line breaks (\n):

const fs = require('fs');

let countriesData = fs.readFileSync('countries.txt', 'utf8')
	.trim()
	.split('\n');

After split(), in the console, the countries' data lived in an array:

[ 'angaur\t2\t#56a83c',
  'antigua\t5\t#ce1126',
  'argentina\t2\trgb(117,170,219)',
  ... ]
I wanted to split each item of country data at the tabs, separating the name, number of visits, and color. To do this, I used map(), which iterates and runs a function on each item, returning something new. In this case, it split the string at each tab it found and returned a new array:

const fs = require('fs');

let countriesData = fs.readFileSync('countries.txt', 'utf8')
	.trim()
	.split('\n')
	.map(item => item.split('\t'));

After I used map(), countriesData was an array of arrays, with each country's data split into separate items:

[ [ 'angaur', '2', '#56a83c' ],
  [ 'antigua', '5', '#ce1126' ],
  [ 'argentina', '2', 'rgb(117,170,219)' ],
  ... ]
To create the final output for each country, I used reduce(), which uses an accumulator and a function to create something new, whether that's an object, a value, or an array. Accumulator is a fancy way of referring to the end product, which in our case is an object ({}).

const fs = require('fs');

let countriesData = fs.readFileSync('countries.txt', 'utf8')
	.trim()
	.split('\n')
	.map(item => item.split('\t'))
	.reduce((countries, item) => {
		return countries;
	}, {countries: []});

I knew I wanted {countries: []} to contain the data. So instead of creating it on the first pass and testing whether it existed on each iteration, I added {countries: []} to the resulting object. That way, it existed before I started iterating.

This process returned an object with an empty countries array, because I hadn't told reduce() what to do with each array of data.

To fix this, I used reduce() to push and add a new object for each country with the name (item[0]), visits (item[1]), and color (item[2]) into the end result object. Finally, I used a capitalization function on each name value to ensure formatting would be consistent.

const fs = require('fs');

const cap = (s) => {
  return s.charAt(0).toUpperCase() + s.slice(1);
};

let countriesData = fs.readFileSync('countries.txt', 'utf8')
	.trim()
	.split('\n')
	.map(item => item.split('\t'))
	.reduce((countries, item) => {
		countries.countries.push({
			name: cap(item[0]),
			visits: item[1],
			color: item[2]
		});
		return countries;
	}, {countries: []});
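To see the shape of the result, the same chain can be run on a small inline sample instead of the file. This is a sketch for illustration; the sample string simply stands in for the contents of countries.txt:

```javascript
// Inline sample standing in for the contents of countries.txt
const sample = 'honduras\t1\t#0051ba\nsyria\t1\t#E20000\n';

const cap = (s) => s.charAt(0).toUpperCase() + s.slice(1);

const result = sample
  .trim()                          // drop the trailing newline
  .split('\n')                     // one string per country
  .map(item => item.split('\t'))   // [name, visits, color]
  .reduce((countries, item) => {
    countries.countries.push({
      name: cap(item[0]),
      visits: item[1],
      color: item[2]
    });
    return countries;
  }, {countries: []});

console.log(result);
// → { countries: [ { name: 'Honduras', visits: '1', color: '#0051ba' }, ... ] }
```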

I used the same method for the ships.txt file and merged the two using Object.assign, a method that copies the properties of source objects onto a target object and returns the result.

let result = Object.assign({}, countriesData, shipsData);

I could have created a function that took a text file and an object, or created a form-to-JSON tool, but these seemed like overkill for this project, and I had already transcribed some of the data into separate files before even conceiving of the infographic idea. The final JSON result can be found on CodePen.

I used the JSON data to create the infographic bars, defining the layout for each one with CSS Grid and dynamic styles for width and color. I think my grandfather would have enjoyed seeing his handwritten logs transformed into a visual format that showcases the breadth of his travels.
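The bar sizing described earlier can be sketched as a small function: the most-visited country maps to 100 percent, and every other country scales linearly from it. This is my own reconstruction of the idea, not the project's actual code:

```javascript
// Scale a country's bar relative to the most-visited country
const barWidth = (visits, maxVisits) =>
  Math.round((visits / maxVisits) * 100);

// Build an inline-styled bar for a country object (illustrative markup)
const bar = (country, maxVisits) =>
  `<div class="bar" style="width: ${barWidth(country.visits, maxVisits)}%; ` +
  `background: ${country.color}">${country.name}: ${country.visits}</div>`;

const countries = [
  { name: 'Bermuda',    visits: 22, color: 'rgb(0,40,104)' },
  { name: 'St. Thomas', visits: 11, color: '#0051ba' }
];
const max = Math.max(...countries.map(c => c.visits));

console.log(countries.map(c => bar(c, max)).join('\n'));
```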

He passed away in 2005, but I remember showing him my Blackberry and explaining the internet to him, showing him how he could look at pictures from around the world and read articles. He took a sip of his martini and sort of waved his hand at the screen. I think he preferred handwritten notes and life outside of the internet, something many of us can appreciate. After sifting through all his travel logs, I more clearly understood the importance he placed on having different experiences, meeting new people, and fearlessly exploring the world. To him, his travels were more than just dates on a page. Now they're more than that for me, too.

The author wishes to thank Mattias Petter Johansson, whose video series, "Fun Fun Function," inspired some of the thinking in this article.

How the Sausage Gets Made: The Hidden Work of Content

I won an Emmy for keeping a website free of dick pics.

Officially, my award certificate says I was on a team that won a 2014 Emmy for Interactive Media, Social TV Experience. The category "Social TV Experience" sounds far classier than my true contribution to the project.

The award-winning Live From Space site served as a second-screen experience for a National Geographic Channel show of the same name. The show Live From Space covered the wonders of the International Space Station. The website displayed the globe as seen by astronauts, along with entertaining social data about each country crossed by the Space Station's trajectory. One of those data points was an Instagram feed showcasing images of local cuisine.

Image of the National Geographic Channel's Live From Space second-screen experience, including an Instagram photo of an Australian repast.
The second-screen experience for National Geographic Channel's Live From Space event, featuring an Instagram photo of local food.

You might think that adding this feed was a relatively simple task. Include a specific channel, or feed in images tagged with #food and the country in which the images were taken, connect to an API, and boom: a stream of images from food bloggers in South Africa, Taiwan, Mexico, what have you. One exec was so impressed that he called this feature "automagical."

What he described as "automagical" was actually me sitting in front of a computer screen, scanning Instagram, hunting for the most appetizing images, avoiding the unappetizing ones, and pasting my choices into a spreadsheet for import by a developer. I wouldn't call it automated, and I wouldn't call it magical. As the team's content manager, I performed this task because the Instagram API wasn't playing nice with the developers, but we had to get that information into the site by the deadline somehow.

An additional, and perhaps worse, problem was that if you found a feed of images taken in certain countries and tagged #food, you might get pictures of sausage. But we're talking about the kinds of sausages usually discussed in locker rooms or on school buses full of junior high boys. As you can imagine, you cannot add Instagram photos tagged #food to a family-friendly site without a little effort, either in terms of getting around an API or filtering out the naughty bits.

The mythical "automagical" tool

You might think I'm knocking the website, but I'm not. Many creative, brilliant people worked ridiculous hours to create a gorgeous experience for which they rightly earned an award, and the images of local cuisine made up only a small slice of the site's data.

Yet I feel conflicted about my own involvement with Live From Space because most of the site's users still have no idea how the sausage of apps and websites gets made. In fact, these people may never know because the site is no longer live.

Or they may not care. Few people are aware of the rote work that goes into moving or importing data from one website to another, which causes problems if they don't understand how long it takes to make content happen. Unless you're working with a pristine data source, there often is no "content hose" or "automagical" tool that cleans up data and moves it from one app or content management system to another. Unfortunately, the assumption that a "content hose" exists can lead to miscommunication, frustration, and delays when it is time to produce the work.

Oftentimes, a person will need to go in, copy content, and paste it into the new app or CMS. They must repeat this task until the app or site is ready for launch. This type of work usually spurs revolt within the workplace, and I can't say I blame people for being upset. Unless you know some tips, tricks, and shortcuts, as I do, you have a long stretch of tedious, mind-numbing work ahead of you.

Did someone say shortcuts?

Yes, you do have shortcuts when it comes to pulling content into a website. Those shortcuts happen earlier in the site-building process than you may think, and they rely on making sure your entire team is involved in the content process.

The most important thing when you are creating a new site or migrating an existing one is to lock down the content you want to bring in, as early as possible.

In the case of the National Geographic Channel website, the team knew it needed the map data and the coordinates, but did it really need the Instagram feed with the food data? And, when the creative team decided it needed the food data, did anyone ask questions about how the food data would be drawn into the site?

This involves building tactical questions into the creative workflow. When someone is on a creative roll, the last thing I want to do is slow them down by asking overly tactical questions. But all brainstorming sessions should include a team member who is taking notes as the ideas fly so they can ask the crucial questions later:

  • Where will this content come from?
  • Do we have a team member who can generate this content from a data feed or from scratch?
  • If not, do we need to hire someone?

These questions are nothing new to a content strategist, but they must be asked in the earliest stages of the project. Think about it: if your team is in love with an idea, and the client falls in love with it, too, then you will have a harder time changing course if you can't create the content that makes the site run.

Site updates and migrations are a little bit different in that most of the content exists, but you'd be surprised by how few team members know their content. Right now, I am working for a company that helps universities revamp their considerably large websites, and the first thing we do when making the sausage is halve the recipe.

First, we use Screaming Frog to generate a content inventory, which we spot-check for any unusual features that might need to be incorporated into the new site. Then we pass the inventory to the client, asking them to go through it and archive duplicate or old content. Once they archive the old content, they can focus on what they intend to revise or keep as is.

Image of an in-progress content inventory for one of iFactory's current clients, a large community college.
A work-in-progress content inventory for a large community college.

During the first few weeks of any project, I check in with the client about how they are doing with their content archive. If they aren't touching the content early, we schedule a follow-up meeting and essentially haunt them until they make tough decisions.

Perfecting the process

How do we improve the way our teams relate to content? How do we show them how the content sausage gets made without grossing anyone out? Here are a few tips:

Your content strategist and your developer need to be on speaking terms. "Content strategist" isn't a fancy name for a writer or an editor. A good content strategist knows how to work with developers. For one site migration involving a community college, I used Screaming Frog to scrape the content from the original site. Then I passed the resulting .csv document back and forth with the developer, fine-tuning the alignment of fields each time so it would be easier for us to import the material into GatherContent, an editorial tool for digital projects.

Speaking of GatherContent ... set up a proper content workflow. GatherContent allows you to assign specific tasks to team members so you can divide work. Even better, GatherContent's editorial tool allows each page to pass through specific points in the editorial process, including drafting, choosing pictures, adding tags, and uploading to the CMS.

Train the team on how to transform the current content. In my current workplace, not only do we train the client on how to use the CMS, but we also provide Content Guidelines, an overview of the basic building blocks that make up a web page. I've shown clients how to create fields for page metadata, images, image alt text, and downloads, and we do this early so the client doesn't wait until the last minute to dive into details.

Sample slides from an iFactory Content Guidelines presentation.
Sample slides from a Content Guidelines presentation for one of iFactory's current clients.

Actually make the sausage. Clever uses of tools and advance training can only go so far. At some point you will need to make sure that what is in the CMS lines up with what you intended. You may need to take your content source, remove any odd characters, shift content from one field to another, and make the content safe for work, just like removing dick pics.

Make sure everyone on your team scrapes, scrubs, and uploads content at least once. Distributing the work ensures that your team members think twice before recommending content that doesn?t exist or content that needs a serious cleanup. That means each team member should sit down and copy content directly into the CMS or scrub the content that is there. An hour or two is enough to transform perspectives.

Push back if a team member shirks his or her content duty. Occasionally, you will encounter people who believe their roles protect them from content. I've heard people ask, "Can't we get an intern to do that?" or "Can't we do that through Mechanical Turk?" Sometimes, these people mean well and are thinking of efficiency, but other times, their willingness to brush content off as an intern task or as a task worth a nickel or two should be alarming. It's demeaning to those who do the work, for starters, but it also shows that they are cavalier about content. Asking someone to pitch in for content creation or migration is a litmus test. If they don't seem to take content seriously, you have to ask: just how committed are these people to serving up a quality digital experience? Do you even want them on your team in the future? By the way, I've seen VPs and sales team members entering content in a website, and every last one of them told me that the experience was eye-opening.

People are the "automagical" ingredient

None of these shortcuts and process tips are possible without some kind of hidden content work. Content is often discussed in terms of which gender does what kind of work and how they are recognized for it. This worthwhile subject is covered in depth by many authors, especially in the context of social media, but I'd like to step back and think about why this work is hidden and how we can avoid delays, employee revolts, and overall tedium in the future.

Whether you're scraping, scrubbing, copying, or pasting, the connecting thread for all hidden content work is that nearly no one thinks of it until the last minute. In general, project team members can do a better job of thinking about how content needs to be manipulated to fit a design or a data model. Then they should prepare their team and the client for the amount of work it will take to get content ready and entered into a site. By taking the initiative, you can save time, money, and sanity. If you're really doing it right, you can make a site that's the equivalent of a sausage, without dubious ingredients.


The Best Request Is No Request, Revisited

Over the last decade, web performance optimization has been governed by one indisputable guideline: the best request is no request. A very humble rule, easy to interpret. Every network call for a resource eliminated improves performance. Every src attribute spared, every link element dropped. But everything has changed now that HTTP/2 is available, hasn't it? Designed for the modern web, HTTP/2 is more efficient at responding to a larger number of requests than its predecessor. So the question is: does the old rule of reducing requests still hold up?

What has changed with HTTP/2?

To understand how HTTP/2 is different, it helps to know about its predecessors. A brief history follows. HTTP builds on TCP. While TCP is powerful and is capable of transferring lots of data reliably, the way HTTP/1 utilized TCP was inefficient. Every resource requested required a new TCP connection. And every TCP connection required synchronization between the client and server, resulting in an initial delay as the browser established a connection. This was OK in times when the majority of web content consisted of unstyled documents that didn't load additional resources, such as images or JavaScript files.

Updates in HTTP/1.1 try to overcome this limitation. Clients are able to use one TCP connection for multiple resources, but still have to download them in sequence. This so-called "head of line blocking" makes waterfall charts actually look like waterfalls:

Figure 1. Schematic waterfall of assets loading over one pipelined TCP connection
Figure 1. Schematic waterfall of assets loading over one pipelined TCP connection

Also, most browsers started to open multiple TCP connections in parallel, limited to a rather low number per domain. Even with such optimizations, HTTP/1.1 is not well-suited to the considerable number of resources of today's websites. Hence the saying "The best request is no request." TCP connections are costly and take time. This is why we use things like concatenation, image sprites, and inlining of resources: avoid new connections, and reuse existing ones.

HTTP/2 is fundamentally different from HTTP/1.1. HTTP/2 uses a single TCP connection and allows more resources to be downloaded in parallel than its predecessor. Think of this single TCP connection as one broad tunnel where data is sent through in frames. On the client, all frames get reassembled into their original resources. Using a couple of link elements to transfer style sheets is now practically as efficient as bundling all of your style sheets into one file.

Figure 2. Schematic waterfall of assets loading over one shared TCP connection
Figure 2. Schematic waterfall of assets loading over one shared TCP connection

All connections use the same stream, so they also share bandwidth. Depending on the number of resources, this might mean that individual resources could take longer to be transmitted to the client side on low-bandwidth connections.

This also means that resource prioritization is not done as easily as it was with HTTP/1.1: back then, the order of resources in the document had an impact on when they would begin to download. With HTTP/2, everything happens at the same time! The HTTP/2 spec contains information on stream prioritization, but at the time of this writing, placing control over prioritization in developers' hands is still in the distant future.

The best request is no request: cherry-picking

So what can we do to overcome the lack of waterfall resource prioritization? What about not wasting bandwidth? Think back to the first rule of performance optimization: the best request is no request. Let?s reinterpret the rule.

For example, consider a typical webpage (in this case, from Dynatrace). The screenshot below shows a piece of online documentation consisting of different components: main navigation, a footer, breadcrumbs, a sidebar, and the main article.

Figure 3. A typical website split into a few components
Figure 3. A typical website split into a few components

On other pages of the same site, we have things like a masthead, social media outlets, galleries, or other components. Each component is defined by its own markup and style sheet.

In HTTP/1.1 environments, we would typically combine all component style sheets into one CSS file. The best request is no request: one TCP connection to transfer all the CSS necessary, even for pages the user hasn't seen yet. This can result in a huge CSS file.

The problem is compounded when a site uses a library like Bootstrap, which has reached the 300 kB mark, and adds site-specific CSS on top of it. The actual amount of CSS required by any given page can be less than 10% of the amount loaded:

Figure 4. Code coverage of a random cinema webpage that uses 10% of the bundled 300 kB CSS. This page is built upon Bootstrap.
Figure 4. Code coverage of a random cinema webpage that uses 10% of the bundled 300 kB CSS. This page is built upon Bootstrap.

There are even tools like UnCSS that aim to get rid of unused styles.

The Dynatrace documentation example shown in figure 3 is built with the company's own style library, which is tailored to the site's specific needs, as opposed to Bootstrap, which is offered as a general-purpose solution. All components in the company style library combined add up to 80 kB of CSS. The CSS actually used on the page is divided among eight of those components, totaling 8.1 kB. So even though the library is tailored to the specific needs of the website, the page still uses only around 10% of the CSS it downloads.

HTTP/2 allows us to be much more picky when it comes to the files we want to transmit. The request itself is not as costly as it is in HTTP/1.1, so we can safely use more link elements, pointing directly to the style sheets used on that particular page:

<link rel="stylesheet" href="/css/base.css">
<link rel="stylesheet" href="/css/typography.css">
<link rel="stylesheet" href="/css/layout.css">
<link rel="stylesheet" href="/css/navbar.css">
<link rel="stylesheet" href="/css/article.css">
<link rel="stylesheet" href="/css/footer.css">
<link rel="stylesheet" href="/css/sidebar.css">
<link rel="stylesheet" href="/css/breadcrumbs.css">

This, of course, is true for every sprite map or JavaScript bundle as well. By just transferring what you actually need, the amount of data transferred to your site can be reduced greatly! Compare the download times for bundle and single files shown with Chrome timings below:

Figure 5. Download of the bundle. After the initial connection is established, the bundle takes 583 ms to download on regular 3G.
Figure 5. Download of the bundle. After the initial connection is established, the bundle takes 583 ms to download on regular 3G.
Figure 6. Split only the files needed, and download them in parallel. The initial connection takes about as long, but the content (one style sheet, in this case) downloads much faster because it is smaller.
Figure 6. Split only the files needed, and download them in parallel. The initial connection takes about as long, but the content (one style sheet, in this case) downloads much faster because it is smaller.

The first image shows that, including the time required for the browser to establish the initial connection, the bundle needs about 700 ms to download on regular 3G connections. The second image shows timing values for one CSS file out of the eight that make up the page. The beginning of the response (TTFB) takes about as long, but since the file is a lot smaller (less than 1 kB), the content is downloaded almost immediately.

This might not seem impressive when looking at only one resource. But as shown below, since all eight style sheets are downloaded in parallel, we still can save a great deal of transfer time when compared to the bundle approach.

Figure 7. All style sheets on the split variant load in parallel.
Figure 7. All style sheets on the split variant load in parallel.

When loading the full page on regular 3G, we can see a similar pattern. The full bundle (main.css) starts to download just after 1.5 s (yellow line) and takes 1.3 s to download; the time to first meaningful paint is around 3.5 seconds (green line):

Figure 8. Full page download of the bundle, regular 3G.
Figure 8. Full page download of the bundle, regular 3G.

When we split up the CSS bundle, each style sheet starts to download at 1.5 s (yellow line) and takes 315–375 ms to finish. As a result, we can reduce the time to first meaningful paint by more than one second (green line):

Figure 9. Downloading single files instead, regular 3G.
Figure 9. Downloading single files instead, regular 3G.

Per our measurements, the difference between bundled and split files has even more impact on slow 3G than on regular 3G. On slow 3G, the bundle needs a total of 4.5 s to download, resulting in a time to first meaningful paint at around 7 s:

Figure 10. Bundle, slow 3G.
Figure 10. Bundle, slow 3G.

The same page with split files on slow 3G connections results in the first meaningful paint (green line) occurring 4 s earlier:

Figure 11. Split files, slow 3G.
Figure 11. Split files, slow 3G.

The interesting thing is that what was considered a performance anti-pattern in HTTP/1.1—using lots of references to resources—becomes a best practice in the HTTP/2 era. Plus, the rule stays the same! The meaning changes slightly.

The best request is no request: drop files and code your users don't need!

It has to be noted that the success of this approach is strongly connected to the number of resources transferred. The example above used 10% of the original style sheet library, which is an enormous reduction in file size. Downloading the whole UI library in split-up files might give different results. For example, Khan Academy found that by splitting up their JavaScript bundles, the overall application size (and thus the transfer time) became drastically worse. This was mainly for two reasons: a huge number of JavaScript files (close to 100), and the often underestimated powers of Gzip.

Gzip (and Brotli) yields higher compression ratios when there is repetition in the data it is compressing. This means that a Gzipped bundle typically has a much smaller footprint than Gzipped single files. So if you are going to download a whole set of files anyway, the compression ratio of bundled assets might outperform that of single files downloaded in parallel. Test accordingly.

Also, be aware of your user base. While HTTP/2 has been widely adopted, some of your users might be limited to HTTP/1.1 connections. They will suffer from split resources.

The best request is no request: caching and versioning

To this point with our example, we've seen how to optimize the first visit to a page. The bundle is split up into separate files and the client receives only what it needs to display on a page. This gives us the chance to look into something people tend to neglect when optimizing for performance: subsequent visits.

On subsequent visits we want to avoid re-transferring assets unnecessarily. HTTP headers like Cache-Control (and their implementation in servers like Apache and NGINX) allow us to store files on the user's disk for a specified amount of time. Some CDN servers default that to a few minutes, others to a few hours or even days. The idea is that during a session, users shouldn't have to download assets they have already received (unless they've cleared their cache in the interim). For example, the following Cache-Control header directive makes sure the file is stored in any cache available, for 600 seconds.

Cache-Control: public, max-age=600
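Server-side, a header like this is typically set in the web server configuration. In NGINX, for instance, the standard add_header directive attaches it to matching responses (the /css/ location below is illustrative, not from the article's setup):

```nginx
# Cache everything served from /css/ for 600 seconds
location /css/ {
    add_header Cache-Control "public, max-age=600";
}
```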

We can leverage Cache-Control to be much more strict. In our first optimization we decided to cherry-pick resources and be choosy about what we transfer to the client, so let?s store these resources on the machine for a long period of time:

Cache-Control: public, max-age=31536000

The number above is one year in seconds. The usefulness of setting a high Cache-Control max-age value is that the asset will be stored by the client for a long period of time. The screenshot below shows a waterfall chart of the first visit. Every asset referenced by the HTML file is requested:

Figure 12. First visit: every asset is requested.

With properly set Cache-Control headers, a subsequent visit results in fewer requests. The screenshot below shows that the assets requested from our test domain don’t trigger a request at all. Assets from another domain with improperly set Cache-Control headers still trigger a request, as do resources that haven’t been found:

Figure 13. Second visit: only some poorly cached SVGs from a different server are requested again.

When it comes to invalidating a cached asset (famously one of the two hardest things in computer science), we simply use a new asset instead. Let’s see how that would work with our example. Caching works based on file names: a new file name triggers a new download. Previously, we split up our code base into reasonable chunks. A version indicator makes sure that each file name stays unique:

<link rel="stylesheet" href="/css/header.v1.css">
<link rel="stylesheet" href="/css/article.v1.css">

After a change to our article styles, we would modify the version number:

<link rel="stylesheet" href="/css/header.v1.css">
<link rel="stylesheet" href="/css/article.v2.css">

An alternative to keeping track of the file’s version by hand is to set a revision hash based on the file’s contents with automation tools.
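As a sketch of how such a tool might work (the css/ directory and file names here are made up; real build tools do this for you):

```shell
# Create a stand-in stylesheet to version.
mkdir -p css
printf 'article { max-width: 40em; }\n' > css/article.css

# Derive a short revision hash from the file contents; any change to
# the contents yields a new hash, and therefore a new file name.
rev=$(sha1sum css/article.css | cut -c1-8)
cp css/article.css "css/article.${rev}.css"

ls css/
```

The HTML then references the hashed name, and editing the file automatically busts the cache because the name changes.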

It’s OK to store your assets on the client for a long period of time. However, your HTML should be more transient in most cases. Typically, the HTML file contains the information about which resources to download. Should you want your resources to change (such as loading article.v2.css instead of article.v1.css, as we just saw), you’ll need to update references to them in your HTML. Popular CDN servers cache HTML for no longer than six minutes, but you can decide what’s better suited for your application.
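Putting the two lifetimes together, a server configuration might look something like this (an illustrative NGINX sketch, not a drop-in config; the paths and the six-minute HTML lifetime are assumptions based on the pattern above):

```nginx
# Hashed or versioned assets: cache aggressively, for up to a year.
location /css/ {
	add_header Cache-Control "public, max-age=31536000, immutable";
}

# The HTML names the current asset versions, so keep it short-lived.
location / {
	add_header Cache-Control "public, max-age=360";
}
```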

And again, the best request is no request: store files on the client as long as possible, and don’t request them over the wire ever again. Recent Firefox and Edge editions even sport an immutable directive for Cache-Control, targeting this pattern specifically.

Bottom line

HTTP/2 has been designed from the ground up to address the inefficiencies of HTTP/1. Triggering a large number of requests in an HTTP/2 environment is no longer inherently bad for performance; transferring unnecessary data is.

To reach the full potential of HTTP/2, we have to look at each case individually. An optimization that is good for one website can have a negative effect on another. With all the benefits that come with HTTP/2, the golden rule of performance optimization still applies: the best request is no request. Only this time, we look at the actual amount of data transferred.

Only transfer what your users actually need. Nothing more, nothing less.

Faux Grid Tracks

A little while back, there was a question posted to css-discuss:

Is it possible to style the rows and columns of a [CSS] grid—the grid itself? I have an upcoming layout that uses what looks like a tic-tac-toe board—complete with the vertical and horizontal lines of said tic-tac-toe board—with text/icon in each grid cell.

This is a question I expect to come up repeatedly, as more and more people start to explore Grid layout. The short answer is: no, it isn’t possible to do that. But it is possible to fake the effect, which is what I’d like to explore.

Defining the grid

Since we’re talking about tic-tac-toe layouts, we’ll need a containing element around nine elements. We could use an ordered list, or a paragraph with a bunch of <span>s, or a <section> with some <div>s. Let’s go with that last one.

<section id="ttt">

We’ll take those nine <div>s and put them into a three-by-three grid, with each row five ems high and each column five ems wide. Setting up the grid structure is straightforward enough:

#ttt {
	display: grid;
	grid-template-columns: repeat(3,5em);
	grid-template-rows: repeat(3,5em);
}

That’s it! Thanks to the auto-flow algorithm inherent in Grid layout, that’s enough to put the nine <div> elements into the nine grid cells. From there, creating the appearance of a grid is a matter of setting borders on the <div> elements. There are a lot of ways to do this, but here’s what I settled on:

#ttt > * {
	border: 1px solid black;
	border-width: 0 1px 1px 0;
	display: flex; /* flex styling to center content in divs */
	align-items: center;
	justify-content: center;
}
#ttt > *:nth-of-type(3n) {
	border-right-width: 0;
}
#ttt > *:nth-of-type(n+7) {
	border-bottom-width: 0;
}

The result is shown in the basic layout below.

Screenshot: The basic layout features a 3x3 grid with lines breaking up the grid like a tic-tac-toe board.
Figure 1: The basic layout

This approach has the advantage of not relying on class names or what-have-you. It does fall apart, though, if the grid flow is changed to be columnar, as we can see in Figure 2.

#ttt {
	display: grid;
	grid-template-columns: repeat(3,5em);
	grid-template-rows: repeat(3,5em);
	grid-auto-flow: column;  /* a change in layout! */
}
Screenshot: If you switch the grid to columnar flow order, the borders get out of whack. Instead of a tic-tac-toe board, the right-most horizontal borders have moved to the bottom of the grid and the bottom-most vertical borders have moved to the right edge.
Figure 2: The basic layout in columnar flow order

If the flow is columnar, then the border-applying rules have to get flipped, like this:

#ttt > *:nth-of-type(3n) {
	border-bottom-width: 0;
}
#ttt > *:nth-of-type(n+7) {
	border-right-width: 0;
}

That will get us back to the result we saw in Figure 1, but with the content in columnar order instead of row order. There’s no row reverse or column reverse in Grid like there is in flexbox, so we only have to worry about normal row and columnar flow patterns.

But what if a later change to the design leads to grid items being rearranged in different ways? For example, there might be a reason to take one or two of the items and display them last in the grid, like this:

#ttt > *:nth-of-type(4), #ttt > *:nth-of-type(6) {
	order: 66;
}

Just like in flexbox, this will move the displayed grid items out of source order, placing them after the grid items that don’t have explicit order values. If this sort of rearrangement is a possibility, there’s no easy way to switch borders on and off in order to create the illusion of the inner grid lines. What to do?

Attack of the filler <b>s!

If we want to create standalone styles that follow grid tracks—that is, presentation aspects that aren’t directly linked to the possibly-rearranged content—then we need other elements to place and style. They likely won’t have any content, making them a sort of structural filler to spackle over the gaps in Grid’s capabilities.

Thus, to the <section> element, we can add two <b> elements with identifiers.

<section id="ttt">
	<b id="h"></b>
	<b id="v"></b>

These “filler <b>s,” as I like to call them, could be placed anywhere inside the <section>, but the beginning works fine. We’ll stick with that. Then we add these styles to our original grid from the basic layout:

b[id] {
	border: 1px solid gray;
}
b#h {
	grid-column: 1 / -1;
	grid-row: 2;
	border-width: 1px 0;
}
b#v {
	grid-column: 2;
	grid-row: 1 / -1;
	border-width: 0 1px;
}

The 1 / -1 means “go from the first grid line to the last grid line of the explicit grid,” regardless of how many grid lines there might be. It’s a handy pattern to use in any situation where you have a grid item meant to stretch from edge to edge of a grid.

So the horizontal <b> has top and bottom borders, and the vertical <b> has left and right borders. This creates the board lines, as shown in Figure 3.

Screenshot: With the filler b tags, you can see the tic-tac-toe board again. But only the corners of the grid are filled with content, and there are 5 cells below the board as the grid lines have displaced the content.
Figure 3: The basic layout with “Filler <b>s”

Hold on a minute: we got the tic-tac-toe grid back, but now the numbers are in the wrong places, which means the <div>s that contain them are out of place. Here’s why: the <div> elements holding the actual content will no longer auto-flow into all the grid cells, because the filler <b>s are already occupying five of the nine cells. (They’re the cells in the center column and row of the grid.) The only way to get the <div> elements into their intended grid cells is to explicitly place them. This is one way to do that:

div:nth-of-type(3n+1) {
	grid-column: 1;
}
div:nth-of-type(3n+2) {
	grid-column: 2;
}
div:nth-of-type(3n+3) {
	grid-column: 3;
}
div:nth-of-type(-n+3) {
	grid-row: 1;
}
div {
	grid-row: 2;
}
div:nth-of-type(n+7) {
	grid-row: 3;
}

That works if you know the content will always be laid out in row-then-column order. Switching to column-then-row requires rewriting the CSS. If the contents are to be placed in a jumbled-up order, then you’d have to write a rule for each <div>.
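For example, pinning down a shuffled arrangement might look like this (a sketch with hypothetical positions; you would need one such rule per <div>):

```css
/* Hypothetical explicit placements for a jumbled-up order. */
div:nth-of-type(1) { grid-column: 2; grid-row: 3; }
div:nth-of-type(2) { grid-column: 1; grid-row: 1; }
/* …and so on, one rule for each of the nine <div>s. */
```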

This probably suffices for most cases, but let’s push this even further. Suppose you want to draw those grid lines without interfering with the automatic flow of the contents. How can this be done?


It would be handy if there were a property to mark elements as not participating in the grid flow, but there isn’t. So instead, we’ll split the contents and filler into their own grids, and use a third grid to put one of those grids over the other.

This will necessitate a bit of structural change, because for it to work, the contents and the filler <b>s have to have identical grids. Thus we end up with:

<section id="ttt">
	<div id="board">
		<b id="h"></b>
		<b id="v"></b>
	</div>
	<div id="content">
		<!-- the nine content <div>s go here -->
	</div>
</section>

The first thing is to give the board and the content <div>s identical grids. The same grid we used before, in fact. We just change the #ttt rule’s selector a tiny bit, to select the children of #ttt instead:

#ttt > * {
	display: grid;
	grid-template-columns: repeat(3,5em);
	grid-template-rows: repeat(3,5em);
}

Now that the two grids have the same layout, we need to place one over the other. We could relatively position the #ttt container and absolutely position its children, but there’s another way: use Grid.

#ttt { /* new rule added */
	display: grid;
}
#ttt > * {
	display: grid;
	grid-template-columns: repeat(3,5em);
	grid-template-rows: repeat(3,5em);
}

But wait—where are the rows and columns for #ttt? Where we’re going, we don’t need rows (or columns). Here is how the two grids end up occupying the same area with one on top of the other:

#ttt {
	display: grid;
}
#ttt > * {
	display: grid;
	grid-template-columns: repeat(3,5em);
	grid-template-rows: repeat(3,5em);
	grid-column: 1;  /* explicit grid placement */
	grid-row: 1;  /* explicit grid placement */
}

So #ttt is given a one-cell grid, and its two children are explicitly placed in that single cell. Thus one sits over the other, as with positioning—but unlike positioning, the outer grid’s size is dictated by the layout of its children. It will resize to surround them, even if we later change the inner grids to be larger (or smaller). We can see this in practice in Figure 4, where the outer grid is outlined in purple in Firefox’s Grid inspector tool.

Screenshot: In the Firefox Grid Inspector, the containing grid spans the full width of the page with a purple border. Occupying about a third of the space on the left side of the container are the two child grids, one with the numbers 1 through 9 in a 3 by 3 grid and the other with tic-tac-toe lines overlaid on top of each other.
Figure 4: The overgridded layout

And that’s it. We could take further steps, like using z-index to layer the board on top of the content (by default, the element that comes later in the source displays on top of the element that comes earlier), but this will suffice for the case we have here.
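If you did want the board’s lines to paint over the content, that z-index step would be a one-line addition (a sketch, assuming the markup above; grid items accept z-index without positioning):

```css
/* Raise the board above the later-in-source content grid. */
#ttt > #board {
	z-index: 1;
}
```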

The advantage is that the content <div>, having only its own contents to worry about, can make use of grid-auto-flow and order to rearrange things. As an example, you can do things like the following and you won’t need all of the :nth-of-type grid item placements from our earlier CSS. Figure 5 shows the result.

/* added to the previous example's CSS */
#ttt > #content {
	grid-auto-flow: column;
}
#ttt > #content > :nth-child(5) {
	order: 2;
}
Screenshot: The overgridded version, where the numbered 3 by 3 grid is overlaid on top of the tic-tac-toe board, continues to work fine if you reorder the cells. In this case, the number 5 has moved from the central grid cell to the bottom right.
Figure 5: Moving #5 to the end and letting the other items reflow into columns


The downside here, and it’s a pretty big one, is that the board and content grids are only minimally aware of each other. The reason the previous example works is that the grid tracks are of fixed size and none of the content is overflowing. Suppose we wanted to make the columns and rows resize based on content, like this:

#content {
	grid-template-columns: repeat(3,min-content);
	grid-template-rows: repeat(3,min-content);
}

This will fall apart quickly, with the board lines not corresponding to the layout of the actual content. At all.

In other words, this overlap technique sacrifices one of Grid’s main strengths: the way grid cells relate to other grid cells. In cases where content size is predictable but ordering is not, it’s a reasonable trade-off to make. In other cases, it probably isn’t a good idea.

Bear in mind that this really only works with layouts where sizes and placements are always known, and where you sometimes have to layer grids on top of one another. If your Filler <b> comes into contact with an implicitly-placed grid item in the same grid it occupies, it will be blocked from stretching. (Explicitly-placed grid items, i.e., those with author-declared values for both grid-row and grid-column, do not block Filler <b>s.)

Why is this useful?

I realize that few of us will need to create a layout that looks like a tic-tac-toe board, so you may wonder why we should bother. We may not want octothorpe-looking structures, but there will be times we want to style an entire column track or highlight a row.

Since CSS doesn’t (yet) offer a way to style grid cells, areas, or tracks directly, we have to stretch elements over the parts we want to style independently from the elements that contain content. There is a discussion about adding this capability directly to CSS in the Working Group’s GitHub repository, where you can add your thoughts and proposals.

But why <b>s? Why?

I use <b>s for the decorative portions of the layout because they’re purely decorative elements. There’s no content to strongly emphasize or to boldface, and semantically a <b> isn’t any better or worse than a <span>. It’s just a hook on which to hang some visual effects. And it’s shorter, so it minimizes page bloat (not that a few characters will make all that much of a difference).

More to the point, the <b>’s complete lack of semantic meaning instantly flags it in the markup as being intentionally non-semantic. It is, in that meta sense, self-documenting.

Is this all there is?

There’s another way to get this precise effect: backgrounds and grid gaps. It comes with its own downsides, but let’s see how it works first. We start by setting a black background for the grid container and white backgrounds for each item in the grid. Then, by using grid-gap: 1px, the black container background shows through between the grid items.

<section id="ttt">
#ttt {
	display: grid;
	grid-template-columns: repeat(3,5em);
	grid-template-rows: repeat(3,5em);
	background: black;
	grid-gap: 1px;
}
#ttt > div {
	background: white;
}

Simple, no Filler <b>s needed. What’s not to like?

The first problem is that if you ever remove an item, there will be a big black block in the layout. Maybe that’s OK, but more likely it isn’t. The second problem is that grid containers do not, by default, shrink-wrap their items. Instead, they fill out the parent element, as block boxes do. Both of these problems are illustrated in Figure 6.

Screenshot: When a grid cell goes missing with the background and grid-gap solution, it leaves a big black box in its place. There's also a giant black box filling the rest of the space to the right of the grid cells.
Figure 6: Some possible background problems

You can use extra CSS to restrict the width of the grid container, but the background showing through where an item is missing can’t really be avoided.

On the other hand, these problems could become benefits if, instead of a black background, you want to show a background image that has grid items “punch out” space, as Jen Simmons did in her “Jazz At Lincoln Center Poster” demo.

A third problem arises when you just want solid grid lines over a varied page background, with that background showing through the grid items. In that case, the grid items (the <div>s in this case) have to have transparent backgrounds, which prevents using grid-gap to reveal a color.

If the <b>s really chap your cerebellum, you can use generated content instead. When you generate before- and after-content pseudo-elements, Grid treats them as actual elements and makes them grid items. Using the same simple markup as in the previous example, we could write this CSS instead:

#ttt {
	display: grid;
	grid-template-columns: repeat(3,5em);
	grid-template-rows: repeat(3,5em);
}
#ttt::before {
	content: "";
	grid-column: 1 / -1;
	grid-row: 2;
	border: 1px solid gray;
	border-width: 1px 0;
}
#ttt::after {
	content: "";
	grid-column: 2;
	grid-row: 1 / -1;
	border: 1px solid gray;
	border-width: 0 1px;
}

It’s the same as with the Filler <b>s, except here the generated elements draw the grid lines.

This approach works just fine for any 3x3 grid like the one we’ve been playing with, but to go any further, you’ll need to get more complicated. Suppose we have a 5x4 grid instead of a 3x3. Using gradients and repeating, we can draw as many lines as needed, at the cost of more complicated CSS.

#ttt {
	display: grid;
	grid-template-columns: repeat(5,5em);
	grid-template-rows: repeat(4,5em);
}
#ttt::before {
	content: "";
	grid-column: 1 / -1;
	grid-row: 1 / -2;
	background:
		linear-gradient(to bottom,transparent 4.95em, 4.95em, black 5em)
		top left / 5em 5em;
}
#ttt::after {
	content: "";
	grid-column: 1 / -2;
	grid-row: 1 / -1;
	background:
		linear-gradient(to right,transparent 4.95em, 4.95em, black 5em)
		top left / 5em 5em;
}

This works pretty well, as shown in Figure 7, assuming you go through the exercise of explicitly assigning the grid cells, as we did with the :nth-of-type rules earlier.

Screenshot: A 5 by 4 grid with evenly spaced borders dividing the cells internally using background gradients.
Figure 7: Generated elements and background gradients

This approach uses linear gradients to construct almost-entirely transparent images that have just 1/20th of an em of black, and then repeats them either to the right or to the bottom. The downward gradient (which creates the horizontal lines) is stopped one grid line short of the bottom of the container, since otherwise there would be a horizontal line below the last row of items. Similarly, the rightward gradient (creating the vertical lines) stops one column short of the right edge. That’s why there are -2 values for grid-column and grid-row.

One downside of this is the same as the Filler <b> approach: since the generated elements are covering most of the background, all the items have to be explicitly assigned to their grid cells instead of letting them flow automatically. The only way around this is to use something like the overgridding technique explored earlier. You might even be able to drop the generated elements if you’re overgridding, depending on the specific situation.

Another downside is that if the font size ever changes, the width of the lines can change. I expect there’s a way around this problem using calc(), but I’ll leave that for you clever cogs to work out and share with the world.
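One possible direction for that (an untested sketch of the calc() idea, swapping the 4.95em stop for a fixed pixel offset so the line stays 1px wide no matter how big an em is):

```css
/* Hypothetical: each 5em tile is transparent except for its last 1px. */
#ttt::before {
	content: "";
	grid-column: 1 / -1;
	grid-row: 1 / -2;
	background:
		linear-gradient(to bottom, transparent calc(5em - 1px), black calc(5em - 1px))
		top left / 5em 5em;
}
```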

The funny part to me is that if you do use this gradient-based approach, you’re filling images into the background of the container and placing items over that—just as we did with Faux Columns.


It’s funny how some concepts echo through the years. More than a decade ago, Dan Cederholm showed us how to fake full-height columns with background images. Now I’m showing you how to fake full-length column and row boxes with empty elements and, when needed, background images.

Over time, the trick behind Faux Columns fell out of favor, and web design moved away from that kind of visual effect. Perhaps the same fate awaits Faux Grid Tracks, but I hope we see new CSS capabilities arise that allow this sort of effect without the need for trickery.

We’ve outgrown so many of our old tricks. Here’s another to use while it’s needed, and to hopefully one day leave behind.

Feedback That Gives Focus

I have harbored a lifelong dislike of feedback. I didn’t like it in sixth grade when a kid on the bus told me my brand new sneakers were “too bright.” And I didn’t like it when a senior executive heard my pitch for a digital project and said, “I hate this idea.” Turns out my sneakers were pretty bright, and my pitch wasn’t the best idea. Still, those experiences and many others like them didn’t help me learn to stop worrying and love the feedback process.

We can’t avoid feedback. Processing ideas and synthesizing feedback is a big part of what we do for a living. I have had plenty of opportunities to consider why both giving and receiving feedback is often so emotionally charged, so challenging to get right.

And here’s what I’ve found to be true.

When a project is preoccupying us at work, we often don’t think about it as something external and abstract. We think about it more like a story, with ourselves in the middle as the protagonist—the hero. That might seem melodramatic, especially if your work isn’t the kind of thing they’d make an inspirational movie about. But there’s research to back this up: humans use stories to make sense of the world and our place within it.

Our work is no different. We create a story in our heads about how far we’ve come on a project and about where we’re going. This makes discussing feedback dangerous. It’s the place where someone else swoops in and hijacks your story.

Speaking personally, I notice that when I’m giving feedback (and feeling frustrated), the story in my head goes like this: These people don’t get it. How can I force them into thinking the same way I do, so that we can fix everything that’s wrong with this project and, in the end, I don’t feel like a failure?

Likewise, when I’m receiving feedback (and feeling defensive), the story goes like this: These people don’t get it. How can I defend our work, so that we keep everything that I like about this project and, in the end, I don’t feel like a failure?

Both of these postures are ultimately counterproductive because they are focused inward. They’re really about avoiding shame. The people giving and receiving feedback end up on opposing sides of the equation, protecting their turf.

But like a good story, good feedback can take us out of ourselves, allowing us to see the work more clearly. It can remove the artificial barrier between feedback giver and receiver, refocusing both on shared goals.

Change your habits around feedback, and you can change the story of your project.

Here are three ways to think about feedback that might help you do just that.

Good feedback helps us understand how we got here

Here’s a story for you. I was presenting some new wireframes for an app to the creative leads on the project. There were a number of stakeholders and advisors on the project, and I had integrated several rounds of their feedback into the harmonious and brilliant vision that I was presenting in this meeting. That’s the way I hoped the story would go, anyway.

But at the end of the meeting, I got some of the best, worst feedback I have ever received: “We’ve gotten into our heads a little bit with this concept. Maybe it should be simpler. Maybe something more like this …” And they handed me a loose sketch on paper to illustrate a new, simpler approach. I had come for sign-off but left with a do-over.

I felt ashamed. How could I have missed that? Even though that feedback was hard to hear, I walked away able to make important changes, which led to a better outcome in the end. Here are the reasons why:

First, the feedback started as a conversation. Conversations (rather than written notes) make it easier to verify assumptions. When you talk face-to-face, you can ask open-ended questions and clarify intent, so you don’t jump to conclusions. Talking helps you find where the trouble is much faster.

The feedback connected the dots between problems in our process so far (trying to reconcile too many competing ideas) and how they led to the current result (an overly complicated product). The person who gave the feedback helped me see how we got to where we were, without assigning blame or shaming me in the process.

The feedback was direct. They didn’t try to mask the fact that the concept wasn’t working. Veiled or vague criticism does more harm than good; the same negativity comes through but without a clear sense of what to do next.

Good feedback invites each person to contribute their best work

No thought, no idea, can possibly be conveyed as an idea from one person to another. … Only by wrestling with the conditions of the problem … first hand … does he think.
John Dewey, Democracy and Education

Here’s another story. I was the producer on an app-based game, and the team was working on a part of the user interface that the player would use again and again. I was convinced that the current design didn’t “feel” right. I kept pushing for a change, against the input of others, and I gave the team some specific feedback about what I wanted to see done. The designers played along and tried it out. But it became clear that my feedback wasn’t helping, and the design director (gently) stepped in and steered us out of my design tangent and back on course.

John Dewey had it right in that quote above; you can’t think for someone else. And that’s exactly what I was doing: giving specific solutions without inviting the team to engage with the problem. And the results were worse for it.

It’s very tempting to use feedback to cajole and control people into doing things your way. But that usually leads to mediocre results. You have a team for a reason: you can’t possibly do everything on your own. Instead, when giving feedback, try to remember that you’re building a team of individual contributors who will work together to make a better end product.

Here are a few feedback habits that help avoid the trap of using feedback to control, and instead, bring out the best in people.

Don’t give feedback until the timing is right

Feedback isn’t useful if it’s given before the work is really ready to be looked at. It’s also not useful to give feedback if you have not taken the time to look at the work and think about it in advance. If you rush either of these, the feedback will devolve into a debate about what could have been, rather than what’s actually there now. That invites confusion, defensiveness, and inefficiency.

Be just specific enough

Good feedback should have enough specifics to clearly identify the problem. But, usually, it’s better not to give a specific solution. The feedback in this example goes too far:

The background behind the menu items is a light blue on a darker blue. This makes it hard to see some options. Change the background fill to white and add a thin, red border around each square. When an option is selected, perhaps the inside border should glow red but not fill in all the way.

Instead, feedback that clearly identifies the problem is probably enough:

The background behind the menu items makes it a little hard for me to see some options. Any way we might make it easier to read?

Give the person whose job it is to solve the problem the room to do just that. They might solve it in a better way than you anticipated.

Admit when you’re wrong

When you acknowledge a mistake openly and without fear, it gives permission for others on the team to do the same. This refocuses energies away from ego-protection and toward problem solving. I chose to admit I got it wrong on that app project I mentioned above; the designers had it right and I told them I was glad they stuck to their guns. Saying that out loud was actually easier than I thought, and our working relationship was better for it.

Good feedback tells a story about the future

In my writing, as much as I could, I tried to find the good, and praise it.
Alex Haley

We’ve said that good feedback connects past assumptions and decisions to current results, without assigning blame. Good feedback also identifies issues in a timely and specific way, giving people room to find novel solutions and contribute their best work.

Lastly, I’ve found that the most useful feedback helps us look beyond the present state of our work and builds a shared vision of where we’re headed.

One of the most overlooked tools for building that shared vision is actually pretty simple: positive feedback. The best positive feedback acknowledges great work that’s already complete, doing so in a way that is future-focused. Its purpose is to point out what we want to do more of as we move forward.

In practice, I’ve found that I can become stingy with positive feedback, especially when it’s early in a project and there’s so much work ahead of us. Maybe this is because I’m afraid that mentioning the good things will distract us from what’s still in need of improvement.

But ironically, the opposite is true: it becomes easier to fix what’s broken once you have something (however small) that you know is working well and that you can begin to build that larger vision around.

So be equally direct about what’s working as you are about what isn’t, and you’ll find it becomes easier to rally a team around a shared vision for the future. The first signs of that future can be found right here in the present.

Like Mr. Haley said: find the good and praise it.

Oh and one more thing: say thank you.

Thank people for their contributions. Let me give that a try right now:

It seemed wise to get some feedback from others when writing about feedback. So thanks to everyone in the PBS KIDS family of producers who generously shared their thoughts and experience with me in preparation for this article. I look forward to hearing your feedback.

Planning for Accessibility

A note from the editors: We’re pleased to share an excerpt from Chapter 3 (“Planning for Accessibility”) of Laura Kalbag’s new book, Accessibility for Everyone, available now from A Book Apart.

Incorporating accessibility from the beginning is almost always easier, more effective, and less expensive than making accessibility improvements as a separate project. In fact, building accessibility into your project and processes has a wealth of business benefits. If you’re looking to make the case for accessibility—to yourself, to coworkers, or to bosses and clients—you might start here:

  • Findability and ease of use: In the broadest terms, accessibility can make it easier for anyone to find, access, and use a website successfully. By ensuring better usability for all, accessibility boosts a site’s effectiveness and increases its potential audience.
  • Competitive edge: The wider your audience, the greater your reach and commercial appeal. When a site is more accessible than other sites in the same market, it can lead to preferential treatment from people who struggled to use competitors’ sites. If a site is translated, or has more simply written content that improves automated translation, increased accessibility can lead to a larger audience by reaching people who speak other languages.
  • Lower costs: Accessible websites can cut costs in other areas of a business. On a more accessible site, more customers can complete more tasks and transactions online, rather than needing to talk to a representative one-to-one.
  • Legal protection: In a few countries, an accessible site is required by law for organizations in certain sectors—and organizations with inaccessible sites can be sued for discrimination against people with disabilities.

Once you’ve made the case for incorporating accessibility into your work, the next step is to integrate an accessibility mindset into your processes. Include accessibility by default by giving it proper consideration at every step in a product’s lifecycle.

Building Your Team

Web accessibility is the responsibility of everyone who has a hand in the design of a site. Design includes all of the decisions we make when we create a product: not just the pretty bits, but the decisions about how it works and who it’s for. This means everybody involved in the project is a designer of some sort.

Each specialist is responsible for a basic understanding of their work’s impact on accessibility, and on their colleagues’ work. For example, independent consultant Anne Gibson says that information architects should keep an eye on the markup:

“We may or may not be responsible for writing the HTML, but if the developers we’re working with don’t produce semantic structure, then they’re not actually representing the structures that we’re building in our designs.”
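Gibson’s point is easy to see in markup. A hypothetical sketch (the content and class names are invented):

```html
<!-- Flat markup: the hierarchy exists only visually -->
<div class="title">Opening hours</div>
<div>We're open every weekday.</div>

<!-- Semantic markup: the same hierarchy survives for assistive technology -->
<section>
  <h2>Opening hours</h2>
  <p>We're open every weekday.</p>
</section>
```

A screen reader can announce the second version as a heading and let users jump to it; the first version is just two anonymous boxes.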

Leadership and support

While we should all be attentive to how accessibility impacts our specialism, it’s important to have leadership to help determine priorities and key areas where the product’s overall accessibility needs improvement. Project manager Henny Swan (user experience and design lead at the Paciello Group, and previously of the BBC) recommends that accessibility be owned by product managers. The product managers must consider how web accessibility affects what the organization does, understand the organization’s legal duties, and consider the potential business benefits.

Sometimes people find themselves stuck within a company or team that doesn’t value accessibility. But armed with knowledge and expertise about accessibility, we can still do good work as individuals, and have a positive effect on the accessibility of a site. For example, a designer can ensure all the background and foreground text colors on their site are in good contrast, making text easier to distinguish and read.
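Contrast is also easy to verify programmatically. A minimal sketch of the WCAG 2.x contrast-ratio calculation (the formula comes from the guidelines; the function names are our own):

```javascript
// Linearize one sRGB channel (0-255), per the WCAG definition
// of relative luminance.
function channelLuminance(c) {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color.
function relativeLuminance([r, g, b]) {
  return 0.2126 * channelLuminance(r)
       + 0.7152 * channelLuminance(g)
       + 0.0722 * channelLuminance(b);
}

// Contrast ratio between two colors: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(a, b) {
  const [hi, lo] = [relativeLuminance(a), relativeLuminance(b)]
    .sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

contrastRatio([0, 0, 0], [255, 255, 255]); // 21, the maximum possible ratio
```

WCAG AA asks for at least 4.5:1 for normal body text, so a check like this can run as part of a style guide’s test suite.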

Unfortunately, without the support and understanding of our colleagues, the accessibility of a site can easily be let down in other areas. While the colors could be accessible, if another designer has decided that the body text should be set at 12 pixels, the content will still be hard to read.

When accessibility is instituted as a company-wide practice, rather than merely observed by a few people within a team, it will inevitably be more successful. When everybody understands the importance of accessibility and their role in the project, we can make great websites.

Professional development

When you’re just starting out with accessibility, people in your organization will need to learn new skills and undertake training to do it well.

Outside experts can often provide thorough training, with course material tailor-made to your organization. Teams can also develop their accessibility skills by learning the basics through web- and book-based research, and by attending relevant conferences and other events.

Both formal training and independent practice will cost time away from other work, but in return you’ll get rapid improvements in a team’s accessibility skills. New skills might mean initially slower site development and testing while people are still getting their heads around unfamiliar tools, techniques, and ways of thinking. Don’t be disheartened! It doesn’t take long for the regular practice of new skills to become second nature.

You might also need to hire in outside expertise to assist you in particular areas of accessibility; it’s worth considering the capabilities of your team during budgeting and deciding whether additional training and help are needed. Especially when just starting out, many organizations hire consultants or new employees with accessibility expertise to help with research and testing.

When you’re trying to find the right expert for your organization’s needs, avoid just bashing “accessibility expert” into a search engine and hoping for good luck. Accessibility blogs and informational websites (see the Resources section) are probably the best place to start, as you can often find individuals and organizations who are great at teaching and communicating accessibility. The people who run accessibility websites often provide consultancy services, or will have recommendations for the best people they know.

Scoping the Project

At the beginning of a project, you’ll need to make many decisions that will have an impact on accessibility efforts and approaches, including:

  • What is the purpose of your product?
  • Who are the target audiences for your product? What are their needs, restrictions, and technology preferences?
  • What are the goals and tasks that your product enables the user to complete?
  • What is the experience your product should provide for each combination of user group and user goal?
  • How can accessibility be integrated during production?
  • Which target platforms, browsers, operating systems, and assistive technologies should you test the product on?

If you have answers to these questions, possibly recorded more formally in an accessibility policy (which we’ll look at later in this chapter), you’ll have something to refer to when making design decisions throughout the creation and maintenance of the product.

Keep in mind that rigid initial specifications and proposals can cause problems when a project involves research and iterative design. Being flexible during the creation of a product will allow you to make decisions based on new information, respond to any issues that arise during testing, and ensure that the launched product genuinely meets people?s needs.

If you’re hiring someone outside your organization to produce your site, you need to convey the importance of accessibility to the project. Whether you’re a project manager writing requirements, a creative agency writing a brief, or a freelance consultant scoping your intent, making accessibility a requirement will ensure there’s no ambiguity. Documenting your success criteria and sharing them with other people can help everyone understand your aims, both inside and outside your organization.


Accessibility isn’t a line item in an estimate or a budget; it’s an underlying practice that affects every aspect of a project.

Building an accessible site doesn’t necessarily cost more money or time than an inaccessible site, but some of the costs are different: it costs money to train your team or to build alternative materials like transcripts or translations. It’s wise to consider all potential costs from the beginning and factor them into the product budget so they’re not a surprise or considered an “extra cost” when they could benefit a wide audience. You wouldn’t add a line item to make a site performant, so don’t do it for accessibility either.

If you’ve got a very small budget, rather than picking and choosing particular elements that leave some users out in favor of others, consider the least expensive options that enable the widest possible audience to access your site. For example, making a carousel that can be manipulated using only the keyboard will only benefit people using keyboard navigation. On the other hand, designing a simpler interface without a carousel will benefit everyone using the site.

Ultimately, the cost of accessibility depends on the size of the project and team, and on whether you’re retrofitting an existing product or creating a new one. The more projects you work on, the better you’ll be able to estimate the impact and costs of accessibility.

Want to read more?

This excerpt from Accessibility for Everyone will help you get started. Order the full copy today, as well as other excellent titles from A Book Apart.

Cover from Accessibility for Everyone

Ten Extras for Great API Documentation

If you manage to create amazing API documentation and ensure that developers have a positive experience implementing your API, they will sing the praises of your product. Continuously improving your API documentation is an investment, but it can have a huge impact. Great documentation builds trust, differentiates you from your competition, and provides marketing value.

I’ve shared some best practices for creating good API documentation in my article “The Ten Essentials for Good API Documentation.” In this article, I delve into some research studies and show how you can both improve and fine-tune different aspects of your API documentation. Some of these extras, like readability, are closer to essentials, while others, like personality, are more of a nice-to-have. I hope they give you some ideas for building the best possible docs for your product.

Overview page

Whoever visits your API documentation needs to be able to decide at first glance whether it is worth exploring further. You should clearly show:

  • what your API offers (i.e., what your products do);
  • how it works;
  • how it integrates;
  • and how it scales (i.e., usage limits, pricing, support, and SLAs).
Screenshot: The homepage of Spotify's API documentation.
Spotify’s API documentation clearly states what the API does and how it works, and it provides links to guides and API references organized in categories.

An overview page targets all visitors, but it is especially helpful for decision-makers. They have to see the business value: explain to them directly why a company would want to use your API.

Developers, on the other hand, want to understand the purpose of the API and its feature set, so they tend to turn to the overview page for conceptual information. Show them the architecture of your API and the structure of your docs. Include an overview of the different components and an introduction to the request-response behavior (i.e., how to integrate, how to send requests, and how to process responses). Provide information on the platforms the API runs on (e.g., Java) and possible interactions with other platforms.

As the study “The role of conceptual knowledge in API usability” found, without conceptual knowledge, developers struggle to formulate effective queries and to evaluate the relevance or meaning of content they find. That’s why API documentation should include not only detailed examples of API use, but also thorough introductions to the concepts, standards, and ideas in an API’s data structures and functionality. The overview page can be an important component in fulfilling this role.

Screenshot: Braintree's API overview page has an illustration showing how it works.
Braintree’s API overview page provides a clear overview of their SDKs, along with a visual step-by-step explanation of how their API works.


For some developers, examples play a more important role in getting started with an API than the explanations of calls and parameters.

A recent study, “Application Programming Interface Documentation: What Do Software Developers Want?,” explored how software developers interact with API documentation: what their goals are, how they learn, where they look for information, and how they judge the quality of API docs.

The role of examples

The study found that after conducting an initial overview of what the API does and how it works, developers approach learning about the API in two distinct ways: some follow a top-down approach, where they try to build a thorough understanding of the API before starting to implement specific use cases, while others prefer to follow a bottom-up approach, where they start coding right away.

This latter group has a code-oriented learning strategy; they start learning by trying and extending code examples. Getting into an API is most often connected with a specific task. They look for an example that has the potential to serve as a basis for solving their problem, but once they’ve found the solution they were looking for, they usually stop learning.

Examples are essential for code-oriented learners, but all developers benefit from them. The study showed that developers often trust examples more than documentation, because if examples work, they can’t be outdated or wrong. Developers often struggle with where to start and how to begin with a new API; examples can serve as good entry points in this case. Many developers can grasp information more easily from code than from text, and they can reuse code from examples in their own implementations. Examples also play other roles that are far from obvious: they automatically convey information about dependencies and prerequisites, they help identify relevant sections in the documentation when developers are scanning the page, and they intuitively show developers how code that uses the API should look.

Improve your examples

Because examples are such a crucial component in API documentation, better examples mean better docs.

To ensure the quality of your examples, make sure they are complete, professionally written, and working correctly. Because examples convey so much more than the actual use case, follow the style guidelines of the respective community and show best-practice approaches. Add brief, informative explanations: although examples can be self-explanatory, comments included with sample code help comprehension.

Add concrete, real-life examples whenever you can. If you don’t have real examples, make sure they at least look real: use realistic variable names and functions instead of abstract ones.

When including examples, you have a variety of formats and approaches to choose from: auto-generated examples, sample applications, client libraries, and examples in multiple languages.

Auto-generated examples

Autodoc tools like Swagger Codegen and API Blueprint automatically generate documentation from your source code and keep it up to date as the code changes. Use them to generate reference libraries and sample code snippets, but be aware that what you produce this way is skeleton documentation, not fleshed-out docs. You will still have to add explanations, conceptual information, quick-start guides, and tutorials, and you should still pay attention to other aspects like UX and good-quality copy.

On the Readme blog, they explore autodoc tools and their limitations in more depth through a couple of real-world examples.

Sample applications

Working applications that use the API are a great way to show how everything works together and how the API integrates with different platforms and technologies. They are different from sample code snippets, because they are stand-alone solutions that show the big picture. As such, they are helpful to developers who would like to see how a full implementation works and to gain an overall understanding of how everything in the API ties together. They are also real products that showcase the services and quality of your API to decision makers. Apple’s iOS Developer Portal offers buildable, executable source examples of how to accomplish a task using a particular technology in a wide variety of categories.

Client libraries

Client libraries are chunks of code that developers can add to their own development projects. They are usually available in various programming languages, and cover basic functionality for an application to be able to interact with the API. Providing them is an extra feature that requires ongoing investment from the API provider, but doing so helps developers jump-start their use of the API. GitHub follows the practical approach of offering client libraries for the languages that are used the most with their API, while linking to unsupported, community-built libraries written in other, less popular languages.

Examples in multiple languages

APIs are platform- and language-independent by nature. Developers can use an API’s services with the language of their choice, but this means good documentation should cover at least the most popular languages used with that particular API (e.g., C#, Java, JavaScript, Go, Objective-C, PHP, Python, Ruby, and Swift). Not only should you provide sample code and sample applications in different languages, but these samples should also reflect the best-practice approach for each language.


API documentation is a tool that helps developers and other stakeholders do their job. You should adapt it to the way people use it, and make it as easy to use as possible. Consider the following factors:

  • Copy and paste: Developers copy and paste code examples to use them as a starting point for their own implementation. Make this process easier with either a copy button next to relevant sections or by making sections easy to highlight and copy.
  • Sticky navigation: When implemented well, fixing the table of contents and other navigation to the page view can prevent users from getting lost and having to scroll back up.
  • Clicking: Minimize clicking by keeping related topics close to each other.
  • Language selector: Developers should be able to see examples in the language of their choice. Put a language selector above the code examples section, and make sure the page remembers what language the user has selected.
  • URLs: Single-page views can result in very long pages, so make sure people can link to certain sections of the page. If, however, a single-page view doesn’t work for your docs, don’t sweat it: just break different sections into separate pages.
    Screenshot: A specific section of the Stripe API documents with the location bar showing that the URL has changed.
    Great usability: Stripe’s API documentation changes the URL dynamically as you scroll through the page.

    Another best practice from Stripe: the language selector also changes the URL, so URLs link to the right location in the right language.

  • Collaboration: Consider allowing users to contribute to your docs. If you see your users edit your documentation, it indicates there might be room for improvement, whether in those parts of your docs or even in your code. Additionally, your users will see that issues are addressed and the documentation is frequently updated. One way to facilitate collaboration is to host your documentation on GitHub, but be aware that this will limit your options for interactivity, as GitHub hosts static files.
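The deep-linking advice above depends on every section having a stable fragment id. A minimal sketch of generating ids from section titles (the helper name is ours, not a particular vendor’s implementation):

```javascript
// Turn a section heading into a URL-friendly fragment id,
// e.g. "Auto-generated examples" -> "auto-generated-examples".
function slugify(heading) {
  return heading
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9\s-]/g, '') // drop punctuation
    .replace(/\s+/g, '-')         // spaces become hyphens
    .replace(/-+/g, '-');         // collapse runs of hyphens
}

slugify('Auto-generated examples'); // "auto-generated-examples"
```

With stable ids in place, the table of contents, the language selector, and the scroll position can all write the same fragment into the URL.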


Providing an option for users to interact with your API through the documentation will greatly improve the developer experience and speed up learning.

First, provide a working test API key or, even better, let your users log in to your documentation site and insert their own API key into sample commands and code. This way they can copy, paste, and run the code right away.
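One way to sketch that personalization, assuming docs snippets share a fixed placeholder (the placeholder and function name here are invented):

```javascript
// Swap a well-known placeholder for the signed-in user's own key,
// so every sample on the page is runnable as-is.
function personalizeSample(sample, apiKey) {
  return sample.split('YOUR_API_KEY').join(apiKey);
}

personalizeSample(
  'curl -H "Authorization: Bearer YOUR_API_KEY" https://api.example.com/v1/items',
  'sk_test_123'
);
// 'curl -H "Authorization: Bearer sk_test_123" https://api.example.com/v1/items'
```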

As a next step, allow your users to make API calls directly from the site itself. For example, let them query a sample database, modify their queries, and see the results of these changes.

A more sophisticated way to make your documentation interactive is to provide a sandbox: a controlled environment where users can test calls and functions against known resources, manipulating data in real time. Developers learn through the experience of interacting with your API in the sandbox, rather than by switching between reading your docs and trying out code examples themselves. Nordic APIs explains the advantages of sandboxing, discusses the role of documentation in a sandboxed environment, and shows a possible implementation. To see a sandbox in action, try out the one on Dwolla’s developer site.


The study exploring how software developers interact with API documentation also explored how developers look for help. In a natural work environment, they usually turn to their colleagues first. Then, however, many of them tend to search the web for answers instead of consulting the official product documentation. This means you should ensure your API documentation is optimized for search engines and will turn up relevant results in search queries.

To make sure you have the necessary content available for self-support, include FAQs and a well-organized knowledge base. For quick help and human interaction, provide a contact form and, if you have the capacity, a help-desk solution right from your docs, e.g., a live chat with support staff.

The study also pointed to the significant role Stack Overflow plays: most developers interviewed mentioned the site as a reliable source of self-help. You can also support your developers’ self-help efforts and sense of community by adding your own developer forum to your developer portal or by providing an IRC or Slack channel.


As with all software, APIs change and are regularly updated with new features, bug fixes, and performance improvements.

When a new version of your API comes out, you have to inform the developers working with your API about the changes so they can react to them accordingly. A changelog, also called release notes, includes information about current and previous versions, usually ordered by date and version number, along with associated changes.
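Ordering entries by version number takes a little care, since version strings don’t sort alphabetically (“1.10.0” should outrank “1.2.0”). A sketch, with invented entries:

```javascript
// Compare two semantic version strings so that newer sorts first.
function compareVersions(a, b) {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < 3; i++) {
    if ((pa[i] || 0) !== (pb[i] || 0)) return (pb[i] || 0) - (pa[i] || 0);
  }
  return 0;
}

const entries = [
  { version: '1.2.0', notes: 'Added webhook retries' },
  { version: '2.0.0', notes: 'BREAKING: renamed the /orders endpoint' },
  { version: '1.10.0', notes: 'Performance improvements' },
];

entries.sort((e1, e2) => compareVersions(e1.version, e2.version));
// Order: 2.0.0, 1.10.0, 1.2.0 -- note 1.10.0 correctly outranks 1.2.0
```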

If a new version includes changes that can break existing uses of the API, put warnings at the top of the relevant changelogs, and even at the top of your release notes page. You can also bring attention to these changes by highlighting or marking them permanently.

To keep developers in the loop, offer an RSS feed or newsletter subscription where they can be notified of updates to your API.

Besides the practical aspect, a changelog also serves as a trust signal that the API and its documentation are actively maintained, and that the information included is up to date.

Analytics and feedback

You can do some research by getting to know your current and potential clients, talking to people at conferences, exploring your competition, and even conducting surveys. Still, you will have to go with a lot of assumptions when you first build your API docs.

When your docs are up, however, you can start collecting usage data and feedback to learn how you can improve them.

Find out about the most popular use cases through analytics. See which endpoints are used the most and make sure to prioritize them when working on your documentation. Get ideas for tutorials, and see which use cases you haven’t yet covered with a step-by-step walkthrough, from developer community sites like Stack Overflow or your own developer forums. If a question regarding your API pops up on these channels and you see people actively discussing the topic, check whether it’s something you need to explain in your documentation.

Collect information from your support team. Why do your users contact them? Are there recurring questions they can’t find answers for in the docs? Improve your documentation based on feedback from your support team and see if you have been successful: have users stopped asking the questions you answered?

Listen to feedback and evaluate how you could improve your docs based on it. Feedback can come through many different channels: workshops, training sessions, blog posts and comments about your API, conferences, interviews with clients, or usability studies.


Readability is a measure of how easily a reader can understand a written text: it includes visual factors like the look of fonts, colors, and contrast, and contextual factors like the length of sentences, wording, and jargon. People consult documentation to learn something new or to solve a problem. Don’t make the process harder for them with text that is difficult to understand.

While generally you should aim for clarity and brevity from the get-go, there are some specific aspects you can work on to improve the readability of your API docs.

Audience: Expect that not all of your users will be developers and that even developers will have vastly different skills and background knowledge about your API and business domain. To cater to the different needs of different groups in your target audience, explain everything in detail, but provide ways for people already familiar with the functionality to quickly find what they are looking for, e.g., add a logically organized quick reference.

Wording: Explain everything as simply as you can. Use short sentences, and make sure to be consistent with labels, menu names, and other textual content. Include a clear, straightforward explanation for each call. Avoid jargon if possible, and if not, link to domain-related definitions the first time you use them. This way you can make sure that people unfamiliar with your business domain get the help they need to understand your API.

Fonts: Both the font size and the font type influence readability. Have short section titles and use title case to make them easier to scan. For longer text, use sans serif fonts. In print, serif fonts make reading easier, but online, serif characters can blur together. Opt for fonts like Arial, Helvetica, Trebuchet, Lucida Sans, or Verdana, which was designed specifically for the web. Contrast plays an important role as well: the higher the contrast, the easier the text is to read. Consider using a slightly larger font size and a different typeface for code than for text to help your users’ visual orientation when switching back and forth between their code editor and your documentation.

Structure: API documentation should cater to newcomers and returning visitors alike (e.g., developers debugging their implementation). A logical structure that is easy to navigate and that allows for quick reference works for both. Have a clear table of contents and an organized list of resources, and make sections, subsections, error cases, and display states directly linkable.

Screenshot: When the cursor hovers over specific arguments in Stripe's API, a linked icon appears.
Great usability: Linkability demonstrated on Stripe’s API documentation.

Scannability: As Steve Krug claims in his book about web usability, Don’t Make Me Think, one of the most important facts about web users is that they don’t read, they scan. To make text easier to scan, use short paragraphs, highlight relevant keywords, and use lists where applicable.

Accessibility: Strive to make your API docs accessible to all users, including users who access your documentation through assistive technology (e.g., screen readers). Be aware that screen readers may often struggle with reading code and may handle navigation differently, so explore how screen readers read content. Learn more about web accessibility, study Web Content Accessibility Guidelines, and do your best to adhere to them.


You’ve worked hard to get to know your audience and follow best practices to leave a good impression with your API docs. Now, as a finishing touch, you can make sure your docs “sound” and look in tune with your brand.

Although API documentation and technical writing in general don’t provide much room for experimentation in tone and style, you can still instill some personality into your docs:

  • Use your brand book and make sure your API docs follow it to the letter.
  • A friendly tone and simple style can work wonders. Remember, people are here to learn about your API or solve a problem. Help them by talking to them in a natural manner that is easy to understand.
  • Add illustrations that help your readers understand any part of your API. Show how different parts relate to each other; visualize concepts and processes.
  • Select your examples carefully so that they reflect on your product the way you want them to. Playful implementations of your API will create a different impression than more serious or enterprise use cases. If your brand allows, you can even have some fun with examples (e.g., funny examples and variable names), but don’t go overboard.
  • You can insert some images (beyond illustrations) where applicable, but make sure they add something to your docs and don’t distract readers.

Think developer portal, and beyond

Although where you draw the line between API documentation and developer portal is still up for debate, most people working in technical communication seem to agree that a developer portal is an extension of API documentation. Originally, API documentation meant strictly the reference docs only, but then examples, tutorials, and guides for getting started became part of the package; yet we still called them API docs. As the market for developer communication grows, providers strive to extend the developer experience beyond API documentation to a full-fledged developer portal.

In fact, some of the ideas discussed above, like a developer forum or sandboxes, already point in the direction of building a developer portal. A developer portal is the next step in developer communication: besides giving developers all the support they need, you start building a community. Developer portals can include support beyond docs, like a blog or videos. If it fits into the business model, they can also contain an app store where developers submit their implementations and the store provides a framework for them to manage the whole sales process. Portals connected to an API often also contain a separate area with landing pages and showcases where you can directly address other stakeholders, such as sales and marketing.

Even if you’re well into building your developer portal, you can still find ways to learn more and extend your reach. Attend and present at conferences like DevRelCon, Write The Docs, or API The Docs to get involved in developer relations. Use social media: tweet, join group discussions, or send a newsletter. Explore the annual Stack Overflow Developer Survey to learn more about your main target audience. Organize code and documentation sprints, training sessions, and workshops. Zapier has a great collection of blogs and other resources you can follow to keep up with the ever-changing API economy; you will surely find your own sources of inspiration as well.

I hope “The Ten Essentials for Good API Documentation” and this article have given you valuable insight into creating and improving your API documentation, and inspire you to get started or keep going.

What the Failure of New Coke Can Teach Us About User Research And Design

In the late 1970s, Pepsi was running behind Coca-Cola in the competition to be the leading cola. But then Pepsi discovered that in blind taste tests, people actually preferred the sweeter taste of Pepsi. To spread the word, Pepsi ran a famous advertising campaign, called the Pepsi Challenge, which showed people tasting the two brands of cola while not knowing which was which. They chose Pepsi every time.

As Pepsi steadily gained market share in the early 1980s, Coca-Cola ran the same test and found the same result: people simply preferred Pepsi when tasting the two side by side. So, after conducting extensive market research, Coca-Cola’s solution was to create a sweeter version of its famous cola, New Coke. In taste tests, people preferred the new formula of Coke to both the regular Coke formula and to Pepsi.

Despite this success in tests, when the company brought New Coke to market, customers revolted. New Coke turned out to be one of the biggest blunders in marketing history. Within months, Coke returned its original formula, branded as “Coca-Cola Classic,” to the shelves.

In the end, sales showed that people preferred Coke Classic. But Coca-Cola’s research predicted just the opposite. So what went wrong?

The tests had people drink one or two sips of each cola in isolation and then decide which they preferred based on that. The problem is, that’s not how people drink cola in real life. We might have a can with a meal. And we almost never drink just one or two sips. User research is just as much about the way the research is conducted as it is about the product being researched.

For the purposes of designing and researching digital services and websites, the point is that people can behave differently in user research than they do in real life. We need to be conscious of the way we design and run user research sessions, and of the way we interpret the results, to take real-life behavior into account and to avoid interpretations that lead to a lot of unnecessary work and a negative impact on the user experience.

To show how this applies to web design, I'd like to share three examples taken from a project I worked on. The project was for a government digital service that civil servants use to book and manage appointments. The service would replace a third-party booking system. We were concerned with three user needs:

  • booking an appointment;
  • viewing the day's appointments;
  • and canceling an appointment.

Booking an appointment

We needed to give users a way to book an appointment, which consisted of selecting a location, an appointment type, and a person to see. The order of these fields matters: not all appointment types can be conducted at every location, and not all personnel are trained to conduct every appointment type.

Screenshot of an app to book an appointment
The first iteration of the booking journey, with three select boxes in one page.

Our initial design had three select boxes in one page. Selecting an option in the first select box would cause the values in the subsequent boxes to be updated, but because it was just a prototype we didn't build this into the test. Users selected an option from each of the select boxes easily and quickly. But afterwards, we realized that the test didn't really reflect how the interface would actually work.

In reality, the select boxes would need to be updated dynamically with AJAX, which would slow things down drastically and affect the overall experience. We would also need a way to indicate that something was loading, like a loading spinner. This feedback would also need to be perceivable to visually impaired users relying on a screen reader.
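One common way to make that kind of loading feedback perceivable to screen reader users is an ARIA live region. This is a hypothetical sketch, not the markup we shipped:

```html
<!-- Hypothetical sketch: a live region announces the loading state.
     The container is present in the page from the start and is empty
     by default; scripting inserts a status message into it while the
     dependent options load, and screen readers announce the change. -->
<label for="location">Location</label>
<select id="location" name="location">
  <!-- options -->
</select>

<div role="status" aria-live="polite">
  <!-- e.g. inserted by script: <p>Loading appointment types…</p> -->
</div>
```

The region has to exist before the message is injected; adding both the container and the message at once is a common reason live regions fail to announce.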

That's not all: each select box would need its own submit button, because submitting a form onchange is an inclusive design anti-pattern. This would also cover scenarios where JavaScript fails; otherwise, users would be left with a broken interface. That said, we weren't thrilled with the idea of adding more submit buttons. One call to action is often simpler and clearer.

As mentioned earlier, the order in which users select options matters, because completing each step causes the subsequent steps to be updated. For production, if the user selected options in the wrong order, things could break. However, the prototype didn't reflect this at all: users could select anything, in any order, and proceed regardless.

Users loved the prototype, but it wasn't something we could actually give them in the end. To test this fairly and realistically, we would need to do a lot of extra work. What looked innocently like a simple prototype gave us misleading results.

Our next iteration followed the One Thing Per Page pattern: we split each form field onto a separate screen. With no AJAX required, each page needed only a single submit button, and the related accessibility considerations went away too. The new flow also stopped users from answering questions in the wrong order.
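One step in that kind of journey is just an ordinary form that posts back to the server. This is an illustrative sketch (the field names and URLs are made up, not the production markup):

```html
<!-- Hypothetical sketch of a single One Thing Per Page step: one question,
     one submit button, a plain POST. The server uses this answer to filter
     the options shown on the next step, so no AJAX is needed and the page
     works even if JavaScript fails. -->
<form method="post" action="/book-appointment/location">
  <label for="location">Where is the appointment?</label>
  <select id="location" name="location">
    <option value="london">London</option>
    <option value="leeds">Leeds</option>
  </select>
  <button type="submit">Continue</button>
</form>
```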

Screenshot showing an app to book appointments split across three screens
The second iteration of the booking journey, with a separate page for each step.

This tested really well. The difference was that we knew the prototype was realistic, meaning users would get a similar experience when the feature went into production.

Viewing the day's appointments

We needed to give users a way to view their schedule. We laid out the appointments in a table, where each row represented an appointment. Any available time was demarcated by the word "Available." Appointments were linked, but available times were not.

Screenshot of an app view to display the day's appointments
The schedule page to view the day's appointments.

In the first round of research, we asked users to look at the screen and give feedback. They told us what they liked, what they didn't, and what they would change. Some participants told us they wanted their availability to stand out more. Others said they wanted color-coded appointment types. One participant even said the screen looked boring.

During the debrief, we realized they wanted color-coded appointments because the booking system they had become accustomed to had them. But that system used color because its layout squeezed so much information onto the screen that it was hard to glean anything useful from it otherwise.

We weren't convinced that the feedback was valuable. Accommodating these changes would have meant breaking existing patterns, which was something we didn't want to do without being sure.

We also weren't happy about making availability more prominent, as this would make the appointments visually weaker. That is, fixing this problem could inadvertently create another, equally serious problem. We wanted to let the content do the work instead.

The real problem, we thought, was asking users their opinion first, instead of giving them tasks to complete. People can be resistant to change, and the questions we asked were about their opinions, not about how to accomplish what they needed to do. Ask anyone their opinion and they'll have one. Like the Coca-Cola and Pepsi taste tests, what people feel and say in user research can be quite different from how they behave in real life.

So we tested the same design again. But this time, we started each session by asking users questions that the schedule page should be able to answer. For example, we asked "Can you tell me when you're next available?" and "What appointment do you have at 4 p.m.?"

Users looked at the screen and answered each question instantly. Only afterward did we ask users how they felt about it. Naturally, they were happy, and they made no comments that would require major changes. Somewhat amusingly, this time one participant said they wanted their availability to be less prominent because they didn't want their manager seeing they had free time.

If we hadn't changed our approach to research, we might have spent a lot of time designing something new that would have had no value for users.

Canceling an appointment

The last feature involved giving users a way to cancel an appointment. As we were transitioning away from the third-party system, there was one situation where an appointment could have been booked in both that system and our application; the details don't really matter. What is important is that we asked users to confirm they understood what they needed to do.

Screenshot showing an app page to confirm a cancellation
The confirm cancellation page.

The first research session had five participants. One of those participants read the prompt but missed the checkbox and proceeded to submit the form. At that point, the user was taken to the next screen.

We might have been tempted to explore ways to make the checkbox more prominent, which in theory would reduce the chance of users missing it. But then again, the checkbox pattern was used across the service and had gone through many rounds of usability and accessibility testing; we knew that the visual design of the checkbox wasn't at fault.

The problem was that the prototype didn't have form validation. In production, users would see an error message, which would stop them from proceeding. We could have spent time adding form validation to the prototype, but there is a balancing act between the speed at which you create a throwaway prototype and how accurate and useful the results it gives you will be.
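The check itself is small, which is part of why it is tempting to skip in a prototype. Here is a minimal sketch of the kind of server-side rule that was missing, assuming a hypothetical `validateCancellation` helper and field name (`confirmUnderstood`), neither of which comes from the actual service:

```javascript
// Hypothetical sketch: the validation rule the prototype lacked.
// Takes the submitted form values and returns a list of errors;
// an empty list means the cancellation may proceed.
function validateCancellation(form) {
  const errors = [];
  if (!form.confirmUnderstood) {
    errors.push({
      field: 'confirmUnderstood',
      message:
        'Confirm you understand the appointment must also be cancelled in the old system',
    });
  }
  return errors;
}

// On submit: if there are errors, re-render the same page with the
// message instead of moving the user on to the next screen.
const errors = validateCancellation({ confirmUnderstood: false });
if (errors.length > 0) {
  // show errors[0].message next to the checkbox
}
```

In production the same rule would run on the server, so the user who missed the checkbox would have been stopped with an error message rather than taken to the next screen.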


Coca-Cola wanted its world-famous cola to test better than Pepsi. As soon as tests showed that people preferred its new formula, Coca-Cola ran with it. But like the design of the schedule page, it wasn't the product that was wrong; it was the research.

Although we weren't in danger of making the marketing misstep of the century, the design of our tests could have influenced our interpretation of the results in a way that created a lot more work for a negative return. That's a lot of wasted time and a lot of wasted money.

Time with users is precious: we should put as much effort and thought into the way we run research sessions as we put into designing the experience itself. That way, users get the best experience and we avoid doing unnecessary work.