At the beginning of the mobile age, accessing a website on your phone or tablet often meant finding a message cheerfully advising you to look at the site on a “real” computer to give you the full experience you deserved. Thankfully, the days of device siloing are behind us, but that doesn’t mean that we don’t face challenges on touch devices. People increasingly don’t step away from their phone or tablet to do “real” work at a desktop or laptop computer. It’s not uncommon anymore for individuals to use a touch device with a hardware keyboard, and using hybrid devices is also on the rise. If we think beyond standard touch gestures, we help users of all kinds work more effectively and efficiently.

With keyboard or keyboard-equivalent support, you can use your tablet with a Bluetooth keyboard when you travel, switch seamlessly between “touch” and “keyboard” interactions on a hybrid device, or use another input device, like a joystick, with a screen reader or other assistive technology.

Keyboard support means you have the freedom to use your hardware in the way that is most efficient and effective for you, which is really the whole point of inclusive design.

Options are better for everyone

But how do we get to keyboard accessibility for touch interfaces? At first glance, WCAG 2.0 keyboard support recommendations for mobile may not seem like true mobile guidelines. However, if you look at the goal of the guidelines (make things work at their most basic level), they offer a solid benchmark for bringing interaction accessibility to touch devices like phones and tablets.

As the line between desktop and mobile continues to blur, it’s not just about pushing buttons on a literal keyboard, but about a more fundamental understanding of how keyboard-based interactions benefit users who are more device-agnostic as well as those who rely on assistive supports.

When teams first start to address accessibility, solving for screen readers seems like the biggest technical challenge, and it can feel like solving for screen readers solves everything else. It doesn’t: supporting the keyboard actually makes a bigger accessibility impact.

Lucky for us, the principles that WCAG 2.0 provides for keyboard support can fit into the larger UX picture for supporting interactions across devices, creating experiences that are grounded in how users want to play and work in ways that best meet their individual preferences and needs.

On the desktop, keyboard interactions provide crucial support for users who can’t use a mouse, including those who use screen readers and refreshable braille devices. Even without a literal hardware keyboard, it’s not a huge leap to think about default touch gestures mapping to mouse interactions and non-touch inputs mapping to keyboard interactions. More than this, the style of interaction that keyboard support implies actually provides a framework for thinking about designing logical workflows that make all users’ lives easier, and it makes your product stronger, faster, and better.

Keyboard support helps create clear workflows

For UX and visual design work, thinking about keyboard support for touch devices means designing the user experience by clearly defining what the user needs to accomplish. If you’re considering keyboard support for touch devices, it means you’re also being flexible in how the UI meets people’s needs. To keep things simple, starting with native web or mobile OS controls lets you mix and match to build the features you need.
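To make that concrete, here’s a rough sketch in the web context (the button and its wording are invented for the example, not taken from any particular product). A native button element already responds to touch, mouse, and keyboard activation, so one listener covers every input type:

```ts
// A real <button> responds to taps, clicks, and the Enter and Space keys,
// so a single "click" listener covers every input type.
const orderButton = document.createElement("button");
orderButton.type = "button";
orderButton.textContent = "Add to order"; // wording is just for the example

orderButton.addEventListener("click", () => {
  // Runs whether the user tapped, clicked, or pressed Enter/Space.
  console.log("Item added to order");
});

document.body.append(orderButton);

// A div styled to look like a button would instead need tabindex, a role,
// and its own key handling before keyboard users could even reach it.
```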

Consider the following guidelines when designing touch-based interaction that works for everyone:

Break tasks down into micro-tasks

Within a given task, what is each individual step of an interaction? How can a user find, return to, reset, or edit an interaction from elsewhere in the workflow? How can someone understand how a micro-task or micro-interaction fits in with the whole? If a task is a statement, what are the nouns, the verbs, the adjectives that make up the content of the feature? Break each interaction into smaller parts to understand what you’re asking of users.


Find the order of micro-tasks within a workflow

For every feature, there’s an order of operations that someone needs to complete in the correct sequence to be successful, whether it’s buying a ticket, sending an email, or looking up a piece of information. This order, when designed well, helps prevent mistakes and minimizes confusion for your users, and it also prevents you from designing unnecessarily complex workflows. The goal is to simplify things for yourself and your users. Find that simple order of operations for taskflows.


Match each micro-task to a native UI element or behaviour

Once you’ve identified the micro-tasks and designed a simple workflow, tie each task to an element that makes sense for users using a variety of inputs. Usually, when you use native elements according to how they are specified, hardware and software will play nicely and behave as expected for as many users as possible. When you know exactly how someone will complete an action along with the input methods that they may use (whether touch, mouse, keyboard, switch control, etc.), you can tie different elements together into a complete workflow that accounts for all user scenarios. Being able to tie a specific interaction to a specific native element means you can spend your time designing the interaction, not worrying about how a user will know what to do.
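For illustration only (the micro-tasks, labels, and element choices below are assumptions rather than a prescription), here’s how a small ticket-buying step might map onto native HTML controls so keyboard, touch, and assistive technology all get the expected behaviour without custom widget code:

```ts
// A hedged sketch: hypothetical micro-tasks mapped onto native controls.
// (In a real page these would likely be plain HTML; DOM calls just keep
// the sketch self-contained.)
function labelledField(labelText: string, field: HTMLInputElement): HTMLElement {
  // Every field gets a visible, programmatically associated label.
  const wrapper = document.createElement("p");
  const label = document.createElement("label");
  label.textContent = labelText;
  label.append(field);
  wrapper.append(label);
  return wrapper;
}

const form = document.createElement("form");

// Micro-task: pick a quantity -> native number input (arrow keys work).
const quantity = document.createElement("input");
quantity.type = "number";
quantity.min = "1";
quantity.value = "1";

// Micro-task: choose a date -> native date input (platform picker on touch).
const date = document.createElement("input");
date.type = "date";

// Micro-task: confirm -> native submit button (activates with Enter or Space).
const submit = document.createElement("button");
submit.type = "submit";
submit.textContent = "Buy tickets";

form.append(labelledField("Tickets", quantity), labelledField("Date", date), submit);
document.body.append(form);
```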


Tell users what to do

It’s important to consider how people will know what to do with each task in your workflow. What interactions are “intuitive” for touch users? Which interactions are less common, and less discoverable? What does intuitive mean on a touch device if you can’t touch the screen and instead use a customized input device like a joystick or switch? And, if an interaction isn’t standard, how will you communicate with your user? If users make a mistake, how do they know that something has gone wrong, and how can they fix it?
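As one possible sketch (the field, ids, and message wording are hypothetical), an inline error tied to its field with aria-describedby tells keyboard, touch, and screen reader users what went wrong and what to do next:

```ts
// A sketch of telling people when something has gone wrong: the names and
// copy are made up, but the pattern (aria-describedby plus an inline
// message) works across input types and assistive technology.
const label = document.createElement("label");
label.textContent = "Email address";

const email = document.createElement("input");
email.type = "email";
email.required = true;
email.setAttribute("aria-describedby", "email-error");
label.append(email);

const error = document.createElement("p");
error.id = "email-error";
error.hidden = true;

email.addEventListener("blur", () => {
  const ok = email.checkValidity();
  // Flag the field and show a message the user can act on.
  email.setAttribute("aria-invalid", String(!ok));
  error.hidden = ok;
  error.textContent = ok ? "" : "Enter an email address like name@example.com.";
});

document.body.append(label, error);
```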


Find the interactions that take place between each element

Just like glissandos between musical notes, interactions have invisible connections that let users move gracefully from one task to the next. How will someone using a particular input type navigate between elements? Do the elements and behaviours you strung together still make sense? Knowing how the rest of your site or app works, do you anticipate any inconsistencies or functionality that might trip people up? And, once you’ve found those connections, can you verify that a user can get between them with a keyboard?
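One way to support those connections, sketched here with a hypothetical step id, is to move keyboard focus deliberately when one micro-task hands off to the next, so the person lands exactly where the workflow continues:

```ts
// A rough sketch of the "invisible connections" between steps: after one
// micro-task completes, move keyboard focus to the start of the next one.
// (The step ids are assumptions for illustration.)
function goToStep(stepId: string): void {
  const step = document.getElementById(stepId);
  if (!step) return;

  // Make the step container focusable without adding it to the tab order.
  step.setAttribute("tabindex", "-1");
  step.focus();
}

// e.g. after the "choose tickets" form is submitted successfully:
// goToStep("payment-details");
```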


Test and iterate

Test different combinations with real people to learn how to design easy-to-understand interactions. Can the interaction be broken into even smaller pieces? If so, should the micro-tasks go in a different order? If, after a few tries, no other option works, it’s possible that you need a custom element or interaction, but it’s important to give native options a shot first.


Beyond the keyboard

Keyboard support probably isn’t the best description for an increasingly hybridized and personalized interaction (but it’s way shorter than “however it is you get around and interact with the page, you there, yes, you specifically”). But, the purpose of WCAG 2.0 is to give us a framework for presenting and implementing content and interactions in a way that helps all kinds of users the most, with criteria for evaluating how successful we are at meeting those goals.

Should those criteria be updated to meet the vast number of ways that people interact with devices now? Sure. Can we take the intent of the guidelines as a framework for embracing a universal design approach to interaction? Absolutely.

3 thoughts on “Supporting the keyboard for mobile”


  1. Also worth noting that keyboard support should be taken to its minimal, lowest common denominator: can a user set focus to something and activate it. I’ve recently come across quite a few instances where something was keyboard-accessible in the sense of following ARIA 1.0 design patterns and recommended keyboard shortcuts etc. to the letter, yet failed miserably when moving to touchscreen devices with AT. See the note here http://w3c.github.io/aria-in-html/#aria-touch

    Also worth noting that, depending on platform, even external keyboards don’t actually send key events all the time – on iOS for instance, external keyboards only “work” (fire JS events) in the same situation as when the on-screen keyboard would normally show up. So again, even in this situation there needs to be an understanding of what events one can reliably listen for.

    1. Devon Persing says:

      Excellent points, Patrick! We do sometimes find that patterns that are “technically” okay and recommended do not fare well in usability testing, for exactly the types of reasons you mention. Often the solution is a much stripped-down version that uses native controls to do the heavy lifting, with occasional JavaScript to manage special events.
