Ta(l)king a big step

Improving the accessibility of our iOS catalog

Giovani Pereira
iFood Engineering

--

Working on a big project means handling all sorts of issues, but also having the opportunity to impact a large and diverse number of people. Here at iFood we have an environment thriving with amazing projects, incredible ideas, and the space to create and test new features in order to make the best product possible.

Most of the time, we focus on creating layouts that make the user experience easy, understandable, and pleasant. But have you ever wondered who can really use your app?

We want to be able to reach our design expectations and, at the same time, provide the best experience to all of our users. This means accounting for disabilities or anything else that could prevent a user from fully enjoying our product. It means studying, planning, and working with accessibility features to continuously improve their experience as well.

The goal here is to show you some of the improvements we’ve recently made to our app’s catalog accessibility using VoiceOver.

VoiceOver

When we talk about accessibility, we have a huge spectrum of features and disabilities (color blindness, visual impairments, motion deficiencies…), but here we’ll focus on iOS’s VoiceOver: a screen reader built natively into every iOS device that allows users to navigate and interact with the elements on the screen.

That doesn’t mean we are overlooking other types of disabilities; we address them on other fronts. For instance, our design team keeps studying color contrast to ensure our content is visible to colorblind users.

VoiceOver is a very powerful tool, but it has some pitfalls, especially when working with complex layouts, animations, and multiple types of interaction. Here, we’ll discuss how we overcame some of these issues using a few of the many VoiceOver features to improve usability for visually impaired users.

Let's take a look at the basics of VoiceOver before we dig deeper into our catalog's accessibility.

Navigation

VoiceOver works by focusing on one item at a time. If your device's display is enabled while using VoiceOver, the focused item is highlighted with a black border.

Example of screen reading: a carousel of items under a search bar, with the second item of the carousel currently focused.

When an element is focused, VoiceOver reads its accessibility label: a description of what that content means, or what is actually written on screen. This label can be static or provided programmatically at runtime.
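As a minimal sketch (the function and its parameters are hypothetical, just to illustrate the idea), a label built at runtime from an item's data could look like this:

```swift
import UIKit

// Hypothetical configuration of a catalog cell: instead of letting
// VoiceOver read the raw on-screen text, we build a descriptive
// label from the item's data at runtime.
func configure(cell: UICollectionViewCell, name: String, price: String) {
    cell.isAccessibilityElement = true
    cell.accessibilityLabel = "\(name), \(price)"
}
```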

To navigate, or focus on other elements, swipe right or left anywhere on the screen, and the reader will move to the next or previous element, respectively.

Activation

Activation is the equivalent of tapping the screen to interact with a button. To perform it, the user double-taps anywhere on the screen, which executes the activation method of the focused view.

Not every view has an activation method; many, like common labels, are just readable elements.

Traits

Accessibility traits are properties used to give more information about what an element can do. The most common one is the button trait, which indicates that an element can be activated and will perform an action.

Other traits can change the way the user interacts with elements, providing new gestures or audible cues.
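Assigning a trait is a one-line change. A minimal sketch, with a hypothetical view name:

```swift
import UIKit

// Hypothetical view that behaves like a button.
let addToCartView = UIView()
addToCartView.isAccessibilityElement = true
addToCartView.accessibilityLabel = "Add to cart"

// The button trait tells VoiceOver this element can be activated.
addToCartView.accessibilityTraits = .button
```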

Hints

Accessibility hints are a useful way to tell users how to interact with your features. They consist of additional text spoken at the end of the accessibility label, and they usually describe extra gestures or actions that can be performed on the focused item.

But just like visual hints (icons, shakes, or animations), audible hints should be kept short, easy to understand, and should not disturb the navigation.
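A hint is just another string property. Continuing the hypothetical example above:

```swift
// Spoken after the label and trait, e.g.
// "Add to cart. Button. Double tap to add this item."
addToCartView.accessibilityHint = "Double tap to add this item"
```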

With that in mind, we can dive into our catalog feature and talk more about our improvements.

The catalog

The iFood app is, essentially, a marketplace where you can order food, groceries, pharmacy items, and all sorts of things. So catalogs are a vital part of our ecosystem: they let you find the items you need, check descriptions, compare prices, and understand more about what you are going to buy. Finding the items you need and proceeding to purchase should also be fast and efficient.

The Quick-add

The quick-add is a capability present in our grocery catalogs. With a single tap, the user can add an item directly to the shopping cart, change its quantity, and keep navigating through other items.

It’s a very quick action, mainly because it does not require the user to leave the catalog's context to perform it.

This video displays the quick-add feature. With a single tap, you can add the item and change its quantity in the same context.

Our main issue here was that this particular capability was not accessible: if a VoiceOver user wanted to add an item to their purchase, they would need to open the item’s description, navigate up to the add button, and only then adjust its quantity.

This could not go on any longer; we had to fix it.

Making it accessible

To improve the quick-add, we decided to use the adjustable trait on the cell. This accessibility trait gives the view a new superpower: when it is focused by the screen reader, the user can swipe up or down on the screen, and VoiceOver will call one of two separate methods on the view, depending on the gesture direction, allowing us to perform actions in response.

It's a common trait for counters and sliders, and just what we need for our cell.

The original post showed this code as an image. Here's a minimal sketch of the idea, assuming a hypothetical QuickAddCell with a quantity property and an update callback:
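```swift
import UIKit

// Hypothetical catalog cell with the quick-add counter.
final class QuickAddCell: UICollectionViewCell {
    private(set) var quantity = 0

    // Called by the cell's owner whenever the quantity changes.
    var onQuantityChange: ((Int) -> Void)?

    func configureAccessibility(itemName: String) {
        isAccessibilityElement = true
        accessibilityLabel = itemName
        // The adjustable trait enables the swipe up/down gestures.
        accessibilityTraits = .adjustable
    }

    // Swipe up on the focused cell: VoiceOver calls this method.
    override func accessibilityIncrement() {
        quantity += 1
        quantityDidChange()
    }

    // Swipe down on the focused cell: VoiceOver calls this method.
    override func accessibilityDecrement() {
        guard quantity > 0 else { return }
        quantity -= 1
        quantityDidChange()
    }

    private func quantityDidChange() {
        onQuantityChange?(quantity)
        // Announce the new quantity so the user hears the result of the gesture.
        UIAccessibility.post(notification: .announcement, argument: "\(quantity) items")
    }
}
```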

You'll need to override the accessibilityIncrement and accessibilityDecrement methods and make whatever changes you need in your scene.

Notifying quantity updates

You may have noticed a piece of code in the example above that I haven't talked about yet: UIAccessibility.post. What is it, and why is it important?

This post method allows us to notify the user, through audible cues, that something has changed on the screen. When using the .announcement notification type, we can pass a String argument that will be read by the screen reader.

In this case, we pass the updated number of items, so every time a swipe happens, it tells the user the current item quantity.

The shopping list

This feature is part of our catalog. It allows users to save grocery items into lists and later apply those lists to any merchant, providing a fast way to organize and plan your purchases.

The shopping list creation view in three states: the list collapsed at the bottom, the user starting to drag it up, and the dragged view completely covering the search scene behind it.

The shopping list feature has an even more complex layout than the catalog itself. Animations, draggable contents, multiple actions… so we had to use a few more strategies to make it work with VoiceOver.

Handling multiple actions

This scene, where the user can find their previously saved lists, allows selecting, deleting, and applying a shopping list. Visually, it’s simplified by the use of icons with well-known behaviors:

The select/edit/delete shopping list scene. When the kebab button is pressed, the cell cross-dissolves and shows the edit and delete actions.
  • The kebab icon (three vertical dots) enables the editing mode;
  • The close icon (tiny “x”) exits the editing mode;
  • The pencil icon starts editing the list;
  • The trash can icon deletes the list.

While it’s easy to comprehend these actions, given the icons and their associated labels, there are many functions encapsulated in a single point. And it even depends on an animated transition on the cell to go from selection to editing mode.

For screen readers, that’s a no-go: we don’t have these visual components, so we needed a better way to expose these actions audibly.

This is where we used custom accessibility actions. This VoiceOver capability allows us to provide multiple actions on top of the same element: by swiping up and down, similar to the adjustable trait, the user changes which action will be executed when the element is activated (double-tapped).

First, we grouped the cell’s content as a single accessible item, making the internal buttons non-accessible, so each cell would be read as a single step. Then we added three custom actions, one for each operation that could be performed: select, edit, and delete.

The original post showed this code as an image; the sketch below captures the idea, with hypothetical names and selectors:
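```swift
import UIKit

// Hypothetical shopping list cell exposing select, edit, and delete
// as VoiceOver custom actions.
final class ShoppingListCell: UITableViewCell {
    func configureAccessibility(listName: String) {
        // Group the cell as a single accessible element so the
        // internal buttons are not focused individually.
        isAccessibilityElement = true
        accessibilityLabel = listName

        accessibilityCustomActions = [
            UIAccessibilityCustomAction(name: "Select list", target: self, selector: #selector(selectList)),
            UIAccessibilityCustomAction(name: "Edit list", target: self, selector: #selector(editList)),
            UIAccessibilityCustomAction(name: "Delete list", target: self, selector: #selector(deleteList))
        ]
    }

    // Each action returns true when it was handled successfully.
    @objc private func selectList() -> Bool { /* apply the list */ return true }
    @objc private func editList() -> Bool { /* enter editing mode */ return true }
    @objc private func deleteList() -> Bool { /* delete the list */ return true }
}
```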

When you provide custom accessibility actions, the element gains the sentence “available actions” when read by VoiceOver, indicating that the swipe up and down gestures can be used to change the activation action.

The available actions on VoiceOver using the custom actions feature. To change the action, swipe up or down, and the reader announces which action is selected.

You can also provide an accessibility hint to better explain these gestures, but keep in mind not to add too much spoken text, which could make VoiceOver navigation longer, repetitive, and more difficult to use.

Be careful with the views’ placement

This simple scene also had another accessibility issue due to its layout implementation: some of the elements could not be read.

When looking at the header of this modal scene, it would seem that the screen reader would read the items in order: close button, title label, create button, since the reader starts at the top-left element and moves right and down (it actually follows the natural reading direction for the device’s language). Given we had set the accessibility labels correctly for all the elements, everything should work just fine, right?

The select shopping list view's header: a close button, followed by the title “My lists” (in Portuguese), and a “create list” button.

Just by looking at this scene, it would seem so. But once we check the frames of these views, we can see that the title label’s frame sits on top of the button elements. This was causing VoiceOver to read the label but not the buttons, hiding critical actions of our view.

Representation of the views' frames: the title label's frame overlaps the buttons' frames.

To solve this problem, we had two possible approaches:

  • Reposition the items so the frames would no longer overlap;
  • Provide a reading order for the elements.

If the view had been built with VoiceOver and its reading order in mind, the first approach would be easier, and maybe this issue would not even have happened. But sometimes it’s difficult to change the layout because it depends on other visual components, so instead you can provide a reading order for the view’s elements.

The original post showed this code as an image. A minimal sketch, with hypothetical view names:
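```swift
import UIKit

// Hypothetical header view for the shopping list scene.
final class ListsHeaderView: UIView {
    let closeButton = UIButton(type: .close)
    let titleLabel = UILabel()
    let createButton = UIButton(type: .system)

    func configureAccessibility() {
        // Explicitly define the reading order: VoiceOver follows this
        // array instead of inferring the order from the views' frames.
        accessibilityElements = [closeButton, titleLabel, createButton]
    }
}
```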

Now, this will ensure that all the elements we want are going to be read, and this approach can also be used to provide a better reading order for complex layouts.

Reading overlaying views

When creating or editing a shopping list, our scene is composed of two parts:

  • A search, to find items to add to your shopping list;
  • And the shopping list itself, with all the items that have already been added, so you can remove them or change their quantity before saving your list.
Creating a shopping list. When an item is added to the list, a new scene appears on the bottom of the screen and can be dragged up until it covers the search scene behind it.

The list of items moves up from the bottom of the screen when tapped and stays on top of the searched items. Visually this is totally fine, since the elements below cannot be seen and cannot be interacted with while there is another view on top. But the items below were still being read by VoiceOver.

To solve this issue, we had to look into another accessibility property of our view: a Boolean called accessibilityViewIsModal. According to Apple’s documentation, this property:

"[…] indicates whether VoiceOver ignores the accessibility elements within views that are siblings of the element."

This means that when a view has this property set to true, the reader will ignore all the other views, and only read the one marked as modal. In our case, when the list scene is expanded, we want the reader to read only the shopping list scene and ignore the search that's behind it.

The original post showed this code as an image. A sketch of the override, assuming a hypothetical view with an isExpanded flag:
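```swift
import UIKit

// Hypothetical view for the shopping list scene.
final class ShoppingListView: UIView {
    // Toggled by the scene when the list expands or collapses.
    var isExpanded = false

    // Overriding the property (rather than assigning a value) means the
    // reader always sees the current state, with no update delay.
    override var accessibilityViewIsModal: Bool {
        get { isExpanded }
        set { /* derived from isExpanded; external writes are ignored */ }
    }
}
```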

We are overriding the property, instead of just changing its value, because when you need to change an accessibility property constantly, the value sometimes takes a while to update and be reflected by the reader; when overridden, the update happens instantly. The list scene is only modal when it's expanded on top of the search; otherwise, the property is false.

Using Magic Tap to navigate

To reach the shopping list items on this scene, a screen reader user would have to navigate through all the items on the screen, reach the collapsed list, and tap to open it. This may seem rather obvious, but the number of items displayed could be enormous, and the user would take a long, long time to reach the list at the bottom.

To overcome this, we decided to use another capability, the Magic Tap (a double-tap with two fingers). To enable it, we just need to override the accessibilityPerformMagicTap method on our view, and just like that, the magic tap is available.

The original post showed this code as an image. A minimal sketch, assuming a hypothetical expand/collapse method on the scene:
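```swift
import UIKit

// Hypothetical view controller for the search + shopping list scene.
final class ShoppingListViewController: UIViewController {
    private var isListExpanded = false

    // VoiceOver calls this when the user double-taps with two fingers.
    override func accessibilityPerformMagicTap() -> Bool {
        isListExpanded.toggle()
        // Hypothetical method that runs the expand/collapse animation.
        setListExpanded(isListExpanded, animated: true)
        return true // the gesture was handled
    }

    private func setListExpanded(_ expanded: Bool, animated: Bool) {
        // Animation omitted in this sketch.
    }
}
```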

We’ve used this gesture, which is explained to the user through an accessibility hint, to alternate between the search scene and the shopping list scene. This way, the user can easily switch between the two scenes without having to navigate through every element on display.

There is also another improvement: focusing on the correct view after the transition. The magic tap calls our expand/collapse animation, which is great! But VoiceOver can get lost during these transitions, and a VoiceOver user wouldn't know that something has changed.

To notify the user and focus VoiceOver on the correct view after the transition, we call the post method on UIAccessibility once again.

The original post showed this code as an image. A sketch of the idea, with a hypothetical reference to the newly revealed view:
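```swift
import UIKit

// Called after the expand/collapse animation completes.
// `newSceneView` is a hypothetical reference to the view that
// should receive the VoiceOver focus.
func notifyLayoutChange(focusing newSceneView: UIView) {
    UIAccessibility.post(notification: .layoutChanged, argument: newSceneView)
}
```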

We are now using the .layoutChanged notification type and passing a view as the argument. This gives the user an audio cue that the layout has changed and focuses the screen reader on the view passed as the argument!

Wrapping up

These were some of the improvements we made over the last few weeks, and I hope they gave you a broader view of what iOS VoiceOver accessibility features are and how to handle them. There are many options to use, and we keep learning more every day, but to summarize our view:

  • Account for the accessibility implementation.

Planning, implementing, and testing these features takes time. They should not be treated as bonus content or as a nice-to-have; every new feature should have accessibility as part of its plan from the beginning.

  • Make accessibility a team concern.

Sometimes it’s not easy to reconcile visual ideas with a nice screen-reader experience. But the more your team understands how screen readers work, the more you exercise them together, and the more you think about them when designing or implementing a scene, the easier and better the results will be.

  • Keep the accessibility features close.

Knowing what can be done, and which features and capabilities you can use, will unlock a whole range of possibilities and clever solutions for translating visual content into readable content.

  • Test it and find testers.

Learn how to navigate using VoiceOver, test your features, and try to understand whether or not you are providing good navigation. Talk to users, get their feedback, and keep improving your app to enhance their experience.

We want our products to be as widely available as possible, and this means acknowledging people’s differences and how they interact with what we do. Great products should be available to everyone. Understanding this diversity is an incredible exercise in empathy; it creates space and engagement for new groups, and it will always make us better and stronger people.

Take a look at the vacancies available in iFood and learn more about the selection process.
