Warning: The following recommendations have not been subjected to usability testing by people with disabilities. All recommendations in this post are purely speculative using my best judgement and should be considered a work-in-progress.
Note: Testing was originally conducted on <model-viewer> version 0.1.1 in early 2019.
When I first heard about Shopify's project to include video and 3D models on product pages, I was pretty excited. "Product videos and 3D models? That's cool!" Only seconds after consuming the news of this feature being built (and released in a few months' time), another thought ran through my mind:
"3D models on the web… how are we going to make those accessible? How do you convey a '3D model', let alone provide access via assistive technology?"
After taking some time to come to terms with this daunting realization, I did what any professional web developer would do: I took my questions to Google.
Searching for "accessible 3d models" quickly confirmed my suspicion; not a whole lot of information was available. 3D models on the web have existed for some time, yes, but a solution which catered to assistive technology? None of the accessibility bloggers I follow had written about the topic, nor was there anything applicable from W3C WAI that I could find. So, what's next after Google fails you? Twitter, of course!
I asked the question if anyone knew of an accessible 3D model solution. I honestly didn't expect anyone to respond. I thought to myself, "This technology is so new. No one's really explored this area of the web yet. I guess I'm on my own to figure this out."
To my surprise, a few days later I received a reply on Twitter:
Suffice it to say, I was beyond stoked to receive this reply. It seemed like a solution might be possible. Providing an accessible 3D model experience for Shopify Partners to implement, our millions of merchants to serve, and their customers to consume might actually be a reality.
As it turned out, Google's <model-viewer> web component was, in fact, the 3D model component Shopify's Rich Media team was intending to implement. (I know, right? What are the odds?) With this, I decided to take some time from my ever-growing to-do list and conduct several rounds of testing. In order to gauge exactly how accessible things were and to make recommendations (for both the Google and Shopify teams), I needed to thoroughly test <model-viewer> with assistive technology.
What is a 3D model?
Before we dive into the test results, let's attempt to define what a 3D model is, and what it is not. The answer to this question will be critical when (attempting) to convey the presence of a 3D model on the web for assistive technology. In other words, "What is this thing? What does it do? How do I interact with it?"
Firstly, a 3D model is not an image (HTML img element). Images are static; they portray a two-dimensional, single-sided view of an object or scene. Images do not require user interaction (other than discovering the image and consuming its alt text via screen reader). Therefore, a 3D model should not be conveyed as an image element.
Second, a 3D model is not a video (HTML video element). Yes, video is a dynamic medium; it requires user interaction in order to consume its content. But video is a passive medium, meaning once you press play, the user only needs to sit back and enjoy the show. Other interactive elements are available (timeline scrubber, mute, closed caption controls, etc.), but are not required for the majority of the user experience. Therefore, a 3D model should not be conveyed as a video element.
So what is a 3D model? You may already have an answer for this in your own mind. My attempt to answer this question is:
"A 3D model represents a real-world physical object. An object which not only features width and height but also depth. Viewing an object in the third dimension allows for inspection of all angles of the object."
Okay great, that makes sense (at least to me.) But how do we describe this in terms of an interactive "thing" on the web? What semantic meaning exists to inform the user of the object they're currently interacting with? And how do they interact with said object?
I think I have a good answer for these questions, but first let's dive into some assumptions and expectations on what may constitute an accessible 3D model.
What makes a 3D model "accessible?"
Here's my criteria list for, what I would consider, an accessible 3D model user experience. While these criteria have not been user tested (yet), I feel like the information conveyed would be enough for someone using various assistive technology to understand what the component is, and how to interact with it. Again, full disclosure, I'm using my best judgement here.
3D model usability expectations/assumptions
- A mouse user should be able to click-grab/swipe and scroll for model rotation and zoom
- A sighted mobile user should be able to swipe and use pinch gestures for model rotation and zoom
- A sighted, keyboard-only user should see a visible focus state when the 3D model has keyboard focus
- A keyboard user should be able to rotate the model horizontally and vertically via keyboard alone
- A keyboard user should be able to zoom the model in and out via keyboard alone
- A screen reader user should hear an appropriate role describing the component as a 3D model
- A screen reader user should hear a brief description of the 3D model object
- A screen reader user should hear additional hint text to further describe how to interact with the 3D model
- A screen reader user should hear descriptive announcements of the visible portion/angle when the 3D model has been rotated
- A screen reader user should hear an announcement on zoom describing the current zoom level
- A mobile screen reader user should be able to swipe and use pinch gestures for model rotation and zoom
- A voice dictation user, switch user, or anyone with limited mobility should be able to rotate and zoom the 3D model using dedicated button controls for each piece of dynamic functionality
With this support in place, someone using a mouse, keyboard, mobile device, screen reader, voice activation, or a number of other input technologies should be able to understand and interact with the 3D model. That's the theory, anyway.
With these criteria in mind, let's dive into some test results and review how <model-viewer> measured up.
Initial test results
With these results, it's clear VoiceOver had the best support. This is due to how VoiceOver's virtual cursor requires more than a single arrow key press to traverse content. Others, like NVDA or JAWS, simply use the Up and Down arrow keys, which move their cursors past the model instead of performing the expected vertical rotation. There is a way to circumvent this, which we'll discuss later.
Recommendations to the Google team
After testing <model-viewer> against web accessibility best practices, it was clear right away that the team at Google had put thought and effort into making this web component accessible. Features such as arrow key support for model rotation and screen reader announcements for model stage locations were built in by default.
During testing, I noted a few key pieces which could make the component even more usable with assistive technology.
1. Include a focus ring via :focus-visible
The model element was missing a visible focus indicator on keyboard focus. While it may be desirable to remove the default focus ring which appears on mouse click, a focus ring must be present and visible when keyboard users interact with the model. Sighted users need to know where they are on the page when navigating through content.
I suggested a possible work-around of implementing the focus-visible polyfill. The idea would be to encapsulate the polyfill within <model-viewer> in order to provide a focus ring. With this in place, the team would be free to remove the outline for mouse/touch but display the outline for keyboard only.
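The polyfill's core heuristic can be sketched as a tiny state machine: only show the focus ring when the most recent input was keyboard-driven. This is a minimal illustration of the idea, not the polyfill's actual API (the helper names here are my own):

```javascript
// Minimal sketch of the heuristic the focus-visible polyfill emulates.
// Track the most recent input modality; show a visible focus ring only
// when the element received focus via the keyboard.
// (Hypothetical helper, not the polyfill's real API.)
function createFocusRingTracker() {
  let lastInputWasKeyboard = false;
  return {
    // Call from a document-level 'keydown' listener.
    onKeyDown() { lastInputWasKeyboard = true; },
    // Call from 'mousedown' / 'touchstart' listeners.
    onPointerDown() { lastInputWasKeyboard = false; },
    // Call from the element's 'focus' listener; returns whether a
    // visible focus ring (e.g. a CSS class) should be applied.
    shouldShowFocusRing() { return lastInputWasKeyboard; }
  };
}
```

Inside the component, a focus listener would toggle a focus ring class based on shouldShowFocusRing(), removing the default outline only for pointer-initiated focus.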
2. A more accurate role
The 3D model (at the time) was drawn to the screen via HTML canvas. When interacting with canvas with assistive technology, the default role is "image" or "graphic" (depending on the operating system/assistive technology). As discussed previously, this description of the currently focused element is not accurate; a 3D model is not a static image.
To get around the fact that "3D model" is not a native media type with an organic role, my recommendation to the Google team was to include the aria-roledescription attribute directly on the canvas element. This attribute allows the author to set a custom "role" value which will be announced as the element role – the "thing" you're currently interacting with. In this case, I suggested adding the aria-roledescription="3d model" attribute to announce the canvas element as, "3d model".
<canvas … aria-roledescription="3d model"></canvas>
It's worth noting that using aria-roledescription can be a little destructive. This attribute overrides the native element role, which, most of the time, is not ideal. In the context of a piece of content which has no native role, however, I feel using aria-roledescription to help describe a 3D model component is an appropriate use case.
Check out Adrian Roselli's aptly titled "Avoid aria-roledescription" for more details on the pitfalls of using this attribute.
3. Provide user-defined descriptions per stage variant
When the keyboard arrow keys were used to adjust the angle of the model, there was an announcement made to alert the user of the visible change.
It's great how this need was thought of and included in the web component by default. However, my next thought after making this discovery was, "Can this be configurable? Is it possible to add more description of the object at a specific angle?" Each stage description could be thought of in the same manner as describing a set of static images.
Take for example, a 3D model of a baseball hat. The model would start in its default, front facing position. Upon discovery, there might be an audible description of its physical features, including color, hat style, and perhaps a logo on the front. The user, using the keyboard arrow keys, could rotate the model 180 degrees to review the back of the hat. When this point of interest is reached, another audible description would inform the potential customer of a brown leather strap for sizing fit. The user was actually hoping for a full-back style hat. With this information, they might decide to move on to review other products.
How might this be accomplished? Currently, in order to provide a description for the model, the <model-viewer> web component itself takes an alt attribute. For example, the Glitch demo features:
<model-viewer … alt="A 3D model of an astronaut">
  <!-- … -->
</model-viewer>
One idea I had for this could be to introduce a set of alternate alt attributes providing stage variant descriptions. Something like:
<model-viewer
  …
  alt="A 3D model of an astronaut"
  alt-stage-front="Astronaut wears white space suit with helmet. Backpack straps wrap around its chest."
  alt-stage-left="Astronaut shoulder features space logo."
  alt-stage-back="Astronaut wears large, white backpack with black highlights."
  …
>
  <!-- … -->
</model-viewer>
Why is this important? These additional descriptive announcements would provide more clarity on the model, describing all angles of its physical features. As a sighted user would be able to see the physical aspects, a blind screen reader user needs these features to be described aloud. This is the equal user experience we, as creators of the web, should strive to achieve.
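If attributes like these existed, the component could pick which description to announce from the model's current horizontal rotation. Here's a rough sketch of that lookup; the 90-degree bucketing and the fallback behaviour are my own assumptions, not part of any proposed API:

```javascript
// Sketch: choose which stage description to announce based on the
// model's horizontal rotation (yaw, in degrees). The descriptions
// mirror the alt-stage-* proposal above; the angle bucketing and
// fallback to the overall alt text are assumptions.
const alt = "A 3D model of an astronaut";
const stageAlt = {
  front: "Astronaut wears white space suit with helmet. Backpack straps wrap around its chest.",
  left:  "Astronaut shoulder features space logo.",
  back:  "Astronaut wears large, white backpack with black highlights."
};

function descriptionForYaw(yawDegrees) {
  // Normalize to [0, 360), then bucket into quadrants centered on
  // 0 (front), 90 (left), and 180 (back).
  const yaw = ((yawDegrees % 360) + 360) % 360;
  if (yaw >= 315 || yaw < 45) return stageAlt.front;
  if (yaw < 135) return stageAlt.left;
  if (yaw < 225) return stageAlt.back;
  return alt; // no stage description supplied for this angle
}
```

On each rotation, the resulting string would be pushed into the component's live announcement mechanism.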
4. Intercepting keyboard input
Screen readers feature their own keyboard commands for traversing web pages and navigating content. This is typically called the screen reader virtual cursor. For example, when using NVDA, pressing the Up or Down arrow keys navigates and announces all types of content on the page, not just focusable elements like links or form controls.
In the case of the 3D model, the canvas element made use of the arrow keys to rotate the model horizontally and vertically. When interacting with the model while running a screen reader such as NVDA or JAWS (since they use single arrow key events to traverse page content), content within the <model-viewer> DOM is navigated and announced instead of adjusting the angle of the model. This is not exactly the expected outcome.
To get around this dilemma, my suggestion to the Google team was to include the role="application" attribute directly on the canvas element. Including this role value allows the arrow key press events to bypass the screen reader entirely and send the events directly to the underlying application. In this case:
<canvas … role="application"></canvas>
In my 10+ years in the accessibility community, this has been the only real-world use case I've come across for the application role. I also recommend this role value with caution, as it greatly affects keyboard navigation when using a screen reader. The ARIA 1.1 spec states:
"When there is a need to create an element with an interaction model that is not supported by any of the WAI-ARIA widget roles, authors MAY give that element role application. And, when a user navigates into an element with role application, assistive technologies that intercept standard input events SHOULD switch to a mode that passes most or all standard input events through to the web application."
Source: w3.org/TR/wai-aria-1.1/#application
Léonie Watson has a great overview of role="application" in the post, "Understanding screen reader interaction modes".
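Once role="application" lets raw keydown events through to the canvas, the component can translate arrow keys into rotation deltas. A simplified sketch of that translation (the 10-degree step and the function name are assumptions, not <model-viewer>'s actual implementation):

```javascript
// Sketch: translate arrow-key presses into rotation deltas once
// role="application" lets keydown events reach the canvas.
// The 10-degree step per key press is an arbitrary assumption.
const ROTATION_STEP = 10; // degrees

function rotationDeltaForKey(key) {
  switch (key) {
    case "ArrowLeft":  return { yaw: -ROTATION_STEP, pitch: 0 };
    case "ArrowRight": return { yaw:  ROTATION_STEP, pitch: 0 };
    case "ArrowUp":    return { yaw: 0, pitch:  ROTATION_STEP };
    case "ArrowDown":  return { yaw: 0, pitch: -ROTATION_STEP };
    default:           return null; // let other keys through untouched
  }
}
```

A keydown listener on the canvas would call this, apply the delta to the camera, and call event.preventDefault() only when a delta was returned, so Tab and other keys keep their normal behavior.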
Test results with some recommendations applied
I tested my recommendations of adding aria-roledescription and role="application" in order to confirm the recommendations (and for my own accessibility nerd curiosity). Let's review the results.
Note: The test environment was a local fork of the GitHub repo. Both the aria-roledescription and role="application" attributes have been applied.
It was clear these attributes did help in the accessibility of the 3D model viewer. VoiceOver and NVDA seemed to have the best support by announcing the aria-roledescription attribute values as expected. With role="application" set in place, JAWS and NVDA users would be able to rotate the model using the arrow keys.
Here's an overview of the major issues from the tests outlined above.
Aside from not being able to test this second demo with IE or Edge, it's iOS which had the most issues. It seemed like, with either demo, iOS with VoiceOver enabled completely ignored the canvas element. When using swipe navigation, even with tabindex applied, it was bypassed completely.
According to HTML5Accessibility.com, canvas elements can be made accessible by including child elements within canvas. However, in the case of a 3D model viewer accepting events from the canvas element directly, including child elements would not be helpful.
VoiceOver + Safari desktop performance issues
When paired with the Safari browser on macOS, VoiceOver seemed to struggle with "loading" the canvas content. On focus, VoiceOver would announce, "Safari busy." When using the arrow keys to adjust the model position, again, "busy" was announced. While there were clearly performance issues with this combination, the same could not be said for Chrome paired with VoiceOver on macOS; no performance issues whatsoever.
Oddly enough, switching windows away from Safari while canvas was in focus, then returning back, sometimes helped with loading performance.
Chrome for Android missing model description
On canvas focus, Chrome for Android announced the angle/stage of the model in its aria-label by default, instead of the model description. This essentially bypassed the announcement of what it was the user would be interacting with.
This issue was confirmed by using the Chrome remote inspector and reviewing the aria-label attribute value on page load.
For an overview of all issues sent to the Google team, and opportunities to contribute to this incredible open source project, visit <model-viewer> on GitHub.
Recommendations to the Shopify team
So far we've reviewed testing results and recommendations which affect the "back end" of the 3D model. This is the piece where the Google team would have an impact in making <model-viewer> more accessible.
The user experience implementation, or "front end", is where the Shopify team comes in. For each Shopify Theme which features 3D model support, extra work is required to integrate <model-viewer> into the theme.
When testing the implementation for Shopify's default theme, Debut, I noticed some extra accessibility issues. Here are a few highlights of the recommendations I sent to the team.
1. Swipe gestures should not be required
On a mobile or touch-screen device, rotation of <model-viewer> required swipe gestures to rotate the model. Depending on the mobile platform, with a screen reader enabled, this would require either two- or three-finger swipe gestures. As a result, this could create a difficult or frustrating user experience for users with limited mobility, such as someone who uses voice dictation software.
My recommendation was to implement dedicated button controls for model rotation: one each for up, down, left, and right. This would provide an optional input method and allow for easy use of 3D model rotation (dedicated controls for zoom and full screen were already in place).
It's also worth pointing out that WCAG 2.1 introduced the success criterion 2.5.1 Pointer Gestures, which states:
"All functionality that uses multipoint or path-based gestures for operation can be operated with a single pointer without a path-based gesture, unless a multipoint or path-based gesture is essential."
With this in mind, I'd argue that while rotating a 3D model is essential to its usability, the path-based gesture itself is not; a single-pointer alternative should be provided.
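The dedicated rotation controls described above could share one piece of rotation state, with each button nudging the camera by a fixed step, so a voice dictation user can simply say "click rotate left". A sketch of that state, under my own assumptions (the 15-degree step and method names are not Shopify's implementation):

```javascript
// Sketch: shared rotation state that dedicated buttons would drive.
// Each button click nudges the camera by a fixed step; the step size
// and method names are assumptions, not Shopify's implementation.
function createModelRotation(stepDegrees = 15) {
  const state = { yaw: 0, pitch: 0 };
  const clampPitch = (p) => Math.max(-90, Math.min(90, p));
  return {
    rotateLeft()  { state.yaw = (state.yaw - stepDegrees + 360) % 360; },
    rotateRight() { state.yaw = (state.yaw + stepDegrees) % 360; },
    rotateUp()    { state.pitch = clampPitch(state.pitch + stepDegrees); },
    rotateDown()  { state.pitch = clampPitch(state.pitch - stepDegrees); },
    current()     { return { ...state }; }
  };
}
```

Each method would map to one visible, labeled button (for example, a "Rotate left" button calling rotateLeft()), giving keyboard, switch, and voice dictation users the same capability as swipe gestures.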
2. Zoom does not announce a result
When using the dedicated controls to zoom in or out of the 3D model, the result was not communicated to screen reader users.
For this issue I recommended adding a role="status" element to the DOM in order to announce the current zoom level. When either the zoom-in or zoom-out control is clicked, an announcement would be made alerting the user of the current zoom level as a percentage.

In order to keep screen reader users from discovering this content out of context, the aria-hidden attribute would need to be toggled from true to false when the announcement is made, then back to true:
<div class="visually-hidden" role="status" aria-hidden="true">
  Zoomed 50%.
</div>
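The announcement flow could be sketched as: reveal the status element, write the message into it, then hide it again once screen readers have picked up the update. The helper name and the one-second delay are assumptions for illustration:

```javascript
// Sketch: announce the current zoom level via a role="status" element.
// The element is flipped to aria-hidden="false" for the announcement,
// then hidden again so it isn't discovered out of context.
// (Helper name and timing are assumptions.)
function announceZoom(statusEl, zoomPercent) {
  statusEl.setAttribute("aria-hidden", "false");
  statusEl.textContent = `Zoomed ${zoomPercent}%.`;
  // Hide again once screen readers have announced the live update.
  setTimeout(() => statusEl.setAttribute("aria-hidden", "true"), 1000);
}
```

The zoom-in and zoom-out click handlers would each call announceZoom with the new percentage.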
3. Controls visible on hover only
The 3D model controls for zoom and full screen were visible only on mouse hover. This created a difficult user experience in reaching the controls for sighted keyboard-only users, low-vision users who zoom text, voice dictation users, or anyone not able to use a mouse.
The recommendation here was to make the controls available and visible on keyboard focus. Clearly this still does not cater for all user needs; for example, a voice dictation user cannot call out a click on a button control they cannot see. Other work-arounds will need to be considered and implemented in time.
APG outline and live demo
Here's where I attempt to make an ARIA Authoring Practices style outline for a 3D model pattern. If you're unfamiliar, the ARIA Authoring Practices site provides keyboard and aria-* attribute best practices for dynamic, non-native components. Definitely give it a read the next time you're creating a non-native UI pattern.
Okay, let's give this a go…
A 3D model represents a real-world physical object. An object which not only features width and height but also depth. Viewing an object in the third dimension allows for inspection of all angles of the object.
3D Model Example: demonstrates a 3D model of a real-world item on a product page of a demo e-commerce store.
The following terms are used to describe components of a 3D model.
- 3D Model
- A single content container that holds the content to be presented by the 3D model component.
- Rotation Control
- An interactive element that commences 3D model rotation.
- Zoom Control
- An interactive element that zooms the 3D model.
- Full Screen Control
- An interactive element that sets the 3D model view as full screen.
- Live Announcements
- A hidden element that contains stage descriptions and update messages.
When the 3D model component has keyboard focus:
- Enter or Space: Announces the current stage position with description (if available.)
- Tab: Moves focus to the next focusable element in the page Tab sequence.
- Shift + Tab: Moves focus to the previous focusable element in the page Tab sequence.
- Esc: Resets the model to its default starting position and zoom level.
- Left Arrow: Rotates the model horizontally to the left.
- Right Arrow: Rotates the model horizontally to the right.
- Up Arrow: Rotates the model vertically upward.
- Down Arrow: Rotates the model vertically downward.
- +: Zooms the model in.
- -: Zooms the model out.
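The keyboard interaction outlined above can be sketched as a single reducer over the model's view state. The step sizes, zoom bounds, and state shape below are my own assumptions, not a published API:

```javascript
// Sketch of the keyboard interaction outlined above: one reducer that
// maps a key press to the next view state. Step sizes, zoom bounds,
// and the state shape are assumptions, not a published API.
const DEFAULT_VIEW = { yaw: 0, pitch: 0, zoom: 100 };

function nextViewState(view, key) {
  const v = { ...view };
  switch (key) {
    case "ArrowLeft":  v.yaw = (v.yaw - 10 + 360) % 360; break;
    case "ArrowRight": v.yaw = (v.yaw + 10) % 360; break;
    case "ArrowUp":    v.pitch = Math.min(90, v.pitch + 10); break;
    case "ArrowDown":  v.pitch = Math.max(-90, v.pitch - 10); break;
    case "+":          v.zoom = Math.min(400, v.zoom + 25); break;
    case "-":          v.zoom = Math.max(25, v.zoom - 25); break;
    case "Escape":     return { ...DEFAULT_VIEW }; // reset position and zoom
    default:           break; // Tab / Shift+Tab fall through to the browser
  }
  return v;
}
```

Enter/Space would not change the state; they would re-announce the current stage description through the Live Announcements element.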
WAI-ARIA Roles, States, and Properties
- The 3D Model has role application.
- The 3D Model has the aria-roledescription property set to 3d model.
- The 3D Model has the aria-label property set to provide an accessible name.
- Optionally, the aria-describedby property is set on the 3D Model to indicate how to interact with the component.
- The Live Announcement element has role status.
- The Full Screen control has an aria-pressed state. When the button is toggled on, the value of this state is true, and when toggled off, the state is false.
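Pulling the outline together, the attribute set could be expressed as the map a component might apply to its canvas and live region. The values follow the recommendations earlier in this post; the structure as a whole is an untested sketch:

```javascript
// Sketch: the WAI-ARIA attributes from the outline above, expressed as
// the attribute map a component might apply to its canvas and live
// region. Values follow this post's recommendations; untested.
function modelViewerAriaAttributes(accessibleName, hintId) {
  return {
    canvas: {
      tabindex: "0",
      role: "application",
      "aria-roledescription": "3d model",
      "aria-label": accessibleName,
      // Optional: points at hint text describing how to interact.
      ...(hintId ? { "aria-describedby": hintId } : {})
    },
    liveRegion: {
      role: "status",
      "aria-hidden": "true"
    }
  };
}
```

On initialization, each key/value pair would be applied with setAttribute to the corresponding element.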
Conclusion, for now
At this point we've dug deep into testing <model-viewer> in various screen reader environments, but this is only the tip of the assistive technology iceberg. We also need to cater to and address potential issues for:
- Sighted keyboard-only users
- Low-vision zoom users
- Braille readers
- Voice dictation software
- Switch access
- And more…
In order to produce an accurate and comfortable user experience for all, usability testing with real people is a must. (It's unfortunate this didn't take place pre-launch but this is on our radar moving forward.)
As creators and maintainers of the web, it's our responsibility to be mindful of people's needs, to avoid creating access barriers, and by extension, to eradicate ableist design.
With the testing results shown here, I believe the infrastructure does exist to provide accessible 3D models for people with disabilities. Both the Google and Shopify teams were happy and eager to receive the test results in order to work together on creating the most accessible user experience. I'm confident 3D model usability and accessibility will get better over time.