After sharing this on Twitter I've received fantastic feedback from a few folks who use screen readers regularly.
The main piece of feedback concerned consistency of content discovery across various modes of navigation:
> The main issue I'm having though is that with both JAWS and NVDA, the descriptions are only ever read if I traverse the links by pressing tab or shift + tab. They don't read if I arrow around with the virtual cursor or if I bring up a links list.
With this in mind, I created a secondary demo which was more positively received!
As with any user interface/experience concept, be sure to share with, test, and gather feedback from real people! 👍
Note to the reader…
Please note that this is just a concept and not something to use in production as-is.
When it comes to imagery on the web and accessibility, there are a bunch of different image concepts to consider. Concepts such as a "content" image, which helps describe a product, or a "decorative" image, which adds visual context to a graphical user interface.
These concepts help you decide how best to represent the image for assistive technology, which in turn conveys the image's meaning or purpose to people with disabilities.
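As a quick illustration of those two concepts (the filenames and alt text here are placeholders, not taken from the demo):

```html
<!-- "Content" image: it conveys information, so it gets descriptive alt text -->
<img src="product-photo.jpg" alt="Round birch side table with three black legs" />

<!-- "Decorative" image: purely visual, so an empty alt tells screen readers to skip it -->
<img src="section-divider.png" alt="" />
```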
What happens, though, when none of the concepts help describe your particular context? 🤔
The little image that couldn't
Consider the following content view for an online furniture shop:
The above screenshot showcases an item listing view. Each item displayed is made up of an anchor element. Each anchor includes:
- an embedded thumbnail image;
- the product title;
- the product vendor;
- and the price.
The issue here is the potential verbosity of each link when encountered by a screen reader. The title, vendor, and price are quite a bit of information to take in on their own; do we want to include the image description here, too?
This seems like it would be a lot of content to consume all at once.
One possible solution…
I think we can all agree that since the image is part of the link, having a thorough image description via the alt attribute would generate an overly verbose announcement.
Let's try to avoid overwhelming our users with too much information at once.
So what's the solution here? Perhaps we could remove the alt text description entirely and declare each image as "decorative?" With this, screen reader users could activate each link and go to the item landing page for a more accurate description, right?
Actually, are these "decorative" images? In this particular context of an item list, is the text description enough for someone to gain an understanding while navigating the list?
As a sighted user, I have the benefit of scanning the list of items with my eyes to quickly gain an understanding of what each product is and how it looks. If I see something that interests me, I can click the link and view the product further. If the item doesn't appeal to me, I can easily move on and see what else is in the list.
On the flip side, someone who relies on assistive technology, such as a screen reader, would only have a basic understanding of each item.
In the above example, the only "keywords" available to a screen reader user in each list item to describe the product are the product title, the vendor, and the price.
Is this really enough to gain a clear picture of the item? More importantly, is this an example of equality? A sighted user could gain an understanding by simply viewing the thumbnail image, but someone who's blind would only receive a basic text description. No, I'd say this is not an equal experience.
Blind users should have equal access to content which sighted users have access to.
In order for someone to gain a more thorough understanding of each item under the "decorative" image concept, they'd need to activate each link, navigate the landing page, and find the description. This would result in much more work for a screen reader user to get the full picture, whereas a sighted user could understand it from the listing view alone.
A new image concept to consider
So, we know if we include a detailed image alt text description, it would be too much information all at once. On the other hand, not including a description would create user experience disparity which we should also strive to avoid.
What can we do to alleviate the situation?
Let's consider a new image concept not documented (yet), something I call:
The "Complementary" Image
What—he said the title of the post in the post! 😱
The idea is to make the image description available, yet, have it placed as secondary content to the main anchor element content.
How do we accomplish this? Let's first look at a basic example of one of the current item links:
Current item example
```html
<a href="#">
  <img class="image" src="table.jpg" alt="Round table top, birch wood color. Table is supported by three legs, black color. Legs stem from the middle of the table outward towards the floor." />
  <span class="title">Key Side Table by GamFratesi</span>
  <span class="price">$179.00</span>
</a>
```
In this example we see the image description set within the alt attribute. This is great to have, but as we've discussed, it could generate an overly verbose experience for someone to consume all at once.
So what's involved with a "complementary" image?
Complementary image example
```html
<a href="#" aria-describedby="image-description">
  <img class="image" src="table.jpg" alt="" />
  <span class="title">Key Side Table by GamFratesi</span>
  <span class="price">$179.00</span>
</a>

<span id="image-description" hidden>
  Round table top, birch wood color. Table is supported by three legs, black color. Legs stem from the middle of the table outward towards the floor.
</span>
```
The difference here is:
- The alt attribute has been set to an empty value. With this, screen readers will treat the image as decorative and skip past it when traversing page content.
- A hidden container element has been added which will hold the image description content. This container also has a unique id applied.
- The aria-describedby attribute has been added to the a element with its value set to the image description container id.
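One detail worth noting for a full listing: since aria-describedby points at an id, each item needs its own unique description container. A sketch of two items side by side (the second product, its filename, and the numbered ids are hypothetical, not from the demo):

```html
<!-- Item 1: anchor points at its own description id -->
<a href="#" aria-describedby="image-description-1">
  <img class="image" src="table.jpg" alt="" />
  <span class="title">Key Side Table by GamFratesi</span>
  <span class="price">$179.00</span>
</a>
<span id="image-description-1" hidden>
  Round table top, birch wood color. Table is supported by three legs,
  black color. Legs stem from the middle of the table outward towards the floor.
</span>

<!-- Item 2: same pattern, with a unique id of its own -->
<a href="#" aria-describedby="image-description-2">
  <img class="image" src="chair.jpg" alt="" />
  <span class="title">Example Lounge Chair</span>
  <span class="price">$99.00</span>
</a>
<span id="image-description-2" hidden>
  A placeholder description for the second item's image.
</span>
```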
With this markup, the image description will still be announced, but it will be secondary, or "complementary," to the current context: a listing of links.
In other words, the image text description is read after the primary link text. This solution now allows the user to hear the primary link text and the complementary image description text when navigating the link list, generating an equal user experience! 💯
What's great about this particular solution, too, is that some screen readers will add a pause in between the primary link text and the complementary image description when it's read aloud via the aria-describedby attribute.
Check out the demo in action!
Try loading up a screen reader and listen to how each of the links in the listing sound! 🎧
What do you think of this solution? Is the "complementary" image concept something you'd consider using in your current or next project? Let me know!
Happy hacking! 😄⌨️🚀