Digital-product designs do not become accessible by accident. It takes diligent planning and testing to improve the experience for users who rely on assistive technology (and note that there is a difference between accessibility and usability for users with disabilities). Unfortunately, it can be more difficult to demonstrate a strong return on investment for accessibility improvements than for broader usability efforts, because users with disabilities often make up only a small percentage of a product's target audience. However, it's exciting to see effort being put into accessibility because it is the right thing to do, not simply because it can make companies more money.
One of the most important steps in making sure any design works well is to test it with real users who are representative of the intended audience. For accessibility, this means testing with users with disabilities who rely on assistive technology. However, testing with users with disabilities can be intimidating for teams who feel they lack the expertise or budget to conduct these types of tests. This article outlines a few suggestions for planning and conducting mobile-usability research with users who are blind or have low vision, inspired by a recent round of in-person testing with such users. Although accessibility testing can be challenging in some ways, it has been some of the most insightful and rewarding research we have ever conducted.
Recruiting Screen-Reader Users
Although there are thousands of people who are blind or have low vision and rely daily on assistive technology, these users are not well represented in common recruiting databases and can thus be difficult to find. (The same problem occurs when recruiting minors or high-income participants.) Regardless, many users with disabilities are excited about opportunities to improve the accessibility of technology because it is such a constant challenge in their lives.
Specialized recruiting services for users with disabilities can be very expensive. In contrast, recruiting through word of mouth can be surprisingly effective and fairly cheap. By recruiting through word of mouth, our recent study cost less than half the amount that a major accessibility-recruiting agency quoted us, and we were still able to compensate each participant generously. Additionally, in almost every case, participants were quick to offer a follow-up meeting if needed and to refer someone else who is blind or has low vision.
Our first contact was with the president of the local chapter of the National Federation of the Blind (NFB). (Since our study was in person, it was essential to find participants living in our area. If your study is remote, you may consider contacting several different chapters of similar organizations.) He was happy to help us connect with members of the local NFB chapter. He also provided some insight as to the level of experience each individual had with technology and their level of vision.
If you want to conduct regular accessibility tests, we recommend building relationships with local individuals or organizations. Consider creating your own panel of users who can periodically participate in tests as needed. This approach will help you operationalize your accessibility-research methods so that you can efficiently and inexpensively conduct these types of tests at scale. As one user in our study mentioned, “I always try to get developers to hire blind people that they can work with because that's the best!”
Relationships with assistive-technology users could take the form of officially contracted employees or simply a database of individuals who have agreed to participate in research when available. This approach offers multiple advantages:
You are not limited in the number of tests you can conduct at any given time.
You do not pay recurring recruiting or overhead costs (you only pay incentives); therefore, your overall costs will be low and you will be able to compensate participants fairly.
You support local individuals with disabilities, who are often grateful for the chance to be compensated for meaningful work that comes because of a disability, not despite it.
You can easily find research participants who use assistive technologies whenever you need them.
There are, however, some drawbacks to this method. It’s possible that you will introduce a selection bias because you are likely to make contact with the most actively engaged members of an organization or community. Additionally, if you test your designs repeatedly with the same set of participants, they may become subject to the same biases that employees are prone to when they test their own company’s designs. Keep those limitations in mind when creating your own participant panel, but realize that this approach is still better than conducting no accessibility tests at all.
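If you do build such a panel, even a very lightweight record of each person's assistive technology and devices will save time when planning future studies. Below is a minimal sketch of what one entry might look like; the fields are illustrative assumptions rather than a prescribed schema, and any data like this should be stored in line with your consent agreements and privacy obligations.

```python
from dataclasses import dataclass, field

@dataclass
class PanelMember:
    """One record in a lightweight accessibility-research participant panel.

    All field names here are hypothetical examples, not a required schema.
    """
    name: str
    contact: str                                         # email or phone, per the member's preference
    assistive_tech: list = field(default_factory=list)   # e.g., ["VoiceOver", "Bluetooth Braille display"]
    devices: list = field(default_factory=list)          # e.g., ["iPhone", "Android phone"]
    vision: str = ""                                     # self-described, free text
    last_session: str = ""                               # ISO date of most recent participation
    open_to_in_person: bool = True

# Hypothetical example entry
panel = [
    PanelMember(
        name="Example Participant",
        contact="example@example.com",
        assistive_tech=["TalkBack"],
        devices=["Android phone"],
        vision="low vision",
    ),
]
```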
Choose In-Person over Remote Research
Whenever possible, we recommend conducting accessibility research in person. In-person sessions can make things easier for the participants (especially if they are conducted in the participants’ homes) and can also offer insights into how participants hold and control their devices through direct observation. Here are a few examples of benefits we gained and issues we avoided by conducting our research in person rather than remotely:
Benefits of In-person Accessibility Research
Reliance on vision. Not all our participants were fully blind. If we had been on a video call, it would have been impossible to see how some participants occasionally held the device close to their face to take advantage of their limited vision.
Braille-device usage. Had we not been there in person, we would not have been able to see when and how some participants controlled their devices with an external Bluetooth Braille keyboard.
Screen-reader gestures. Screen readers support many gestures for controlling different functions of the device, none of which would have been visible on a video call.
Body language and facial reactions. We learned much about how our participants felt about various interactions through their physical reactions. This information was particularly helpful, given that they were limited in how much they could think aloud while the screen reader was speaking.
Clear screen-reader output. Sharing the sound through video-conferencing software often distorts or fully blocks the clear audio output of a screen reader, particularly when a participant is trying to speak over the top of it (which happened frequently).
Bypassing accessibility issues of video-conferencing software. We used video-conferencing software to record the mobile screen in each session. This software proved extremely confusing and challenging for participants; it was good that we were sitting right there to help out.
Conducting screen-reader research in person will undoubtedly be the smoothest and most insightful experience. In our case, we wanted to observe how users who are blind or have low vision use a mobile device in a setting as realistic as possible. However, for more routine accessibility research focused on specific parts of a design, remote sessions can still be very valuable. Just bear in mind the many potential challenges and limitations of relying on a participant's ability to successfully join and navigate a video-conferencing session. It is critical for the facilitator to be prepared to assist the participant when issues inevitably arise.
Don’t Make Participants Come to You
Additionally, whenever possible, go to participants' homes or to a location that is familiar to them, instead of having them travel to your lab. There are several advantages to doing so:
Participants won’t have to navigate new areas of a city. As you might imagine, it can be extremely challenging for individuals who are blind or have low vision to visit unfamiliar locations, such as a lab.
You can learn about the context in which the participants use their devices. For example, it was interesting to see that several participants leave the lights off all the time (we had to ask if we could turn them on).
Participants are comfortable and talkative in their own homes. As we sat in the homes of these participants, we noticed that they were obviously comfortable, which made it easy for them to share their insights. In contrast, they may feel more nervous in an unfamiliar environment or on a video call.
Planning the Research
Use printed notes and plans. If you conduct the session in participants' homes, you will likely not have access to multiple monitors and will be limited to your laptop's screen. Depending on how you plan to conduct the session, viewing what participants are doing on their device and recording the session can easily consume your laptop and occupy your one screen. This will restrict your ability to access documents such as your notes, research plan, protocol, tasks, or questions while still observing what the user is doing. We recommend printing your research documents and being prepared to take some notes by hand. This form of notetaking is also quieter and less distracting for individuals who rely almost completely on their hearing.
Plan extra time. Setup took longer than normal in our tests because we were working in a new environment for each session and because participants could easily become lost or confused during the setup steps. It took at least 15 minutes to initially set up with each participant. It was also common to have to repeat parts of the setup process during a session (such as resharing the screen on the participant's smartphone because they had accidentally exited). If you only plan 90 minutes for a 90-minute session, you will likely end up with only 60 minutes of productive time. You might need at least 30 extra minutes beyond the planned length of your session.
Be familiar with a screen reader. Just as facilitators need to be familiar with the tasks used in the study, they also need to understand how a screen reader works on a phone (and how it will behave during the study's tasks). First, this knowledge will allow them to follow what the participant is doing during the session. Second, it will enable them to help participants with logistical tasks such as sharing their phone's screen or downloading an application. (A small sketch for practicing with TalkBack on your own test device follows these tips.)
Have participants use their own device. It will be almost impossible to gather accurate data if a participant must use a device other than their own. They have become so familiar with where things are located and the customizations of the screen reader that it would be a major hindrance to use a device provided by the researcher.
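One way to build that familiarity is to practice on your own test device before any sessions. On iOS, you can turn VoiceOver on in Settings > Accessibility; on Android, TalkBack can also be toggled from a computer over adb. The sketch below shows the adb route and assumes adb is installed, a test device is connected with USB debugging enabled, and TalkBack's service-component name matches the one shown (it can vary by device and version, so verify it on your device first).

```python
import subprocess

# TalkBack's accessibility-service component name; verify it on your device
# (for example, by enabling TalkBack by hand once and running
# `adb shell settings get secure enabled_accessibility_services`).
TALKBACK = (
    "com.google.android.marvin.talkback/"
    "com.google.android.marvin.talkback.TalkBackService"
)


def enable_talkback() -> None:
    """Turn TalkBack on for the connected Android test device via adb."""
    subprocess.run(
        ["adb", "shell", "settings", "put", "secure",
         "enabled_accessibility_services", TALKBACK],
        check=True,
    )
    subprocess.run(
        ["adb", "shell", "settings", "put", "secure",
         "accessibility_enabled", "1"],
        check=True,
    )


def disable_talkback() -> None:
    """Turn TalkBack back off when you are done practicing."""
    subprocess.run(
        ["adb", "shell", "settings", "delete", "secure",
         "enabled_accessibility_services"],
        check=True,
    )


if __name__ == "__main__":
    enable_talkback()
```

Spending even an hour navigating familiar apps this way makes it much easier to follow a participant's gestures and the screen reader's output during sessions.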
Recording the Session
It is essential to record sessions involving participants who are using a screen reader. We always recommend recording all research sessions, if possible, but a study involving a screen reader has more moving parts than the average study. Not only is it important to hear what the participant is saying and see what they are doing on the device, it is also critical to record the screen reader's output and how the screen reader's focus moves across the screen. Much of this happens very quickly, and it is nearly impossible to keep track of everything (the screen reader's output and the participant's actions) in the moment, particularly if you do not have extensive experience with a screen reader.
A laptop, document camera, smartphone, sticky notes, pen, and printed materials
For our recent study, we visited all study participants in their homes or workspaces. We took with us a laptop, a document camera, an external hard drive, printed tasks and study plans, sticky notes, and a pen.
We recorded the following elements of our sessions:
The participant's screen: a video call that the participant joined on their mobile device and in which they shared their screen
The participant's hands holding the device: a document camera
The participant's comments: the document camera's microphone
The facilitator's comments: the document camera's microphone
The screen reader's audio output: the document camera's microphone
This setup was similar to the one we use in other mobile-testing studies. The participant's shared screen and the video feed from the document camera were positioned side by side on the facilitator's screen. Both video feeds were combined in real time using screen-recording software running on the facilitator's laptop.
We recorded a screen capture of the participants’ shared screen (left) and the document camera feed (right) so we could see a detailed view of what was happening on the screen while observing the participants’ gestures.
Recording both the shared screen and the document-camera feed was particularly helpful when participants preferred to hold their device rather than leave it flat on the table.
Here are some tips for recording the session:
Record the audio of the screen reader. What the screen reader says is a critical part of how users experience a design. Just as it is important to record how someone taps on the screen, it is important to record what the screen reader says in response. You will need a microphone placed fairly close to the device to clearly pick up the screen reader's speech (especially at high speeds). We used the microphone built into the document camera because it was right above the device.
Record the screen. The idiosyncrasies of using a screen reader caused participants to frequently adjust the way they held their devices. This made it difficult to ensure that the document camera could clearly capture what was happening on the screen. Visiting participants in their homes also made it difficult to control lighting conditions for filming. Recording the screen in addition to the document camera was essential for capturing what happened during the sessions.
Combine the recording of the screen and that of the participant's gestures into a single video. During analysis, it is very challenging to synchronize and play back two separate videos (one screen recording and one document-camera feed) in a way that fluidly demonstrates what happened. Baking the two feeds into one recording (we used a screen capture on the facilitator's laptop) makes analysis much easier. (If your tools leave you with two separate files, see the sketch after these tips for one way to combine them afterward.)
Use an external storage device. Depending on how you record, the video files can end up being very large. Make sure you have enough storage capacity to store all the data you collect — particularly if you will be conducting multiple sessions before you have a chance to offload data from your computer.
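In our setup, the two feeds were composited in real time by screen-recording software on the facilitator's laptop. If your tools instead leave you with two separate files (one screen recording and one document-camera clip), one option is to stitch them side by side afterward with ffmpeg. This is only a sketch: it assumes ffmpeg is installed, the file names are hypothetical, and it keeps the document camera's audio track because that microphone captured the screen reader, the participant, and the facilitator.

```python
import subprocess
from pathlib import Path


def combine_feeds(screen_share: Path, doc_camera: Path, output: Path) -> None:
    """Place the shared-screen recording and the document-camera feed side by
    side in one video, keeping the document camera's audio track."""
    filter_graph = (
        "[0:v]scale=-2:720[left];"    # scale both feeds to the same height
        "[1:v]scale=-2:720[right];"   # so hstack can join them
        "[left][right]hstack=inputs=2[v]"
    )
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", str(screen_share),
            "-i", str(doc_camera),
            "-filter_complex", filter_graph,
            "-map", "[v]",            # the combined side-by-side video
            "-map", "1:a",            # audio from the document camera
            "-c:v", "libx264", "-crf", "23",
            "-c:a", "aac",
            str(output),
        ],
        check=True,
    )


if __name__ == "__main__":
    # Hypothetical file names for one participant's session
    combine_feeds(
        Path("p01_screen.mp4"),
        Path("p01_doccam.mp4"),
        Path("p01_combined.mp4"),
    )
```

If your screen-recording software already produces a single combined file, as ours did, you can skip this step entirely.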
Setup Tips
Take charge of introductions. While you must respect someone's space, it is helpful to clearly explain where you are physically and what you are doing. Say things like, “I’d love to shake your hand,” or “I’m just going to sit over here by the wall” so that people are aware of what’s going on.
Tell participants what equipment you will be using. It’s considerate to briefly explain what equipment you will be using, where you are placing it, and what it is for. In our case, we tested in participants’ homes. We made sure to mention that we had brought a laptop that was connected to a document camera. We explained the purpose of the document camera, and even offered to let participants touch it so they could better understand what was sitting on the table in front of them. Help the participant get into a comfortable position, and then position your equipment around them.
Have a backup plan for internet. Depending on how you plan to record the session and the types of tasks you will ask users to perform, it is likely that you will need a strong internet connection. Ideally, you can request permission during setup to connect to the participant's home WiFi. However, if that connection has issues (as it did for us in multiple cases), you will need to be prepared to help participants connect to an alternative internet source.
Facilitation Tips
Provide timing updates. Participants who cannot see well cannot quickly glance at a clock to check how much time has passed. Because participants have allowed you to take charge of the session, they are relying on you to keep track of time. Depending on how long the session is, it’s nice to give people updates periodically to help them gauge how much longer the session will last. They are also relying completely on you to end the session on time.
Set the expectation to think aloud. In most user tests, participants can easily share their thoughts as they browse an interface. However, screen-reader users must compete with the audio output of the screen reader, which speaks constantly as they do anything in the design. Additionally, users cannot be expected to talk while the screen reader is speaking, because they must listen to understand what is going on. Emphasize that you need the participant's help to get a sense of what they think about the design, but be patient as they process what they are hearing before they pause to share insights with you.
Give verbal confirmations. In sessions with sighted users, it is typical to nod and use body language to show you are listening and understanding what participants are sharing. In fact, it's often best to intentionally stay silent so that participants have a chance to think and speak without being biased by the researcher's comments. With users who are blind or have low vision, everything must be audible. It can be confusing or disconcerting for a participant if they cannot tell whether you are still listening and understanding what they are saying (much like in remote-testing scenarios where participants cannot see you on camera). Simple, neutral verbal cues such as "mmm" or "I understand" can be substituted for body-language cues such as focusing on the screen, making eye contact, or nodding to show you are following. However, don't be afraid of a little silence to prompt them to keep going.
Avoid using the word "see." It is common for researchers to ask participants their thoughts about what they are "seeing" as they use an interface (e.g., "What do you see?", "What do you think about what you see?"), because it is an easy way to stay neutral without referring to anything specific on the screen. Obviously, users who are blind or have low vision may see little or nothing. Try to use words like "find," "come across," "run into," and "search for," which apply equally well to these users.
Strive for authentic screen-reader speeds. Screen-reader users often set their screen readers to speak very quickly; this high speed can pose a challenge for sighted researchers who do not normally use a screen reader and have trouble following its output. To capture behavior as it naturally occurs (and preserve the study's external validity), encourage participants to keep their screen reader at their preferred speed. It is not always critical to understand everything the screen reader says in the moment; if you capture a high-quality recording, you can go back to it later. If you really struggle to follow along, you can ask participants to slow the speech down, but avoid such requests as much as you can to preserve authenticity.
Document-camera adjustments. Screen-reader users often hold their phones differently from sighted users. While picking up a document camera and holding it at odd angles would be distracting for sighted users, participants who do not rely as much on sight will generally not mind if you move the camera around as they change the way they hold their phones. In some cases, we lifted the document camera to better capture what was on the screen because the participant had unknowingly moved their hands out of the camera's view. This was not at all bothersome to the participant.
Explain what you are doing. While adjusting your equipment to fully capture interactions might not bother participants, if they do hear you moving things, it is considerate to explain what you are doing. It can make the participant nervous to hear things being moved around without knowing what's going on, particularly if you are in their home.
Participants often changed the way they were holding devices throughout the session. We needed to constantly readjust the camera to capture what was happening. In this photo, the participant is typing with an on-screen Braille keyboard.
This participant held the phone with one hand at an odd angle and swiped to control the screen reader with the other. We needed to adjust the camera angle multiple times to capture these actions while allowing the participant to use their device normally.
Conclusion
Conducting research with users who are blind or have low vision requires a little extra planning and adaptation compared to typical tests, but it is critical for truly improving the accessibility of designs. Organic, grassroots recruiting methods can be very effective and inexpensive. Think through any parts of your research setup or tasks that might be difficult for users who cannot see, and plan to accommodate those needs. Don’t be afraid to adapt on the go.