Meta’s announced some additional accessibility and user support features, including audio explainers in Ray-Ban Meta glasses, sign-language translation in WhatsApp, wristband interaction developments, and more.
First off, Meta’s rolling out expanded descriptions in Ray-Ban Meta glasses, which will help wearers get a better understanding of their environment.

As explained by Meta:
“Starting today, we’re introducing the ability to customize Meta AI to provide detailed responses on Ray-Ban Meta glasses based on what’s in front of you. With this new feature, Meta AI will be able to provide more descriptive responses when people ask about their environment.”
That’ll give people with varying levels of vision more ways to understand their surroundings, with audio explanations delivered straight into their ear on request.
It could also make Meta’s smart glasses an even more appealing product for an expanding range of users. The addition of on-demand AI helped to boost sales of the device, and these kinds of assistive functions will broaden its audience further.
Meta says that it’s rolling this out to all users in the U.S. and Canada in the coming weeks, with additional markets to follow.
“To get started, go to the Device settings section in the Meta AI app and toggle on detailed responses under Accessibility.”
Meta’s also adding a new “Call a Volunteer” feature in Meta AI, which will connect blind or low vision individuals to a network of sighted volunteers in real time to provide assistance with tasks.
On another front, Meta’s also pointed to its ongoing work on sEMG (surface electromyography) interaction via a wristband device, which uses electrical signals from your muscles to facilitate digital interaction.
Meta’s been working on wrist-controlled functionality for its coming AR glasses, and that’ll also enable greater accessibility.
Meta says that it’s currently building on its advances with its wrist interaction device:
“In April, we completed data collection with a Clinical Research Organization (CRO) to evaluate the ability of people with hand tremors (due to Parkinson’s and Essential Tremor) to use sEMG-based models for computer controls (like swiping and clicking) and for sEMG-based handwriting. We also have an active research collaboration with Carnegie Mellon University to enable people with hand paralysis due to spinal cord injury to use sEMG-based controls for human-computer interactions. These individuals retain very few motor signals, and these can be detected by our high-resolution technology. We are able to teach individuals to quickly use these signals, facilitating HCI as early as Day 1 of system use.”
The applications here could be significant, with Meta making progress on improved wristband interaction devices that could one day enable digital interaction for people with limited movement.
Finally, Meta’s also pointed to the evolving use of its AI models for new assistance features, including “Sign-Speak,” a tool developed by a third-party provider that enables WhatsApp users to translate speech into sign language (and vice versa) via AI-generated video clips.

That could end up being another advance in connection, facilitating more engagement among users with disabilities.
Some valuable projects, with far-reaching implications.
You can read more about Meta’s latest accessibility advances here.