cleARsight for Magic Leap is a spatial computing accessibility application designed to improve the daily lives of individuals with low vision.
AR UX Engineer / Technical Artist
According to the National Center for Health Statistics, 26 million American adults experience significant vision loss, creating challenges with depth perception, low-light scenarios, and proprioception.
We decided to create an application that uses spatial computing technology to help low vision individuals improve their daily lives. Our team consists of one project manager, two software developers, and two UX engineers. Since the application is designed for low vision and blind individuals, it is essential to rely not only on sight, but also on touch and hearing.
What do low vision and blind individuals need help with? And how can an app help them?
User research was a crucial step in understanding their needs. One of our team members had experience working with our target users and provided us with data and insights. Below are the top three areas where they need help the most.
Depth sensing & Object outlining
After gaining an understanding of their needs, I started thinking about how to create an application that uses sight, touch, and hearing to help low vision and blind individuals.
Fortunately, Magic Leap One offers many capabilities. The team decided to use its spatial audio and the controller's haptic feedback as a virtual cane, while adding outlines and color overlays to hazardous objects. Finally, users should be able to place memos in space and play them back at any time.
To test our ideas, I developed a prototype to verify the hypothesis. In the first iteration of the virtual cane, pulling the trigger shoots a sound wave wherever the controller is pointing, and the controller vibrates when the wave hits a wall or an object.
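The first-iteration interaction boils down to a per-pulse check. Here is a minimal sketch in Python, assuming a hypothetical `hit_distance` supplied by a raycast against the scanned world mesh (the real app runs in Unity on Magic Leap One; the names here are illustrative only):

```python
def cane_pulse(trigger_pulled, hit_distance):
    """First iteration: one input, one feedback.

    trigger_pulled -- did the user pull the trigger this frame?
    hit_distance   -- distance (meters) to the first surface the sound
                      wave hit, or None if it hit nothing.
    Returns True when the controller should fire a single haptic buzz.
    """
    # Vibrate only on an explicit trigger pull that actually hit something.
    return trigger_pulled and hit_distance is not None
```

Written this way, the limitation is plain: no trigger pull means no feedback at all.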
Meanwhile, I created a shader that outlines the edges of every object Magic Leap picks up, so that users can tell what is around them.
However, I soon realized that this requires the user to constantly pull the trigger, which becomes tiring quickly. Moreover, the user only receives feedback when providing input, which can break their flow. The outlines were also far too messy: drawn over every object, including the ground, they defeated the whole point of showing contrast.
To solve the problems found in the first iteration, I made a second attempt in which I changed the input model from one input, one feedback to one input, continuous feedback.
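One way to realize one input, continuous feedback is to drive haptic intensity from distance on every frame, rather than pulsing once per trigger pull. A minimal sketch (the 4-meter range and linear falloff are hypothetical choices for illustration, not values from the original design):

```python
def cane_intensity(hit_distance, max_range=4.0):
    """Continuous feedback: map distance to haptic strength in [0, 1].

    Closer obstacles vibrate harder; beyond max_range (a hypothetical
    cutoff) the cane stays silent. Intended to be called every frame,
    so feedback no longer depends on a fresh trigger pull.
    """
    if hit_distance is None or hit_distance >= max_range:
        return 0.0
    return 1.0 - hit_distance / max_range
```

Because the intensity updates continuously, the user gets a steady sense of approaching surfaces without repeated input.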
For the outlines, I made sure the user only sees outlines of objects within 6 meters of their position, and that no outline is drawn for the ground.
At the same time, a color is overlaid on top of hazardous objects such as bags and wires on the ground.
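Taken together, the second-iteration rules (outline only within 6 meters, skip the ground, overlay hazards) amount to a small per-object decision. A sketch assuming hypothetical `is_ground` and `is_hazard` flags standing in for whatever the mesh-classification pipeline actually provides:

```python
import math

OUTLINE_RANGE_M = 6.0  # from the second iteration: no outlines past 6 m

def display_for(obj, user_pos):
    """Return 'overlay', 'outline', or None for a detected object.

    obj is a dict with 'position' (x, y, z) plus 'is_ground' and
    'is_hazard' flags -- hypothetical fields for illustration.
    """
    if obj["is_ground"]:
        return None                       # never outline the ground
    if math.dist(obj["position"], user_pos) > OUTLINE_RANGE_M:
        return None                       # too far away to matter
    if obj["is_hazard"]:
        return "overlay"                  # e.g. bags and wires on the floor
    return "outline"
```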
The user can then use the bumper button to switch between different display modes.
Outline and overlays
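Bumper cycling can be as simple as stepping through a list of modes. The exact mode set below is a guess, since the write-up only names outlines and overlays:

```python
# Hypothetical display-mode set; only "outlines" and "overlays" are
# named in the write-up, the combined mode is assumed.
MODES = ("outlines", "overlays", "outlines+overlays")

def on_bumper(current_mode):
    """Advance to the next display mode, wrapping around at the end."""
    return MODES[(MODES.index(current_mode) + 1) % len(MODES)]
```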
Things look a lot better now.
What about memos?
During the design of the virtual cane and object outlining, we never gave much thought to memos.
We have a problem
The trigger and bumper are already used for other features, leaving only the touchpad and home button. How can we incorporate the memo system alongside the other features so that the learning curve for the entire application stays short and easy?
Press the home button to record a memo; press it again to stop.
Most of the time, users assume they have to hold the home button to record and release it to stop, and holding the home button actually quits the app.
Swipe up on the touchpad to start recording; swipe down to stop.
Users get confused by the swipes, since the gesture is not as simple as a button press.
Keep the controls from attempt 2, but add a tutorial mode: pressing the home button enters it and displays all the control instructions.
Users start in tutorial mode, learning the controls before they act.
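Attempt 3 can be summarized as a small state machine: home toggles the tutorial, and the attempt-2 swipes control recording outside of it. This sketch makes one assumption the write-up does not state, namely that swipes are ignored while the tutorial is showing:

```python
class MemoController:
    """Attempt 3: attempt-2 swipe controls plus a tutorial mode."""

    def __init__(self):
        self.tutorial = True    # users start in tutorial mode and learn first
        self.recording = False

    def on_home(self):
        # Home now toggles the tutorial instead of quitting the app.
        self.tutorial = not self.tutorial

    def on_swipe_up(self):
        # Assumption: swipes do nothing while the tutorial is up.
        if not self.tutorial:
            self.recording = True

    def on_swipe_down(self):
        self.recording = False
```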