
cleAR sight.

cleAR sight for Magic Leap is a spatial computing accessibility application designed to improve the daily lives of individuals with low vision.

Role: AR UX Engineer / Technical Artist

Duration: 3 Days

Team Size: 5

According to the National Center for Health Statistics, 26 million American adults experience significant vision loss, creating challenges with depth perception, low-light scenarios, and proprioception.

We decided to create an application using spatial computing technology to help low vision individuals improve their daily lives. Our team consisted of 1 project manager, 2 software developers, and 2 UX engineers. Since the application is designed for low vision and blind individuals, it is essential not to rely on sight alone, but also on touch and hearing.

the problem

What do low vision and blind individuals need help with? And how can an app help them?

user research

To understand their needs, user research was a crucial step. One of our team members had experience working with our target users and provided us with data and insights. Below are the top 3 areas where they need help the most.

  • Depth sensing & Object outlining

  • Virtual Cane

  • Memos

After gaining an understanding of their needs, I started thinking about how to create an application that uses sight, touch, and hearing to help low vision and blind individuals.

our hypothesis

Fortunately, Magic Leap One offers many capabilities, and the team decided to use its spatial audio and the controller's haptic feedback as a virtual cane, while adding outlines and color overlays to hazardous objects. Finally, users should be able to place memos in space and play them back at any time.

first iteration

To test our ideas, I developed a prototype to verify the hypothesis. In the first iteration of the virtual cane, pulling the trigger shoots a sound wave toward wherever the controller is pointing, and the controller vibrates when the sound wave hits a wall or an object.
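As a rough illustration of that interaction logic, here is a minimal, self-contained sketch. The type and helper names (ControllerState, castRay, pulseHaptics) are hypothetical stand-ins for engine/SDK services, not the actual Magic Leap API, and the "world" is reduced to a single wall.

    // First-iteration virtual cane: one trigger pull -> one feedback pulse.
    #include <iostream>
    #include <optional>

    struct Vec3 { float x, y, z; };

    struct ControllerState {
        Vec3 position{0.0f, 1.0f, 0.0f};   // controller pose in world space
        Vec3 forward{0.0f, 0.0f, 1.0f};    // direction the controller points
        bool triggerPressed = false;
    };

    // Toy raycast: the only obstacle is a wall 3 meters ahead on the z axis.
    std::optional<float> castRay(const Vec3& origin, const Vec3& dir, float maxDist) {
        const float wallZ = 3.0f;
        if (dir.z <= 0.0f) return std::nullopt;        // pointing away from the wall
        float dist = (wallZ - origin.z) / dir.z;
        if (dist < 0.0f || dist > maxDist) return std::nullopt;
        return dist;
    }

    void pulseHaptics(float intensity) {
        std::cout << "haptic pulse, intensity " << intensity << "\n";
    }

    // Nothing happens unless the trigger is pulled.
    void updateVirtualCane(const ControllerState& c) {
        if (!c.triggerPressed) return;
        if (auto dist = castRay(c.position, c.forward, 5.0f)) {
            pulseHaptics(1.0f - *dist / 5.0f);         // closer hit -> stronger pulse
        }
    }

    int main() {
        ControllerState c;
        c.triggerPressed = true;
        updateVirtualCane(c);                          // hits the wall -> pulse
    }

The important part is the gate on triggerPressed: no trigger pull, no feedback, which is exactly what the later iterations revisit.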

Meanwhile, I created a shader that outlines the edges of all objects Magic Leap picks up, so that users can tell what is around them.
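The real outline effect ran as a shader over the geometry Magic Leap meshes. As an assumption about how such an effect typically works (not necessarily the exact technique used here), the sketch below runs the same depth-discontinuity idea on the CPU over a toy depth buffer, marking a pixel as an edge wherever depth jumps sharply.

    // CPU sketch of depth-discontinuity outlining over a toy depth buffer.
    #include <cmath>
    #include <iostream>
    #include <vector>

    int main() {
        const int w = 8, h = 8;
        // Background at 5 m, with a box occupying the center at 2 m.
        std::vector<float> depth(w * h, 5.0f);
        for (int y = 2; y < 6; ++y)
            for (int x = 2; x < 6; ++x)
                depth[y * w + x] = 2.0f;

        const float threshold = 0.5f;  // depth jump that counts as an edge
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                float d = depth[y * w + x];
                float right = (x + 1 < w) ? depth[y * w + x + 1] : d;
                float down  = (y + 1 < h) ? depth[(y + 1) * w + x] : d;
                bool edge = std::abs(d - right) > threshold ||
                            std::abs(d - down)  > threshold;
                std::cout << (edge ? '#' : '.');   // '#' marks an outline pixel
            }
            std::cout << '\n';
        }
    }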

However, I soon realized that this requires the user to constantly pull the trigger, which gets tiring over time. Moreover, the user only receives feedback when providing input, which can break the flow. The outlines were also far too messy: the shader drew over every object, including the ground, which defeated the purpose of showing contrast.

second iteration

To solve the problems found in the first iteration, I made another attempt and changed the input mode from one input, one feedback to one input, continuous feedback.
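A minimal sketch of that change, assuming a 60 Hz update loop and a hypothetical distance-to-pulse-interval mapping: the cane no longer waits for a trigger pull and instead pulses continuously, faster as an obstacle gets closer.

    // Continuous feedback: pulse rate scales with obstacle distance.
    #include <algorithm>
    #include <iostream>

    // Map obstacle distance (meters) to seconds between haptic pulses:
    // 0.5 m -> rapid pulsing, 5 m -> slow pulsing.
    float pulseInterval(float distance) {
        return std::clamp(distance / 5.0f, 0.1f, 1.0f);
    }

    int main() {
        float timeSinceLastPulse = 0.0f;
        const float dt = 1.0f / 60.0f;           // 60 Hz update loop
        float obstacleDistance = 4.0f;           // pretend the user walks toward a wall

        for (int frame = 0; frame < 600; ++frame) {   // simulate 10 seconds
            obstacleDistance = std::max(0.5f, obstacleDistance - 0.5f * dt);
            timeSinceLastPulse += dt;
            if (timeSinceLastPulse >= pulseInterval(obstacleDistance)) {
                std::cout << "pulse at " << obstacleDistance << " m\n";
                timeSinceLastPulse = 0.0f;
            }
        }
    }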

For the outlines, I made sure the user only sees outlines on objects within 6 meters of their position, and no outline is drawn on the ground.

At the same time, a color is overlaid on top of hazardous objects such as bags and wires on the ground.

The user can then use the bumper button to cycle between the display modes below (sketched after the list):

  • Outline and overlays

  • Only outlines

  • Only overlays

  • None
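Here is a small sketch of that mode cycle, with a hypothetical DisplayMode enum and a simulated bumper press standing in for the real controller event.

    // Bumper-driven display mode cycle (hypothetical enum and handler).
    #include <iostream>

    enum class DisplayMode { OutlinesAndOverlays, OutlinesOnly, OverlaysOnly, None };

    DisplayMode nextMode(DisplayMode m) {
        switch (m) {
            case DisplayMode::OutlinesAndOverlays: return DisplayMode::OutlinesOnly;
            case DisplayMode::OutlinesOnly:        return DisplayMode::OverlaysOnly;
            case DisplayMode::OverlaysOnly:        return DisplayMode::None;
            case DisplayMode::None:                return DisplayMode::OutlinesAndOverlays;
        }
        return DisplayMode::None;  // unreachable, keeps compilers happy
    }

    int main() {
        DisplayMode mode = DisplayMode::OutlinesAndOverlays;
        for (int press = 1; press <= 5; ++press) {     // simulate 5 bumper presses
            mode = nextMode(mode);
            std::cout << "bumper press " << press
                      << " -> mode " << static_cast<int>(mode) << "\n";
        }
    }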

Things looked a lot better now.

third iteration

What about memos?

During the design of the virtual cane and object outlining, we never gave much thought to memos.

We have a problem

The trigger and bumper are already used for other features; only the touchpad and home button are left. How can we incorporate the memo system with the other features so that the learning curve for the entire application stays short?

attempt 1:

Press the home button to record a memo and press it again to stop.

outcome:

Most of the time, users assumed they had to hold the home button to record and release it to stop, and holding the home button actually quits the app.

attempt 2:

Swipe up on the touchpad to start recording, swipe down to stop recording.

outcome:

Users got confused by the swiping, since the gesture is not as simple as a button press.

attempt 3:

Keep the controls from attempt 2, but add a tutorial mode, entered by pressing the home button, that shows all control instructions.

outcome:

Users start in tutorial mode and learn the controls before they act.
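Putting attempt 3 together, here is a hedged sketch of the final memo controls; the event names and AppState fields are hypothetical, but the behavior mirrors the design: the app starts in tutorial mode, the home button toggles the tutorial, and touchpad swipes start and stop recording.

    // Final memo control mapping from attempt 3 (hypothetical event names).
    #include <iostream>
    #include <string>

    struct AppState {
        bool inTutorial = true;   // users start here and learn the controls first
        bool recording  = false;
    };

    void handleInput(AppState& s, const std::string& event) {
        if (event == "home_press") {
            s.inTutorial = !s.inTutorial;
            std::cout << (s.inTutorial ? "tutorial shown\n" : "tutorial hidden\n");
        } else if (event == "swipe_up" && !s.recording) {
            s.recording = true;
            std::cout << "memo recording started\n";
        } else if (event == "swipe_down" && s.recording) {
            s.recording = false;
            std::cout << "memo saved at current position\n";
        }
    }

    int main() {
        AppState state;
        for (const auto& e : {"home_press", "swipe_up", "swipe_down"})
            handleInput(state, e);
    }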

final product

CleAR Sight was mentioned in the Boston Globe!

what's next?

We'd like to continue extending the feature set of this application and bolster its usability with additional user testing and integrations, further improving its offering to low vision individuals.

 

Some examples are:

Dynamic Object Recognition

  • Spatial Mapping data persistently analyzed, matching against and expanding a 3D Object Database for on-demand contextual information and passive Machine Learning training.

Voice Recognition

  • Custom voice-triggered commands, Action events (confirm, cancel, etc.), Speech-To-Text parsing and Personal Assistant integration.

IoT, Home & Ecosystem Integration

  • Open-Source API & SDK for developers to implement in products and services.

Geospatial Positional Synchronization

  • Landmarks, known paths, obstacles and other context-sensitive annotations.

High-Speed & Hazard Warnings

  • Detect high-speed movement and alert the user to vehicles, cliff overhangs, and environmental obstructions.