THIS CONTENT IS BROUGHT TO YOU BY Oslo Metropolitan University


How artificial intelligence can help the visually impaired

Traditional navigation tools for the visually impaired are often impractical and require extensive training. Rapid advances in artificial intelligence and the increasing computing power of smartphones offer new opportunities.

People with visual impairments are in great need of better digital aids to find their way around, especially in unfamiliar surroundings.

Now, researcher Bineeth Kuriakose has found a practical and affordable solution using artificial intelligence and sensors in smartphones.

Kuriakose first took a closer look at the navigation assistant systems that have already been developed for people with visual impairments. He found that they are often not particularly usable.

Unmanageable solutions

Many of these navigation assistant systems rely on hardware boards like the small credit card-sized computer Raspberry Pi. Others demand integration with laptops – both of which entail unwieldy setups. 

They often require prolonged training periods and lack the portability needed for outdoor usage.

Despite the proliferation of smartphone-based solutions, they frequently require an active internet connection, rendering them unsuitable in areas with limited connectivity, such as garages and basements.


Most contemporary systems also rely on cloud resources, introducing processing delays that hinder real-time navigation – a critical factor when every second counts. 

Many of these systems tend to prioritise technology over addressing the practical challenges faced by users during navigation.

Better and cheaper

The rapid development of artificial intelligence and the growing computing power of smartphones, on the other hand, make it possible to develop far more usable solutions.

“This technological synergy can be used effectively to develop a navigation assistant for people with visual impairments,” Kuriakose says. 

The solution can then also be made available to many more people.

Statistics from the World Health Organization (WHO) underscore the magnitude of this issue, with 200 million people facing impaired vision and lacking access to technological aids. 

WHO is committed to improving access to high-quality, affordable assistive technology for everyone, everywhere. 

Therefore, an affordable and user-friendly navigation system will be of great significance.

Kuriakose’s primary goal has been to empower the visually impaired to navigate independently, leveraging the latest technological innovations, especially in artificial intelligence and smartphone sensors, to fulfil their unique needs.

A smartphone-based navigation assistant

Deep learning is a type of artificial intelligence that mimics the way humans acquire knowledge.

Collaborating closely with visually impaired individuals, Kuriakose designed and developed DeepNAVI, a deep learning-based smartphone navigation assistant that helps people with visual impairments find their way around, increasing their independence.

The navigation assistant is trained to recognise the common obstacles and pathways that blind and partially sighted people may encounter.

DeepNAVI employs deep learning models to detect obstacles in real-time, harnessing the artificial intelligence capabilities of smartphones.

It uses sensors in the smartphone to provide detailed information about obstacles. This includes distance, position, and movement.
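The kind of per-obstacle information described here – distance, position, and movement – can be illustrated with a small sketch. This is not DeepNAVI's actual code; the function names and thresholds are hypothetical, and in the real system a deep learning detector running on the phone supplies the bounding boxes and distance estimates.

```python
# Illustrative sketch (hypothetical names, not DeepNAVI's implementation):
# derive position and movement cues for a detected obstacle from its
# bounding box and two successive distance estimates.

def horizontal_position(box_left: float, box_right: float, frame_width: float) -> str:
    """Classify an obstacle as 'left', 'centre', or 'right' from its bounding box."""
    centre = (box_left + box_right) / 2.0
    if centre < frame_width / 3:
        return "left"
    if centre > 2 * frame_width / 3:
        return "right"
    return "centre"

def movement(prev_distance_m: float, curr_distance_m: float,
             threshold_m: float = 0.2) -> str:
    """Label an obstacle as approaching, receding, or static between two frames."""
    delta = curr_distance_m - prev_distance_m
    if delta < -threshold_m:
        return "approaching"
    if delta > threshold_m:
        return "receding"
    return "static"

# Example: a box in the right third of a 640-pixel-wide frame, moving closer.
print(horizontal_position(500, 620, 640))   # right
print(movement(3.0, 2.5))                   # approaching
```

A real implementation would smooth these signals over several frames before reporting them, so that detection noise does not produce contradictory announcements.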

The system also learns to recognise different locations, such as a kitchen, an office, a garage, or a street, helping the user identify the layout of the environment.

Kuriakose explains: 

“The system we have developed can tell you what kind of obstacles are in front of you, and how far away they are. Or whether something is moving or not, and whether it is to the right or left of you.”
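The spoken feedback Kuriakose describes could, in a much simplified form, be assembled like this. The function and its fields are illustrative assumptions, not taken from DeepNAVI itself:

```python
# Illustrative sketch (hypothetical, not DeepNAVI's code): turn detected
# obstacle attributes into a short sentence for text-to-speech output.

def feedback_sentence(label: str, distance_m: float, position: str, moving: bool) -> str:
    """Compose a spoken announcement from one obstacle's attributes."""
    motion = "moving" if moving else "stationary"
    return f"{label} about {distance_m:.0f} metres ahead, to your {position}, {motion}."

print(feedback_sentence("chair", 2.4, "left", False))
# chair about 2 metres ahead, to your left, stationary.
```

On Android, a string like this would typically be handed to the platform's text-to-speech engine and played through the user's wireless earphones.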

DeepNAVI can easily be installed as an app on an Android phone. With the smartphone securely stored in a waistcoat pocket, it captures real-time video of the surroundings and relays feedback to the user through wireless earphones. 

User testing with visually impaired individuals yielded positive feedback regarding usability: the assistant was easy to carry and effective in use.

Through his research, Kuriakose found that a smartphone alone can serve as an assistant to help people with visual impairments navigate – that is, without any additional sensors, extra devices, or external data networks.

The user's needs must be in focus

The research also shows that if the user’s needs and preferences are prioritised, a navigation assistant can become more accessible and user-friendly, so that more people can use it with ease and confidence in their daily lives.

"One advantage of the solution is that it does not need to be connected to the internet," the researcher says. 

He explains that during the evaluation, they asked users how it felt to use DeepNAVI, whether they could trust a smartphone-based navigation assistant, and which option they preferred: DeepNAVI alone, a white cane alone, or a combination of DeepNAVI and a white cane.

The majority replied that they preferred the combination of DeepNAVI plus white cane.

They added that right now, they would not trust using just a smartphone assistant alone for navigation in public settings. 

However, they did acknowledge that their perception might change in the future and their trust in using DeepNAVI alone could grow with practice and familiarity, given its ability to provide more comprehensive information about the environment.

More control for users can improve accessibility

Kuriakose’s system is currently a research prototype. However, the knowledge gained through this research can be useful for further research in this area. 

His research received international recognition in the form of two best paper awards. 

“We still need to make some refinements to the system, and we will need interface designers to help create a fully functional and user-friendly smartphone solution,” he says.

Kuriakose explains that they expect the system can be further improved with the exciting advancements in artificial intelligence and the development of more advanced deep learning models. 

“Our future plans also involve providing users with greater control, allowing them to personalise the system based on their unique navigation preferences and styles. This will offer users more flexibility when using DeepNAVI,” Kuriakose says.

As an added benefit, the wide prevalence of smartphones, regardless of economic circumstances, makes this technology accessible to a significant proportion of the blind and partially sighted population. 

By eliminating the need for an additional navigation aid, individuals can rely on their smartphones as their primary navigation assistant, offering unprecedented convenience.


Kuriakose et al. 'DeepNAVI: A deep learning based smartphone navigation assistant for people with visual impairments', Expert Systems with Applications, vol. 212, 2023. DOI: 10.1016/j.eswa.2022.118720

Kuriakose et al. 'Exploring the User Experience of an AI-based Smartphone Navigation Assistant for People with Visual Impairments', CHItaly '23: Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter, 2023. DOI: 10.1145/3605390.3605421

Kuriakose et al. 'SceneRecog: Deep Learning Scene Recognition Model for Assisting Blind and Visually Impaired Navigate using Smartphones', 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 17-20 October 2021. (Abstract)

Kuriakose et al. 'Turn Left Turn Right - Delving type and modality of instructions in navigation assistant systems for people with visual impairments', International Journal of Human-Computer Studies, vol. 179, 2023. DOI: 10.1016/j.ijhcs.2023.103098
