Vicariously Self-Adaptive Re: Sight for the Visually Impaired Using Spatial Data Mining

Authors

  • Arun Kumar P Student, Computer Science & Engineering, Panimalar Institute of Technology, Chennai, India
  • Matthew Immanuel Samson Student, Computer Science & Engineering, Panimalar Institute of Technology, Chennai, India
  • Prakash K Student, Computer Science & Engineering, Panimalar Institute of Technology, Chennai, India
  • Kalaichelvi T Professor, Computer Science & Engineering, Panimalar Institute of Technology, Chennai, India

Keywords

Spatial Data Mining, Neural Networks, Image Processing, Raspberry Pi, Internet of Things, Ultrasonic Sensors, GPS

Abstract

The visually challenged mostly depend on Braille for reading textual documents and on canes for travelling from place to place. They require considerable guidance in their daily activities, which prevents them from living independently. By providing virtual visibility of the environment, the proposed system allows them to “see” vicariously and lead an independent lifestyle. The system applies Artificial Intelligence (AI) through learning and computer vision: Clustering Large Applications based on RANdomized Search (CLARANS) for spatial data mining using the R language, and pattern recognition of text, obstacles and specific sign boards using Neuro Optical Character Recognition (NeuroOCR), with the output delivered as audio descriptions. The smart kit contains a mobile device for speech-to-text conversion, a Raspberry Pi that serves as the processing system along with GPS, a High Definition (HD) camera, an earphone, and a bus module device with GPS. The design involves an object and obstacle detection algorithm based on Ultrasonic (US) sensors and recognition of text, boards and signs using the NeuroOCR algorithm. The main component is CLARANS, which provides a safe, guided and intelligent navigation system by learning frequent routes and their obstacles and by retrieving the required bus transit routes and their respective distances from the cloud. Together, these components create a virtualized vision for the visually challenged.
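
The paper itself does not include code. As a rough illustration of the ultrasonic obstacle-detection step described above, the sketch below times an echo pulse on a Raspberry Pi and raises an audio warning when an obstacle is within a threshold. The HC-SR04-style sensor, pin numbers, threshold, and the announce() helper are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only: HC-SR04-style ultrasonic ranging on a Raspberry Pi.
# Pin numbers, threshold and the announce() hook are assumptions, not from the paper.
import time
import RPi.GPIO as GPIO

TRIG_PIN = 23          # assumed wiring: trigger pin (BCM numbering)
ECHO_PIN = 24          # assumed wiring: echo pin (BCM numbering)
OBSTACLE_CM = 100      # assumed threshold: warn when an obstacle is closer than 1 m

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG_PIN, GPIO.OUT)
GPIO.setup(ECHO_PIN, GPIO.IN)

def measure_distance_cm():
    """Send a 10 us trigger pulse and time the echo to estimate distance."""
    GPIO.output(TRIG_PIN, True)
    time.sleep(0.00001)
    GPIO.output(TRIG_PIN, False)

    start = time.time()
    while GPIO.input(ECHO_PIN) == 0:
        start = time.time()
    stop = time.time()
    while GPIO.input(ECHO_PIN) == 1:
        stop = time.time()

    elapsed = stop - start
    return (elapsed * 34300) / 2   # speed of sound ~343 m/s, halved for the round trip

def announce(message):
    """Placeholder for the audio-description step (e.g. a text-to-speech call)."""
    print(message)

try:
    while True:
        distance = measure_distance_cm()
        if distance < OBSTACLE_CM:
            announce("Obstacle ahead at about %d centimetres" % int(distance))
        time.sleep(0.5)
finally:
    GPIO.cleanup()
```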
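
The abstract names CLARANS as the core of the route-learning component. As a minimal, self-contained sketch of the CLARANS idea (a randomized search over k-medoid solutions), and not the authors' R implementation, the code below clusters 2-D GPS points standing in for frequently visited locations. The sample coordinates, the Euclidean cost, and the numlocal/maxneighbor values are assumptions for illustration.

```python
# Minimal CLARANS sketch: randomized search over k-medoid solutions.
# Sample points and parameters are illustrative assumptions, not data from the paper.
import math
import random

def cost(points, medoids):
    """Total distance of every point to its nearest medoid."""
    return sum(min(math.dist(p, m) for m in medoids) for p in points)

def clarans(points, k, numlocal=5, maxneighbor=20):
    """Return the best set of k medoids found by randomized neighbour search."""
    best_medoids, best_cost = None, float("inf")
    for _ in range(numlocal):                      # numlocal independent restarts
        current = random.sample(points, k)         # random initial medoids
        current_cost = cost(points, current)
        tries = 0
        while tries < maxneighbor:
            # Neighbour solution: swap one medoid with a random non-medoid point.
            candidate = current[:]
            candidate[random.randrange(k)] = random.choice(
                [p for p in points if p not in candidate])
            candidate_cost = cost(points, candidate)
            if candidate_cost < current_cost:      # move to improving neighbours
                current, current_cost = candidate, candidate_cost
                tries = 0                          # restart the neighbour count
            else:
                tries += 1
        if current_cost < best_cost:
            best_medoids, best_cost = current, current_cost
    return best_medoids

# Illustrative (latitude, longitude) points standing in for frequently visited spots.
points = [(13.0101, 80.2101), (13.0105, 80.2098), (13.0520, 80.2503),
          (13.0525, 80.2510), (12.9900, 80.1800), (12.9895, 80.1795)]
print(clarans(points, k=3))
```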

Published

2018-03-25

How to Cite

Arun Kumar P, Matthew Immanuel Samson, Prakash K, & Kalaichelvi T. (2018). Vicariously Self-Adaptive Re: Sight for the Visually Impaired Using Spatial Data Mining. International Journal of Advance Research in Engineering, Science & Technology, 5(3), 480–487. Retrieved from https://ijarest.org/index.php/ijarest/article/view/1237