ID 118053
Author
Sonom-Ochir, Ulziibayar (The University of Tokushima)
Ayush, Altangerel (Mongolian University of Science and Technology)
Keywords
Visual distraction
Gaze mapping
Moving object
Gaze region
Content Type
Journal Article
Description
Most serious accidents are caused by the driver’s visual distraction; therefore, detecting a driver’s visual distraction early is very important. The most widely used detection setup is the dashboard camera, because it is cheap and convenient. Some studies, however, have relied on additional equipment such as vehicle-mounted devices, wearable devices, and specialized cameras, and these proposals are expensive. The main goal of our research is therefore to create a low-cost, non-intrusive, and lightweight driver’s visual distraction detection (DVDD) system using only a simple dual dashboard camera. Most current research focuses only on tracking and estimating the driver’s gaze; in our study, we additionally monitor the road environment and evaluate the driver’s visual distraction based on both pieces of information. The proposed system has two main modules: 1) gaze mapping and 2) moving object detection. The gaze mapping module receives video captured by a camera placed in front of the driver and maps the driver’s gaze direction to one of 16 predefined gaze regions. Concurrently, the moving object detection module identifies moving objects in the front view and determines in which of the 16 predefined gaze regions they appear. By combining the outputs of the two modules, the driver’s state of distraction can be estimated: if the two modules output different or non-neighboring gaze regions, the system considers the driver visually distracted and issues a warning. We conducted experiments on our self-built real-driving DriverGazeMapping dataset. For the gaze mapping module, we compared two methods, MobileNet and OpenFace with an SVM classifier, and both outperformed the baseline gaze mapping module. Moreover, for the OpenFace-with-SVM method, we investigated which features extracted by OpenFace affected the performance of the gaze mapping module; the most effective was the combination of the gaze angle and head position_R features. The OpenFace-with-SVM method using the gaze angle and head position_R features achieved 6.25% higher accuracy than the MobileNet-based method. In addition, in our experiments the moving object detection module using the dense Lucas-Kanade method was faster and more reliable than that of the previous study.
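For a concrete picture of the gaze mapping module, the following is a minimal sketch of the OpenFace-with-SVM variant described above: OpenFace-style gaze-angle and head-pose features are fed to an SVM that predicts one of the 16 gaze regions. The feature column names, the CSV file name, and the train/test split are illustrative assumptions rather than details from the paper, and the DriverGazeMapping dataset itself is not included in this record.

# Sketch of the gaze mapping module as a feature-based classifier.
# Column names below follow the usual OpenFace CSV output and are assumed,
# as is the file name; "gaze_region" labels regions 0..15.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FEATURES = ["gaze_angle_x", "gaze_angle_y", "pose_Rx", "pose_Ry", "pose_Rz"]

df = pd.read_csv("drivergazemapping_features.csv")     # hypothetical file
X, y = df[FEATURES].values, df["gaze_region"].values   # labels 0..15

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("gaze-region accuracy:", clf.score(X_test, y_test))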
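The moving object detection module is described as using a dense Lucas-Kanade method; the sketch below illustrates the general idea with OpenCV's Farneback dense optical flow as a readily available stand-in, not the authors' implementation. It thresholds per-pixel motion magnitude and reports the strongest-motion cell of an assumed 4x4 grid covering the 16 gaze regions; the grid layout, threshold, and flow parameters are all assumptions.

# Sketch: dense optical flow on the forward-facing camera, mapped to an
# assumed 4x4 grid of gaze regions (Farneback used as a stand-in method).
import cv2
import numpy as np

def moving_object_region(prev_gray, curr_gray, cols=4, rows=4, mag_thresh=2.0):
    """Return the grid region (0..15) with the strongest motion,
    or None if no cell exceeds the magnitude threshold."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)          # per-pixel motion magnitude
    h, w = mag.shape
    cell_scores = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            cell = mag[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            cell_scores[r, c] = cell.mean()
    r, c = np.unravel_index(np.argmax(cell_scores), cell_scores.shape)
    if cell_scores[r, c] < mag_thresh:
        return None
    return r * cols + c                          # region index 0..15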
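Finally, the distraction decision itself reduces to a neighborhood test on the 16 gaze regions. The sketch below assumes the regions form a 4x4 grid indexed row by row and treats 8-connected cells as neighbors; the abstract does not specify the region layout or the exact neighborhood definition, so both are assumptions.

# Sketch of the distraction decision rule on an assumed 4x4 region grid.
def region_to_cell(region, cols=4):
    """Convert a region index (0..15) to (row, col) on the assumed grid."""
    return divmod(region, cols)

def are_neighbors(region_a, region_b, cols=4):
    """True if two regions are identical or adjacent (8-connected)."""
    ra, ca = region_to_cell(region_a, cols)
    rb, cb = region_to_cell(region_b, cols)
    return abs(ra - rb) <= 1 and abs(ca - cb) <= 1

def is_distracted(gaze_region, object_region):
    """Flag distraction when the gaze region and the region containing the
    moving object are neither the same nor neighboring."""
    return not are_neighbors(gaze_region, object_region)

# Example: gaze in region 0 (top-left) while a moving object appears in
# region 15 (bottom-right) -> the system would issue a warning.
print(is_distracted(0, 15))   # True
print(is_distracted(5, 6))    # False (adjacent regions)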
Journal Title
International Journal of Innovative Computing, Information and Control
ISSN
1349-4198
NCID
AA12218449
Publisher
ICIC International
Volume
18
Issue
5
Start Page
1445
End Page
1461
Published Date
2022-10
Language
eng
TextVersion
Publisher
Departments
Science and Technology