ID 118053
Authors
Sonom-Ochir, Ulziibayar The University of Tokushima
Ayush, Altangerel Mongolian University of Science and Technology
Keywords
Visual distraction
Gaze mapping
Moving object
Gaze region
Material type
Journal article
Abstract
Most serious accidents are caused by driver visual distraction, so early detection of a driver's visual distraction is very important. The most widely used detection system is the dashboard camera because it is cheap and convenient. Some studies have instead proposed methods that rely on additional equipment such as vehicle-mounted devices, wearable devices, and special-purpose cameras, but these proposals are expensive. Therefore, the main goal of our research is to create a low-cost, non-intrusive, and lightweight driver's visual distraction detection (DVDD) system using only a simple dual dashboard camera. Most existing research has focused only on tracking and estimating the driver's gaze. In our study, we additionally monitor the road environment and then evaluate the driver's visual distraction based on the two sources of information. The proposed system has two main modules: 1) gaze mapping and 2) moving object detection. The gaze mapping module receives video captured by a camera placed in front of the driver and predicts the driver's gaze direction as one of 16 predefined gaze regions. Concurrently, the moving object detection module identifies moving objects in the front view and determines in which of the 16 predefined gaze regions they appear. By combining and evaluating the outputs of the two modules, the driver's state of distraction can be estimated. If the two modules output different or non-neighboring gaze regions, the system considers the driver visually distracted and issues a warning. We conducted experiments on our self-built real-driving DriverGazeMapping dataset. For the gaze mapping module, we compared two methods, MobileNet and OpenFace with an SVM classifier; both outperformed the baseline gaze mapping module.
Moreover, for the OpenFace-with-SVM method, we investigated which features extracted by OpenFace affected the performance of the gaze mapping module. The most effective feature was the combination of the gaze angle and head position_R features. The OpenFace-with-SVM method using the gaze angle and head position_R features achieved 6.25% higher accuracy than the MobileNet method. In addition, in our experiments the moving object detection module using the dense Lucas-Kanade method was faster and more reliable than that of the previous study.
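The distraction decision described above can be sketched as follows. The 4x4 grid layout, the row-major region numbering 0..15, and the 8-connected definition of "neighboring" are illustrative assumptions; the abstract does not specify how the 16 gaze regions are arranged.

```python
# Hypothetical sketch of the abstract's decision rule: compare the gaze
# mapping module's region with the moving object's region.
# Assumption (not stated in the paper): the 16 gaze regions form a 4x4
# grid numbered row-major 0..15, and "neighboring" means 8-connected.

GRID_COLS = 4

def region_to_cell(region: int) -> tuple:
    """Map a region index 0..15 to (row, col) on the assumed 4x4 grid."""
    return divmod(region, GRID_COLS)

def is_distracted(gaze_region: int, object_region: int) -> bool:
    """Flag distraction when the driver's gaze region and the region where
    the moving object appears are neither equal nor adjacent."""
    (gr, gc) = region_to_cell(gaze_region)
    (orow, ocol) = region_to_cell(object_region)
    chebyshev = max(abs(gr - orow), abs(gc - ocol))
    # distance 0 = same region, 1 = 8-connected neighbor -> attentive
    return chebyshev > 1

# Gaze at top-left region 0 while an object appears in bottom-right region 15
print(is_distracted(0, 15))  # True: far apart -> issue a warning
print(is_distracted(5, 6))   # False: horizontally adjacent regions
```

A real implementation would feed this rule per frame from the two modules and likely debounce the warning over several frames; that logic is omitted here.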
Journal title
International Journal of Innovative Computing, Information and Control
ISSN
1349-4198
NCID
AA12218449
Publisher
ICIC International
Volume
18
Issue
5
Start page
1445
End page
1461
Publication date
2022-10
Language
eng
Version flag
Publisher version
Department
Science and Technology