Modeling driver behavior and their interactions with driver assistance systems
As vehicle automation becomes increasingly prevalent and capable, drivers can delegate primary control of the driving task to automated systems. In recent years, significant effort has gone into developing and deploying Advanced Driver Assistance Systems (ADAS). These systems are designed to work with human drivers to increase vehicle safety, control, and performance in both ordinary and emergency situations. Current ADAS are mainly rule-based or manually programmed designs built on summaries and models of pre-collected human performance data. However, a fixed system with limited personalization may not match human drivers' needs, which can give rise to driver dissatisfaction and lead to ineffective system improvement. Human-centered machine learning (HCML) explicitly recognizes the human operator's role and re-constructs machine learning workflows around human working practices. The goal of this dissertation is to build a novel driver behavior modeling framework that understands and predicts interactions with driver assistance systems from a human-centered perspective. This can lead not only to more usable machine learning tools but also to new ways of improving driver assistance systems.

A driving simulator study was conducted to evaluate drivers' interactions with a Forward Collision Warning (FCW) system. Gaussian Mixture Model (GMM) clustering was used to identify driving styles based on drivers' driving performance, secondary task engagement, eye glance behavior, and survey responses. The impact of the FCW system on the different driving styles was then evaluated and discussed from three perspectives: initial reaction, distraction types, and safety benefits. A driver behavior model was also built using inverse reinforcement learning. Lastly, FCW timing predicted from learned driving preferences was compared to the algorithm of a traditional FCW system.
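The GMM-based driving-style clustering might be sketched as follows. The feature set (mean speed, secondary-task engagement ratio, mean off-road glance duration) and all values are illustrative placeholders chosen here for demonstration, not the study's data:

```python
# Hypothetical sketch of driving-style identification with a Gaussian Mixture
# Model. Features and synthetic values are illustrative, not the study's data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Illustrative features per driver: mean speed (mph), secondary-task
# engagement ratio, mean off-road glance duration (s). Two synthetic "styles".
cautious = rng.normal([55.0, 0.10, 0.4], [2.0, 0.05, 0.1], size=(30, 3))
aggressive = rng.normal([70.0, 0.40, 1.2], [3.0, 0.08, 0.2], size=(30, 3))
X = np.vstack([cautious, aggressive])

# Fit a two-component GMM and assign each driver to a style cluster.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)

print(labels.shape)      # (60,)
print(gmm.means_.shape)  # (2, 3) -- one mean feature vector per style
```

Unlike hard clustering methods such as k-means, the GMM also yields soft style memberships (`gmm.predict_proba`), which suits drivers whose behavior blends styles.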
The findings of this study showed that ADAS deployed without human feedback may not always yield positive safety benefits. Learning drivers' preferences through inverse reinforcement learning can better account for future scenarios and better predict driver behavior (e.g., braking actions). This algorithm can be incorporated into real-world in-vehicle warning systems so that the feedback and driving styles of the human operator are appropriately considered.
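In its simplest form, the preference-learning idea can be illustrated with a maximum-entropy-style model: the probability that a driver brakes is softmax in a linear reward over state features, so fitting the reward weights to demonstrated brake/no-brake decisions reduces to logistic regression. This is a minimal sketch under that assumption; the single time-to-collision feature and the synthetic demonstrations are illustrative, not the dissertation's actual model or data:

```python
# Hedged sketch: inferring a driver's braking preference from demonstrations.
# Under a maximum-entropy choice model with a linear reward, fitting the
# reward weights by likelihood gradient is logistic regression. All data here
# is synthetic and illustrative.
import math
import random

random.seed(0)

# State feature: time-to-collision, TTC (s). Synthetic demonstrations: this
# hypothetical driver tends to brake when TTC is low.
demos = []
for _ in range(200):
    ttc = random.uniform(0.5, 6.0)
    p_brake_true = 1.0 / (1.0 + math.exp(2.0 * (ttc - 2.5)))  # hidden preference
    demos.append((ttc, 1 if random.random() < p_brake_true else 0))

# Reward for braking: r = w0 + w1 * ttc. Fit (w0, w1) by gradient ascent on
# the log-likelihood of the demonstrated actions.
w0, w1 = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    g0 = g1 = 0.0
    for ttc, action in demos:
        p = 1.0 / (1.0 + math.exp(-(w0 + w1 * ttc)))
        g0 += action - p
        g1 += (action - p) * ttc
    w0 += lr * g0 / len(demos)
    w1 += lr * g1 / len(demos)

# The learned preference predicts braking probability in new scenarios,
# which could personalize when an FCW alert is issued.
p_urgent = 1.0 / (1.0 + math.exp(-(w0 + w1 * 1.5)))  # short TTC
p_relaxed = 1.0 / (1.0 + math.exp(-(w0 + w1 * 5.0)))  # long TTC
print(round(p_urgent, 2), round(p_relaxed, 2))
```

A personalized FCW could then trigger its warning at the TTC where this learned braking probability crosses a chosen level, rather than at one fixed threshold for all drivers.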