Measuring and Predicting Driver Situation Awareness
| Field | Value |
|---|---|
| dc.contributor.advisor | Boyle, Linda Ng |
| dc.contributor.author | Xing, Yilun |
| dc.date.accessioned | 2024-09-09T23:10:55Z |
| dc.date.available | 2024-09-09T23:10:55Z |
| dc.date.issued | 2024-09-09 |
| dc.date.submitted | 2024 |
| dc.description | Thesis (Ph.D.)--University of Washington, 2024 |
| dc.description.abstract | Situation awareness (SA) encompasses the perception, comprehension, and projection of elements within a given situation, corresponding to the three levels of SA. It plays a critical role in driving safety: drivers must continuously maintain awareness of dynamic road conditions, and lapses in SA contribute substantially to traffic crashes attributed to human error. Advanced driver assistance systems (ADAS) are designed to enhance driving performance; however, overly complex or ambiguous information may overwhelm drivers' limited SA capacity. To address this challenge, ADAS should account for the operator's perception and understanding of the environment so that assistance can be tailored effectively. This dissertation proposes methods to measure and predict driver SA, focusing on objects of interest (OOIs) within the scene. First, an experimental approach using real-world driving videos and a web-based touch recorder was introduced to capture driver SA, and it demonstrated efficacy in capturing the various levels of driver SA. It was also observed that drivers often rely on memory of element trajectories to understand or predict the locations of OOIs. Subsequent analysis identified the impacts of environmental features, object characteristics, and driver demographics on driver SA. Incorporating these feature groups into predictive models proved worthwhile: environmental factors (e.g., the number of objects in the scene, scene visual complexity, and roadway type), object features (e.g., object size and type), and driver demographics (e.g., gender) showed significant effects on driver SA. Next, gaze-point-based and visual-sensory-dependent features were derived from the eye-tracking data. Predictive models incorporating different feature groups (environmental, object, and driver features, as well as those extracted from eye-tracking data) were fitted and compared. Two phases of SA were distinguished: object localization and object recognition. Binary classification models were developed and rigorously evaluated for each phase. Recognizing the impracticality of drivers wearing eye trackers during daily driving, an alternative to eye-tracking data was proposed that uses computer vision to estimate visual attention from forward-view driving videos. Gaze-related features extracted from these videos performed comparably to those from eye-tracking data, suggesting their viability for predicting driver SA. These findings could inform the design of ADAS, enabling low-cost selective assistance to drivers and ultimately enhancing driver safety. |
| dc.embargo.terms | Open Access |
| dc.format.mimetype | application/pdf |
| dc.identifier.other | Xing_washington_0250E_26816.pdf |
| dc.identifier.uri | https://hdl.handle.net/1773/52066 |
| dc.language.iso | en_US |
| dc.rights | CC BY-NC |
| dc.subject | Driver Situation Awareness |
| dc.subject | Environmental Complexity |
| dc.subject | Eye Tracking Data |
| dc.subject | Gaze Point Estimation |
| dc.subject | Saliency Estimation |
| dc.subject | Industrial engineering |
| dc.subject.other | Industrial engineering |
| dc.title | Measuring and Predicting Driver Situation Awareness |
| dc.type | Thesis |
Files
- Name: Xing_washington_0250E_26816.pdf
- Size: 9.76 MB
- Format: Adobe Portable Document Format
