Developing a Cost-Effective Bus-to-Pedestrian Near-Miss Detection Method Using Onboard Video Camera Data
Bus-to-pedestrian near-miss data are an important source of surrogate safety data for pedestrian-related traffic safety studies, yet little existing work automatically extracts such data from onboard cameras. This project fills the gap by proposing a framework that automatically detects bus-to-pedestrian near-misses with an onboard monocular vision system at real-time processing speed. The framework follows a different processing logic than previous vehicle-to-pedestrian conflict studies. First, rather than processing the complex, moving background of the onboard video directly, it locates pedestrians by their visual appearance pattern. Second, once a pedestrian has been detected and tracked, the calculations are carried out in 3D real-world coordinates rather than in the 2D image coordinates used in previous studies, since no real-world quantities can be measured in the 2D image space.

Specifically, the framework has four main stages: pedestrian detection in the onboard video, motion estimation in image coordinates, relative position and speed calculation in real-world coordinates, and near-miss detection. In the first stage, the well-known histogram of oriented gradients (HOG) pedestrian detector locates pedestrians within the camera's field of view. In the second stage, interest points inside a detected pedestrian's bounding box are tracked with the sparse optical flow method, yielding an estimate of the pedestrian's motion in image coordinates. In the third stage, given several known camera parameters and the assumption that the detected pedestrian stands on the same ground plane as the vehicle, the pedestrian's position and speed relative to the vehicle are computed in 3D real-world coordinates. In the fourth stage, several near-miss indicators determine whether a vehicle-to-pedestrian near-miss event has occurred.
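The second stage's motion estimate can be sketched minimally as follows: given interest-point correspondences inside the pedestrian's bounding box (for example, points followed by sparse optical flow between consecutive frames), a robust per-frame displacement is the median of the point displacements. The function name and use of the median are illustrative choices, not taken from the report.

```python
from statistics import median

def median_displacement(points_prev, points_next):
    """Estimate a tracked pedestrian's per-frame image motion (du, dv)
    as the median displacement of interest points inside the bounding
    box. The median, rather than the mean, suppresses the influence of
    occasional mistracked points. (Illustrative sketch, not the
    report's exact estimator.)"""
    dus = [n[0] - p[0] for p, n in zip(points_prev, points_next)]
    dvs = [n[1] - p[1] for p, n in zip(points_prev, points_next)]
    return median(dus), median(dvs)
```

Even with one badly tracked point, e.g. `median_displacement([(0, 0), (1, 1), (2, 2)], [(1, 0), (2, 1), (50, 2)])`, the estimate stays at one pixel of horizontal motion.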
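The third stage's flat-ground back-projection can be sketched with standard pinhole-camera geometry: under the report's assumption that the pedestrian stands on the same plane as the vehicle, a ground-contact pixel maps to a unique 3D position. All parameter names below (focal length `f` in pixels, principal point `(cx, cy)`, camera height) are illustrative assumptions; the report does not specify its exact formulation.

```python
import math

def ground_position(u, v, f, cx, cy, cam_height):
    """Map a ground-contact pixel (u, v) to vehicle-frame coordinates
    (X lateral, Z forward, in metres), assuming a forward-facing camera
    with a level optical axis mounted cam_height metres above a flat
    road. Derived from similar triangles in the pinhole model; a sketch,
    not the report's exact method."""
    if v <= cy:
        raise ValueError("pixel lies on or above the horizon line")
    Z = f * cam_height / (v - cy)  # forward distance
    X = (u - cx) * Z / f           # lateral offset
    return X, Z

def relative_speed(p0, p1, dt):
    """Finite-difference relative speed (m/s) between two ground
    positions observed dt seconds apart."""
    return math.hypot(p1[0] - p0[0], p1[1] - p0[1]) / dt
```

For instance, with a 1000-pixel focal length and a camera 2.4 m above the road, a ground-contact point 240 pixels below the principal point projects to 10 m ahead of the bus.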
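The fourth stage can be illustrated with a common surrogate-safety indicator, time to collision (TTC): a candidate near-miss is flagged when the pedestrian is nearly in the bus's path and the TTC drops below a threshold. The thresholds and function names below are hypothetical placeholders, not the report's calibrated indicator set.

```python
def time_to_collision(forward_dist, closing_speed):
    """TTC in seconds; infinite when the gap is not closing."""
    return forward_dist / closing_speed if closing_speed > 1e-6 else float("inf")

def is_near_miss(forward_dist, lateral_dist, closing_speed,
                 ttc_thresh=1.5, lateral_thresh=1.0):
    """Flag a candidate near-miss when the pedestrian's lateral offset
    is within the bus's swept path and TTC falls below a threshold.
    Threshold values here are illustrative, not from the report."""
    return (abs(lateral_dist) < lateral_thresh
            and time_to_collision(forward_dist, closing_speed) < ttc_thresh)
```

A pedestrian 5 m ahead, 0.3 m off-axis, with a 4 m/s closing speed (TTC = 1.25 s) would be flagged; the same pedestrian 20 m ahead (TTC = 5 s) would not.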
The system was evaluated against events detected by a commercial system, MobilEye Shield+, which uses multiple installed camera sensors. We ran our system on over one month of data with promising overall performance, and more than 30 hours of data were examined in detail for quantitative evaluation. The system processed video at near real-time speed and achieved a detection overlap rate above 85 percent with the events extracted by the MobilEye Shield+ system. With the findings and accomplishments of this project, a much larger body of bus-to-pedestrian conflict data is expected to be collected from onboard videos to support and advance future traffic safety research.
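The detection overlap rate used above can be sketched as the fraction of reference events (here, those from the comparison system) matched in time by at least one detected event. The interval-matching rule and tolerance below are illustrative assumptions; the report does not publish its exact matching procedure.

```python
def events_overlap(a, b, tolerance=2.0):
    """True if two (start, end) event windows, in seconds, intersect
    within a small timing tolerance. Tolerance value is illustrative."""
    return a[0] <= b[1] + tolerance and b[0] <= a[1] + tolerance

def detection_overlap_rate(reference_events, detected_events, tolerance=2.0):
    """Fraction of reference events matched by at least one detection.
    A sketch of one plausible overlap metric, not the report's exact one."""
    if not reference_events:
        return 0.0
    matched = sum(
        any(events_overlap(r, d, tolerance) for d in detected_events)
        for r in reference_events)
    return matched / len(reference_events)
```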