Behavior Modeling and Motion Planning for Autonomous Driving using Artificial Intelligence

dc.contributor.advisor: Wang, Yinhai
dc.contributor.author: Zhu, Meixin
dc.date.accessioned: 2022-09-23T20:43:59Z
dc.date.issued: 2022-09-23
dc.date.submitted: 2022
dc.description: Thesis (Ph.D.)--University of Washington, 2022
dc.description.abstract: With an emphasis on longitudinal driving, this dissertation develops data-driven models that improve on existing driving behavior models and support various kinds of autonomous driving planning. The first part of this work focuses on behavior modeling, set against the background of microscopic traffic simulation, traffic flow theory, and motion prediction. Two driving behavior models are proposed. To capture the long-term dependency of future actions on historical driving situations, a long-sequence car-following trajectory prediction model is developed using the attention-based Transformer. The model follows a standard encoder-decoder architecture: the encoder takes historical speed and spacing data as inputs and forms a mixed representation of the historical driving context using multi-head self-attention, while the decoder takes the future lead-vehicle speed profile as input and outputs the predicted future following-speed profile in a generative way (rather than an auto-regressive way, thereby avoiding compounding errors). The second part of this work extends the single forward pass of behavior prediction in the first part to sequential motion planning for autonomous driving. Two motion planning algorithms, addressing different demands, are proposed for autonomous longitudinal driving. To learn a driving policy that can perform closed-loop sequential planning and imitate human drivers' behavior, a framework for human-like autonomous car-following planning based on deep reinforcement learning (RL) is proposed. Car-following dynamics are encoded into a simulation environment, and a reward function that signals how much the agent deviates from the empirical data is used to encourage behavioral imitation. It was found that using RL for imitation learning effectively addresses the distribution shift issue.
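The encoder-decoder architecture described above can be sketched roughly as follows. This is a minimal illustration assuming PyTorch; the layer sizes, sequence lengths, and class name are hypothetical placeholders, not the dissertation's actual configuration.

```python
import torch
import torch.nn as nn

class CarFollowingTransformer(nn.Module):
    """Sketch of the encoder-decoder scheme: the encoder summarizes
    historical (speed, spacing) pairs via multi-head self-attention;
    the decoder conditions on the future lead-vehicle speed profile and
    emits the whole future following-speed profile in one generative
    pass (no auto-regressive feedback, so no compounding errors)."""

    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.enc_proj = nn.Linear(2, d_model)  # inputs: [speed, spacing]
        self.dec_proj = nn.Linear(1, d_model)  # input: lead-vehicle speed
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.head = nn.Linear(d_model, 1)      # output: following speed

    def forward(self, history, lead_future):
        # history: (B, T_hist, 2); lead_future: (B, T_fut, 1)
        memory = self.transformer.encoder(self.enc_proj(history))
        out = self.transformer.decoder(self.dec_proj(lead_future), memory)
        return self.head(out)                  # (B, T_fut, 1)

model = CarFollowingTransformer()
hist = torch.randn(8, 50, 2)    # 50 historical steps of (speed, spacing)
lead = torch.randn(8, 30, 1)    # 30 future steps of lead-vehicle speed
pred = model(hist, lead)        # predicted 30-step following-speed profile
```

Because the decoder attends to the full future lead-speed profile at once and no predicted output is fed back in, the whole horizon is produced in a single pass, which is the generative (non-auto-regressive) property the abstract highlights.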
This is the first study to use RL to address the distribution shift issue for imitation-oriented longitudinal motion planning. To achieve safe, efficient, and comfortable velocity planning for autonomous driving, a multi-objective velocity planning method based on RL is proposed. To directly optimize driving performance, a reward function is developed by referencing human driving data and combining driving features related to safety, efficiency, and comfort. It was found that the proposed model achieves safe, efficient, and comfortable velocity control and outperforms human drivers.
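A multi-objective reward of the kind described above can be sketched as a weighted combination of safety, efficiency, and comfort terms. The function below is purely illustrative: the feature forms (a time-to-collision proxy, deviation from a desired speed, squared acceleration and jerk), thresholds, and weights are assumptions for this sketch, not the calibrated values derived from human driving data in the dissertation.

```python
def velocity_planning_reward(spacing, speed, accel, jerk,
                             desired_speed=15.0,
                             w_safe=1.0, w_eff=1.0, w_comf=0.5):
    """Hypothetical multi-objective reward for longitudinal velocity
    planning, combining safety, efficiency, and comfort terms.

    spacing: gap to lead vehicle (m); speed: ego speed (m/s);
    accel: acceleration (m/s^2); jerk: rate of change of accel (m/s^3).
    """
    # Safety: penalize a dangerously small time-to-collision proxy.
    ttc = spacing / max(speed, 1e-6)
    r_safety = -4.0 if ttc < 1.5 else 0.0
    # Efficiency: penalize deviation from the desired cruising speed.
    r_efficiency = -abs(speed - desired_speed) / desired_speed
    # Comfort: penalize harsh acceleration and jerk.
    r_comfort = -(accel ** 2 + jerk ** 2) / 10.0
    return w_safe * r_safety + w_eff * r_efficiency + w_comf * r_comfort
```

An RL agent trained against such a reward trades the three objectives off through the weights; e.g., a comfortable cruise at the desired speed with ample spacing scores near zero, while tailgating or hard braking is penalized.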
dc.embargo.lift: 2027-08-28T20:43:59Z
dc.embargo.terms: Restrict to UW for 5 years -- then make Open Access
dc.format.mimetype: application/pdf
dc.identifier.other: Zhu_washington_0250E_24718.pdf
dc.identifier.uri: http://hdl.handle.net/1773/49291
dc.language.iso: en_US
dc.rights: none
dc.subject: Artificial Intelligence
dc.subject: Autonomous Driving
dc.subject: Car Following
dc.subject: Driving Behavior
dc.subject: Motion Planning
dc.subject: Traffic Simulation
dc.subject: Transportation
dc.subject: Automotive engineering
dc.subject: Artificial intelligence
dc.subject.other: Civil engineering
dc.title: Behavior Modeling and Motion Planning for Autonomous Driving using Artificial Intelligence
dc.type: Thesis
