ResearchWorks Archive

    Behavior Modeling and Motion Planning for Autonomous Driving using Artificial Intelligence

    View/Open
    Zhu_washington_0250E_24718.pdf (36.96 MB)
    Author
    Zhu, Meixin
    Abstract
    With an emphasis on longitudinal driving, this dissertation develops data-driven models that improve existing driving behavior models and support several forms of autonomous driving motion planning. The first part of this work focuses on behavior modeling, set against the background of microscopic traffic simulation, traffic flow theory, and motion prediction. Two driving behavior models are proposed. To model the long-term dependency of future actions on historical driving situations, a long-sequence car-following trajectory prediction model is developed using the attention-based Transformer. The model follows a standard encoder-decoder architecture: the encoder takes historical speed and spacing data as inputs and forms a mixed representation of the historical driving context using multi-head self-attention, while the decoder takes the future lead-vehicle speed profile as input and outputs the predicted future following-speed profile in a generative (rather than autoregressive) manner, which avoids compounding errors.

    The second part of this work extends the single forward pass of behavior prediction in the first part to the sequential motion planning of autonomous driving. To meet different demands, two motion planning algorithms are proposed for autonomous longitudinal driving. To learn a driving policy that can perform closed-loop sequential planning and imitate human drivers' behavior, a framework for human-like autonomous car-following planning based on deep reinforcement learning (RL) is proposed. Car-following dynamics are encoded into a simulation environment, and a reward function that measures how far the agent deviates from the empirical data is used to encourage behavioral imitation. Using RL for imitation learning was found to effectively address the distribution-shift issue; this is the first study to use RL to address distribution shift in imitation-oriented longitudinal motion planning. To achieve safe, efficient, and comfortable velocity control, a multi-objective velocity planning method based on RL is also proposed. To directly optimize driving performance, its reward function is developed by referencing human driving data and combining driving features related to safety, efficiency, and comfort. The proposed model demonstrates safe, efficient, and comfortable velocity control and outperforms human drivers.
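    The sketches below are illustrative only and are not taken from the dissertation. The first is a minimal PyTorch rendering of the encoder-decoder Transformer the abstract describes; the class name CarFollowingTransformer, all hyperparameters, and the sequence lengths are assumptions made for illustration, and positional encoding is omitted for brevity.

        import torch
        import torch.nn as nn

        class CarFollowingTransformer(nn.Module):
            """Hypothetical encoder-decoder Transformer for car-following
            speed prediction (a sketch, not the author's implementation)."""
            def __init__(self, d_model=64, nhead=4, num_layers=2):
                super().__init__()
                # Encoder input: (speed, spacing) at each historical step.
                self.enc_proj = nn.Linear(2, d_model)
                # Decoder input: future lead-vehicle speed at each step.
                self.dec_proj = nn.Linear(1, d_model)
                # Multi-head self-attention happens inside nn.Transformer;
                # positional encoding is omitted here for brevity.
                self.transformer = nn.Transformer(
                    d_model=d_model, nhead=nhead,
                    num_encoder_layers=num_layers,
                    num_decoder_layers=num_layers,
                    batch_first=True)
                # Output head: predicted following-vehicle speed per step.
                self.head = nn.Linear(d_model, 1)

            def forward(self, history, future_lead_speed):
                # history: (batch, T_hist, 2); future_lead_speed: (batch, T_fut, 1)
                memory = self.enc_proj(history)
                tgt = self.dec_proj(future_lead_speed)
                out = self.transformer(memory, tgt)
                # The whole future speed profile is produced in one forward
                # pass (generative, not autoregressive), which is what avoids
                # compounding errors.
                return self.head(out)  # (batch, T_fut, 1)

        model = CarFollowingTransformer()
        hist = torch.randn(8, 50, 2)   # 50 past steps of (speed, spacing)
        lead = torch.randn(8, 30, 1)   # 30 future lead-vehicle speeds
        pred = model(hist, lead)       # (8, 30, 1) predicted following speeds

    In the same hedged spirit, the second sketch illustrates the two reward designs the abstract mentions: an imitation reward that penalizes deviation from empirical data, and a multi-objective reward combining safety, efficiency, and comfort terms. The specific features, thresholds, and weights below are invented for the example; the dissertation's actual formulations may differ.

        def imitation_reward(sim_spacing, obs_spacing):
            """Penalize deviation of the simulated spacing from the
            empirically observed spacing (illustrative)."""
            return -abs(sim_spacing - obs_spacing)

        def multi_objective_reward(ttc, speed, speed_limit, jerk,
                                   w_safe=1.0, w_eff=1.0, w_comf=0.5):
            """Illustrative combination of safety (time-to-collision),
            efficiency (speed utilization), and comfort (jerk) terms."""
            safety = -1.0 if ttc < 2.0 else 0.0          # penalize unsafe headway
            efficiency = min(speed / speed_limit, 1.0)   # reward legal speed use
            comfort = -abs(jerk)                         # penalize abrupt changes
            return w_safe * safety + w_eff * efficiency + w_comf * comfort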
    URI
    http://hdl.handle.net/1773/49291
    Collections
    • Civil engineering [377]
