Selecting Robust Strategies in RTS Games via Concurrent Plan Augmentation
The multifaceted complexity of real-time strategy (RTS) games requires AI systems to break down policy computation into smaller subproblems such as strategic planning, tactical planning, and reactive control. To further simplify planning at the strategic and tactical levels, state-of-the-art automatic techniques such as case-based planning (CBP) produce deterministic plans. Because the environment is inherently uncertain, CBP relies on replanning whenever the game situation diverges from the constructed plan. A major weakness of this approach is its lack of robust adaptability: repairing a failed plan is often impossible or infeasible under real-time computational constraints, resulting in a game loss. This thesis presents a technique that selects a robust RTS game strategy by drawing on ideas from contingency planning and exploiting action concurrency in strategy games. Specifically, starting with a strategy and a linear tactical plan that realizes it, our algorithm identifies the plan's failure modes using available game traces and adds concurrent branches that mitigate them. For example, the approach may train an army reserve concurrently with an attack on the enemy, as a defense against a possible counterattack. After augmenting each strategy from an available library (e.g., one learned from human demonstration), our approach picks the strategy with the most robust augmented tactical plan. Extensive evaluation on popular RTS games (StarCraft and Wargus) whose engines are shared by other games indicates that concurrent augmentation significantly improves the win rate in situations where baseline strategy selection consistently leads to a loss.
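The augmentation-and-selection idea described above can be sketched in a few lines. This is a minimal illustration, not the thesis's actual algorithm: all names (`augment_plan`, `robustness`, `select_strategy`), the trace representation, and the mitigation table are hypothetical stand-ins for the trace-driven failure-mode analysis the abstract describes.

```python
# Hypothetical sketch of concurrent plan augmentation. A linear plan is a list
# of actions; each game trace records the step at which a plan failed and the
# observed failure mode; a mitigation table maps failure modes to actions that
# can run concurrently with the failing step (e.g., "train_reserve" against a
# counterattack). All identifiers are illustrative assumptions.

def augment_plan(plan, failure_traces, mitigations):
    """Add a concurrent branch at every step where traces reveal a failure mode."""
    augmented = []
    for step, action in enumerate(plan):
        branches = [action]  # the original action always runs
        for trace in failure_traces:
            if trace["failed_at"] == step and trace["mode"] in mitigations:
                branches.append(mitigations[trace["mode"]])  # concurrent branch
        augmented.append(branches)
    return augmented

def robustness(augmented, failure_traces, mitigations):
    """Fraction of observed failure modes covered by a concurrent branch."""
    if not failure_traces:
        return 1.0
    covered = sum(
        1 for t in failure_traces
        if mitigations.get(t["mode"]) in augmented[t["failed_at"]]
    )
    return covered / len(failure_traces)

def select_strategy(library, traces_by_strategy, mitigations):
    """Augment each strategy's tactical plan and pick the most robust one."""
    return max(
        library,
        key=lambda name: robustness(
            augment_plan(library[name], traces_by_strategy[name], mitigations),
            traces_by_strategy[name],
            mitigations,
        ),
    )
```

Under this toy model, a rush plan whose traces show a counterattack failure gains a concurrent "train reserve" branch, while a strategy whose failure modes have no known mitigation scores lower and is passed over at selection time.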