Dimensionality hyper-reduction and machine learning for dynamical systems with varying parameters
This work demonstrates methods for hyper-reduction and efficient computation of solutions of dynamical systems using optimization and machine learning techniques. We consider nonlinear partial differential equations that, under discretization, yield systems of nonlinear ordinary differential equations. Discretization can be performed using finite-element techniques, spectral methods, finite-difference methods, or, in the case of conservation laws, finite-volume methods. Such discretizations, however, produce very large-scale problems with correspondingly long simulation times, making these models impractical for real-time scenarios. Reduced-order models simplify this type of computation and often provide significant speedups for these problems. One such method is described in this work. We demonstrate the synthesis of sparse sampling and machine learning to characterize and model complex, nonlinear dynamical systems over a range of bifurcation parameters. It is shown that nearly optimal sensor placement can be achieved by applying a genetic algorithm to a generalization of the discrete empirical interpolation method with varying parameters. However, such methods do not guarantee preservation of the problem structure, which can lead to instability and inaccuracy. In this work, this issue is addressed for systems solved with finite-volume methods by formulating a constrained optimization problem and solving it nearly as efficiently as the unconstrained one. The introduced constraints ensure that the most important properties of the model are preserved even though the dimensionality of the system is greatly reduced. Exemplary results are computed for the Euler equations for inviscid compressible flow and for the complex Ginzburg-Landau equation.
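For orientation, the classical (single-parameter) discrete empirical interpolation method selects interpolation points greedily from a reduced basis; the genetic-algorithm generalization described above starts from this idea. The following Python sketch is a minimal, hedged illustration of that baseline greedy selection only (the function name and random-snapshot setup in the usage note are illustrative assumptions, not the thesis's GA-optimized, parameter-varying formulation):

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection on a POD basis U of shape (n, m).

    Returns m interpolation (sensor) indices. This is the classical
    greedy algorithm: each new point is placed where the interpolation
    residual of the next basis vector is largest in magnitude.
    """
    n, m = U.shape
    # First point: location of the largest entry of the first basis vector.
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # Interpolate the next basis vector at the points chosen so far ...
        c = np.linalg.solve(U[np.ix_(idx, range(l))], U[idx, l])
        # ... and place the next point where the residual is largest.
        r = U[:, l] - U[:, :l] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)
```

A typical use would compute a POD basis from simulation snapshots (e.g. via an SVD) and pass its leading columns to this routine; by construction the residual vanishes at previously selected points, so the returned indices are distinct and the sampled basis submatrix is invertible.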
- Applied mathematics