Computational Modeling of Hypersonic Turbulent Boundary Layers By Using Machine Learning
Thesis posted on 31.07.2020, 15:31 by Abhinand Ayyaswamy
Hypersonic flight (M > 5) is a key area of research in the aerospace industry, encompassing the design of commercial high-speed aircraft and the development of rockets. Computational analysis is especially important because experiments at these harsh operating conditions are difficult to perform and their results hard to rely on. There is increasing demand from industry for accurate prediction of wall shear and heat transfer at low computational cost. Direct Numerical Simulation (DNS) sets the standard for accuracy, but its practical use is limited by its high computational cost. Reynolds-Averaged Navier-Stokes (RANS) simulations offer industry an affordable alternative, with turnaround times low enough for practical applications. However, existing RANS turbulence closure models and their associated wall functions yield poor predictions of wall fluxes and solutions that are inaccurate in comparison with high-fidelity DNS data. In recent years, machine learning has emerged as a new approach to physical modeling. This thesis explores the potential of employing machine learning (ML) to improve predictions of wall fluxes for hypersonic turbulent boundary layers. Fine-grid RANS simulations are used as training data to construct a machine learning model that improves the solutions and wall-quantity predictions obtained on coarser meshes. This strategy eliminates the need for wall models and extends the range of usable grid sizes without a significant loss of solution accuracy. A random forest coupled with a bootstrap-aggregation (bagging) algorithm is used to model a correction factor for the velocity gradient at the first grid points. The training data set for the ML model, extracted from fine-grid RANS, includes neighbor-cell information to account for the memory effect of turbulence, together with an optimal set of parameters for modeling the gradient correction factor.
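The modeling strategy described above can be illustrated with a minimal sketch. This is not the thesis code: the feature set, the synthetic data, and the target definition are all assumptions made purely for illustration. It shows the general pattern of training a random forest regressor (which performs bootstrap aggregation over decision trees) on features at the first off-wall grid point, including a neighbor-cell value, to predict a multiplicative correction factor for the wall velocity gradient.

```python
# Hedged illustration only -- not the thesis implementation.
# Feature names, ranges, and the synthetic target are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500

# Hypothetical features at the first off-wall cell:
# [wall distance y+ of first cell, local velocity gradient,
#  neighbor-cell velocity gradient (memory effect), Mach number]
X = rng.uniform([10.0, 0.1, 0.1, 5.0],
                [200.0, 2.0, 2.0, 8.0], size=(n, 4))

# Placeholder target: a correction factor near 1 that varies with grid
# coarseness and the local/neighbor gradient mismatch (not a physical model).
y = 1.0 + 0.002 * X[:, 0] * (X[:, 1] - X[:, 2]) \
    + 0.01 * rng.standard_normal(n)

# Random forests aggregate bootstrap-resampled trees (bagging).
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], y[:400])

score = model.score(X[400:], y[400:])  # R^2 on held-out samples
# Apply the predicted correction factor to the coarse-grid gradient.
corrected_gradient = X[400:, 1] * model.predict(X[400:])
```

In a coarse-grid RANS solve, the corrected gradient would then feed the wall-shear and heat-flux evaluation in place of the raw first-cell gradient, which is the role the wall function previously played.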
The successful demonstration of accurate wall-shear predictions on coarse grids using this methodology provides the confidence to build machine learning models that use DNS or other high-fidelity results as training data for reduced-order turbulence model development. This paves the way to integrating machine learning with RANS to produce accurate solutions at significantly lower computational cost for hypersonic boundary-layer problems.