My research interests include the development of robust solution techniques for computational fluid dynamics, error estimation, computational geometry management, parallel computation, large-scale model reduction, and design under uncertainty. Some current and recent research projects are:
Machine learning anisotropy detection
Hybridized and embedded discontinuous Galerkin methods
Output-based error estimation and mesh adaptation
Adaptive RANS calculations with the discontinuous Galerkin method
Unsteady output-based adaptation
Entropy-adjoint approach to mesh refinement
Contaminant source inversion
Cut-cell meshing
Nonlinear model reduction for inverse problems
Numerical simulations require quality computational meshes, but the construction of an optimal mesh, one that maximizes accuracy for a given cost, is not trivial. In this work, we tackle and simplify one aspect of adaptive mesh generation: the determination of anisotropy, which refers to direction-dependent sizing of the elements in the mesh. Anisotropic meshes are important for efficiently resolving certain flow features, such as boundary layers, wakes, and shocks, that appear in computational fluid dynamics.
To predict the optimal mesh anisotropy, we use machine learning techniques, which have the potential to accurately and efficiently model responses of highly nonlinear problems over a wide range of parameters. We train a neural network on a large amount of data from a rigorous, but expensive, mesh optimization procedure (MOESS), and then attempt to reproduce this mapping from simpler solution features.
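As a minimal illustration of the regression task (not the MOESS pipeline itself; all data, features, and the target mapping below are synthetic stand-ins), a tiny neural network can be trained to map local solution features to a target anisotropy. Here the features are directional second derivatives of a scalar solution, and the synthetic target is the log of the element aspect ratio that equidistributes interpolation error, which scales like half the log of the derivative ratio.

```python
import numpy as np

# Hypothetical sketch: train a small MLP to predict log aspect ratio from
# log directional second derivatives (hxx, hyy). Target is synthetic.
rng = np.random.default_rng(0)

hxx = rng.uniform(0.1, 10.0, 500)
hyy = rng.uniform(0.1, 10.0, 500)
Z = np.log(np.column_stack([hxx, hyy]))      # network inputs
y = (0.5 * (Z[:, 0] - Z[:, 1]))[:, None]     # target log aspect ratio

# two-layer network, full-batch gradient descent on mean-squared error
W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr, losses = 0.05, []
for _ in range(8000):
    H = np.tanh(Z @ W1 + b1)                 # hidden layer
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    dpred = 2.0 * err / len(y)               # backpropagation
    dW2, db2 = H.T @ dpred, dpred.sum(0)
    dH = dpred @ W2.T * (1.0 - H ** 2)
    dW1, db1 = Z.T @ dH, dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.4f}")
```

The point of the sketch is only the shape of the learning problem: cheap local features in, anisotropy targets out, with the expensive optimization procedure supplying the training labels.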
Ongoing research directions in this area include:
Relevant Publications
Krzysztof J. Fidkowski and Guodong Chen. A machine-learning anisotropy detection algorithm for output-adapted meshes. AIAA Paper 2020-0341, 2020. [ bib | .pdf ]
Although discontinuous Galerkin (DG) methods have enabled high-order accurate computational fluid dynamics simulations, their memory footprint and computational costs remain large. Two approaches for reducing the expense of DG are (1) modifying the discretization and (2) optimizing the computational mesh. We study both approaches and compare their relative benefits.
Hybridization of DG is an approach that modifies the high-order discretization to reduce its expense for a given mesh. The high cost of DG arises from the large number of degrees of freedom required to approximate an elementwise discontinuous high-order polynomial solution. These degrees of freedom are globally coupled, increasing the memory requirements for solvers. Hybridized discontinuous Galerkin (HDG) methods reduce the number of globally coupled degrees of freedom by decoupling element solution approximations and stitching them together through weak flux continuity enforcement. HDG methods introduce face unknowns that become the only globally coupled degrees of freedom in the system. Since the number of face unknowns is generally much lower than the number of element unknowns, HDG methods can be computationally cheaper and use less memory compared to DG. The embedded discontinuous Galerkin (EDG) method is a particular type of HDG method in which the approximation space of face unknowns is continuous, further reducing the number of globally coupled degrees of freedom.
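A back-of-the-envelope tally makes the element-versus-face trade concrete. The sketch below counts globally coupled unknowns per state variable on a structured triangular mesh of the unit square (an m x m grid of squares, each split into two triangles); boundary faces are excluded since their unknowns can be eliminated. This is an illustrative count, not tied to any particular solver.

```python
# DG couples all element unknowns; HDG couples only face unknowns.
# On 2*m^2 triangles there are 3*m^2 - 2*m interior faces.

def dg_dofs(m: int, p: int) -> int:
    """Element unknowns: (p+1)(p+2)/2 modes on each of 2*m^2 triangles."""
    return 2 * m * m * (p + 1) * (p + 2) // 2

def hdg_dofs(m: int, p: int) -> int:
    """Face unknowns: p+1 modes on each of the 3*m^2 - 2*m interior faces."""
    return (3 * m * m - 2 * m) * (p + 1)

m = 32
for p in (1, 2, 3, 4):
    print(f"p={p}: DG {dg_dofs(m, p):7d}  HDG {hdg_dofs(m, p):7d}  "
          f"ratio {hdg_dofs(m, p) / dg_dofs(m, p):.2f}")
```

The ratio improves with polynomial order: element unknowns grow quadratically in p while face unknowns grow only linearly, which is why hybridization pays off most at high order.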
We have developed mesh optimization approaches for hybridized discretizations. In addition to reducing computational costs, the resulting methods improve (1) robustness of the solution through quantitative error estimates, and (2) robustness of the solver through a mesh size continuation approach in which the problem is solved on successively finer meshes.
Ongoing research directions in this area include:
Relevant Publications
Krzysztof J. Fidkowski and Guodong Chen. Output-based mesh optimization for hybridized and embedded discontinuous Galerkin methods. International Journal for Numerical Methods in Engineering, 121(5):867-887, 2019. [ bib | DOI | .pdf ]
Krzysztof J. Fidkowski. A hybridized discontinuous Galerkin method on mapped deforming domains. Computers and Fluids, 139(5):80-91, November 2016. [ bib | DOI | .pdf ]
Computational Fluid Dynamics (CFD) has become an indispensable tool for aerodynamic analysis and design. Driven by increasing computational power and improvements in numerical methods, CFD is at a state where three-dimensional simulations of complex physical phenomena are now routine. However, such capability comes with a new liability: ensuring that the computed solutions are sufficiently accurate. CFD users, experts or not, cannot reliably manage this liability alone for complex simulations. The goal of this research is to develop methods that will assist users and improve the robustness of these simulations. The two key directions of this research are:
Relevant Publications:
Krzysztof J. Fidkowski and David L. Darmofal. Review of output-based error estimation and mesh adaptation in computational fluid dynamics. AIAA Journal, 49(4):673-694, 2011. [ bib | DOI | .pdf ]
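The mechanics of output-based error estimation are easiest to see in the linear case, where the adjoint-weighted residual identity is exact. The sketch below uses a random well-conditioned matrix as a stand-in for a discretized PDE: given an approximate solution, the adjoint converts the residual into the exact output error.

```python
import numpy as np

# Linear model problem: A u = f with scalar output J(u) = g.u.
# For an approximate solution u_H with residual R = A u_H - f, the
# adjoint psi (A^T psi = g) gives  J(u) - J(u_H) = -psi.R  exactly.
rng = np.random.default_rng(1)
n = 50
A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well conditioned
f = rng.standard_normal(n)
g = rng.standard_normal(n)                               # output weights

u = np.linalg.solve(A, f)                 # exact discrete solution
u_H = u + 0.01 * rng.standard_normal(n)   # "coarse" approximate solution

R = A @ u_H - f                           # residual of the approximation
psi = np.linalg.solve(A.T, g)             # discrete adjoint solution
delta_J = -psi @ R                        # adjoint-weighted residual estimate

print(abs((g @ u - g @ u_H) - delta_J))   # agreement to roundoff
```

For nonlinear problems the same structure holds approximately (via linearization), and localizing the product psi.R element by element is what drives the adaptive indicator.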
Relevant Publications and Presentations:
M.A. Ceze and K.J. Fidkowski. Output-Driven Anisotropic Mesh Adaptation for Viscous Flows Using Discrete Choice Optimization. AIAA Paper Number 2010-0170, 2010.
Relevant Publications:
Krzysztof J. Fidkowski and Yuxing Luo. Output-based space-time mesh adaptation for the compressible Navier-Stokes equations. Journal of Computational Physics, 2011.
When only a handful of engineering outputs are of interest, the computational mesh can be tailored to predict those outputs well. The process requires solutions of auxiliary adjoint problems for each output that provide information on the sensitivity of the output to discretization errors in the mesh. This information guides mesh adaptation, so that after a few iterations of the process, the engineer receives an accurate solution along with error bars for the outputs of interest. However, the extra adjoint solutions add a nontrivial amount of computational work. It turns out that for many equations, including Navier-Stokes, there exists one "free" adjoint solution that is related to the amount of entropy generated in the flow. This adjoint is obtained by a simple variable transformation and is therefore quite cheap to implement. An example case adapted using such an entropy adjoint, along with other adaptive indicators for comparison, is presented below. This indicator is particularly well-suited for capturing vortex structures, such as those that persist for extended lengths in rotorcraft problems. Ongoing research is investigating the applicability of the entropy adjoint to unsteady aerospace engineering simulations.
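The variable transformation in question is the classical mapping to entropy variables, v = dS/du, which is available in closed form; that is why this adjoint is essentially free. A minimal sketch for the 1D compressible Euler equations, verified against a finite-difference gradient of the entropy function:

```python
import numpy as np

# Entropy variables for 1D Euler, with S = -rho*s/(gamma-1) and
# s = ln(p) - gamma*ln(rho). State q = (density, momentum, total energy).

def entropy(q, g=1.4):
    rho, m, E = q
    p = (g - 1.0) * (E - 0.5 * m * m / rho)
    s = np.log(p) - g * np.log(rho)
    return -rho * s / (g - 1.0)

def entropy_variables(q, g=1.4):
    """Closed-form v = dS/du (Hughes-Franca-Mallet form)."""
    rho, m, E = q
    u = m / rho
    p = (g - 1.0) * (E - 0.5 * rho * u * u)
    s = np.log(p) - g * np.log(rho)
    return np.array([(g - s) / (g - 1.0) - 0.5 * rho * u * u / p,
                     rho * u / p,
                     -rho / p])

q = np.array([1.2, 0.6, 2.5])   # a physically valid state (p > 0)
v = entropy_variables(q)

# central finite-difference check of v = dS/du
h = 1e-6
fd = np.array([(entropy(q + h * e) - entropy(q - h * e)) / (2 * h)
               for e in np.eye(3)])
print(np.max(np.abs(v - fd)))   # should be near machine precision
```

Because v is an algebraic function of the state, no auxiliary linear system needs to be solved; the adaptive indicator then weights the flow residual by these entropy variables instead of an output-specific adjoint.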
Relevant Publications:
K.J. Fidkowski and P.L. Roe. An Entropy Adjoint Approach to Mesh Refinement. SIAM Journal on Scientific Computing, 32(3), 2010, pp 1261-1287.
K.J. Fidkowski and P.L. Roe. Entropy-based Refinement I: The Entropy Adjoint Approach. 2009 AIAA Computational Fluid Dynamics Conference, June 2009.
The scenario of interest in this project is that of a contaminant dispersed in an urban environment: the concentration diffuses and convects with the wind. The challenge is to use limited sensor measurements to reconstruct where the profile came from and where it is going. Such a large-scale inverse problem quickly becomes intractable for the real-time results that could be vital for decision-making. An accompanying animation illustrates a forward simulation starting from one possible initial concentration; the forward problem alone took 1 hour to run on 32 processors. Two solution approaches are pursued in this project:

C. Lieberman, K. Fidkowski, K. Willcox, and B. van Bloemen Waanders. Hessianbased model reduction: largescale inversion and prediction. International Journal for Numerical Methods in Fluids, 2012. [ bib  DOI  .pdf ]
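The structure of the source-inversion problem can be shown on a deliberately tiny model (illustrative only, orders of magnitude smaller than the urban-dispersion setting): reconstruct an initial contaminant profile on a periodic 1D domain from sparse sensor readings of its advected-diffused evolution, via Tikhonov-regularized least squares.

```python
import numpy as np

# Toy linear source inversion on a periodic 1D grid. One explicit
# upwind advection-diffusion step is the matrix A; sensors read the
# state at several later times; the initial condition is recovered by
# regularized least squares. All numbers are synthetic.
n, c, d = 40, 0.5, 0.1                      # cells, CFL, diffusion number
I = np.eye(n)
A = ((1 - c - 2 * d) * I
     + (c + d) * np.roll(I, -1, axis=1)     # upwind + diffusion coupling
     + d * np.roll(I, 1, axis=1))

x = np.linspace(0, 1, n, endpoint=False)
m_true = np.exp(-100 * (x - 0.3) ** 2)      # true initial concentration

sensors = np.arange(0, n, 5)                # 8 fixed sensor locations
rows, Ak = [], I.copy()
for _ in range(10):                         # readings at 10 time levels
    Ak = A @ Ak
    rows.append(Ak[sensors, :])
G = np.vstack(rows)                         # stacked observation operator

rng = np.random.default_rng(2)
d_obs = G @ m_true + 1e-3 * rng.standard_normal(len(G))

alpha = 1e-4                                # Tikhonov regularization
m_hat = np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d_obs)
rel_err = np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Even in this tiny example the forward operator must be applied many times to build G, which is the cost that motivates the project's model-reduction approach: replace the expensive forward map with a cheap reduced surrogate before inverting.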
Mesh generation around complex geometries can be one of the most time-consuming and user-intensive tasks in practical numerical computation. This is especially true when employing high-order methods, which demand coarse mesh elements that have to be shaped (i.e., curved) to represent surface features with an adequate level of accuracy. Requirements of positive element volumes and adequate geometry fidelity are difficult to enforce in standard boundary-conforming meshes.
[Figures: boundary-conforming mesh (left); cut-cell mesh (right)]
In cut-cell meshing, the requirement that mesh elements conform to the geometry boundary is relaxed, allowing for simple volume-filling background meshes in which the geometry is submerged or "embedded". The airfoil figure above shows an example of such a situation. The difficulty of boundary-conforming mesh generation has been exchanged for a cutting problem, in which arbitrarily shaped cut cells arise from intersections between the background mesh elements and the geometry.
For the geometry, splines are used in 2D and curved triangular patches are used in 3D, as illustrated above. Key to the success of the high-order DG finite element method are the element integration rules, which are derived automatically using Green's theorem. Triangular and tetrahedral background elements are used because they can be stretched to resolve anisotropic features.
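The Green's-theorem idea is that an area integral over an arbitrarily shaped cell can be evaluated using only its boundary: with F an antiderivative of f in x, the integral of f over the cell equals the contour integral of F dy. A minimal sketch (polygonal edges with Gauss quadrature; the actual cut-cell machinery handles curved spline and patch boundaries):

```python
import numpy as np

# Integrate f over a polygon via  ∮ F(x) dy  with dF/dx = f,
# evaluated edge by edge with a 2-point Gauss rule on [0, 1].
GAUSS_T = np.array([0.5 - 0.5 / np.sqrt(3), 0.5 + 0.5 / np.sqrt(3)])
GAUSS_W = np.array([0.5, 0.5])

def cell_integral(verts, F):
    """verts: (k, 2) polygon vertices in order; F: antiderivative of f in x."""
    total = 0.0
    for a, b in zip(verts, np.roll(verts, -1, axis=0)):
        xs = a[0] + GAUSS_T * (b[0] - a[0])      # quadrature points on edge
        total += (b[1] - a[1]) * np.dot(GAUSS_W, F(xs))
    return total

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
tri = np.array([[0., 0.], [1., 0.], [0., 1.]])   # a "cut" corner cell

print(cell_integral(square, lambda x: x))           # area of square: 1
print(cell_integral(tri, lambda x: x))              # area of triangle: 0.5
print(cell_integral(square, lambda x: 0.5 * x**2))  # ∫∫ x dA over square: 0.5
```

The appeal for cut cells is that no interior tessellation of the oddly shaped intersection region is ever needed; the boundary description produced by the cutting step is enough to integrate the high-order basis functions.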
Shown above are Mach number contours from a subsonic Euler simulation around a wing-body configuration. 10,000 curved surface patches were used to represent the geometry, and the final, solution-adapted background mesh for a p=2 solution contained 85,000 elements. Below are boundary-conforming and cut-cell meshes from a viscous simulation over an airfoil. Anisotropic mesh refinement was driven by a drag output error estimate.
[Figures: boundary-conforming mesh (left); cut-cell mesh (right)]
Relevant Publications and Presentations:
K.J. Fidkowski and D.L. Darmofal. A triangular cut-cell adaptive method for high-order discretizations of the compressible Navier-Stokes equations. Journal of Computational Physics, 225, 2007, pp 1653-1672.
K.J. Fidkowski and D.L. Darmofal. An adaptive simplex cut-cell method for discontinuous Galerkin discretizations of the Navier-Stokes equations. AIAA Paper Number 2007-3941, 2007.
In model reduction, a large parameter-dependent system of equations is replaced by a much smaller system that accurately approximates outputs over a certain range of parameters. Many systematic techniques exist for performing such reduction; this work used standard Galerkin projection with proper orthogonal decomposition (POD) for basis construction. To treat the nonlinearity efficiently, a masked-projection technique (similar to gappy POD, missing point estimation, and coefficient function approximation) was used.
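The POD-Galerkin recipe itself is standard and fits in a few lines. The sketch below uses a parameterized 1D diffusion system as a small stand-in for the combustion model: snapshots at training parameter values feed an SVD, the leading left singular vectors form the basis, and new parameter values are solved in the reduced space.

```python
import numpy as np

# POD-Galerkin on a linear model problem  (I + mu*K) u = f,
# with mu the adjustable parameter. The combustion application is
# nonlinear and far larger; this shows only the projection mechanics.
n = 200
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil
f = np.ones(n)

def full_solve(mu):
    return np.linalg.solve(np.eye(n) + mu * K, f)

# snapshots at training parameters, POD basis from their SVD
X = np.column_stack([full_solve(mu) for mu in np.linspace(0.1, 1.0, 15)])
U, S, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :8]                                             # POD basis, r = 8

def reduced_solve(mu):
    Ar = V.T @ (np.eye(n) + mu * K) @ V                  # Galerkin projection
    return V @ np.linalg.solve(Ar, V.T @ f)              # lift back to full space

mu_test = 0.37                                           # unseen parameter
err = (np.linalg.norm(full_solve(mu_test) - reduced_solve(mu_test))
       / np.linalg.norm(full_solve(mu_test)))
print(f"relative reduced-model error at mu={mu_test}: {err:.2e}")
```

Because the solution varies smoothly with the parameter, the snapshot singular values decay rapidly and a handful of basis vectors reproduces unseen parameter values accurately; the masked-projection step mentioned above exists to keep this cheap when the operator is nonlinear.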
To demonstrate the model reduction technique, a scalar convection-diffusion-reaction problem was considered. The scenario consists of fuel injected into a combustion chamber and left to react with a surrounding oxidizer as it convects downstream. A 2D unsteady simulation with a pulsating injection concentration illustrates the scenario. Reduction of a steady 3D combustion chamber was performed in parallel, reducing the degrees of freedom (DOF) from 8.5 million to 40. Sample fuel concentration profiles are illustrated below.
[Figures: full system, 8.5 million DOF, 13 h CPU time (left); reduced system, 40 DOF, negligible CPU time (right)]
In these simulations, the outputs consisted of average fuel concentrations downstream, while the parameters were those entering into the nonlinear reaction rate expression. The parameters of interest remained adjustable in the reduced model, and the reduced model was verified to accurately reproduce outputs over a bounded input parameter set.
One application of such a reduced model is solving inverse problems via a Bayesian inference approach. The inverse problem considered consisted of estimating reaction rate parameters from measured fuel concentrations. The small size of the reduced model made Markov chain Monte Carlo (MCMC) sampling feasible (equivalent sampling with the full system would take almost 8 years of CPU time). The MCMC sample histories for two reaction rate parameters and the resulting histograms after 5000 samples are shown below.
[Figures: MCMC sample histories (left); posterior histograms (right)]
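The inference step can be sketched with a random-walk Metropolis sampler and a deliberately tiny "model" standing in for the reduced combustion model: infer a single reaction-rate-like parameter k from noisy observations of y(t) = exp(-k t). All numbers below are synthetic.

```python
import numpy as np

# Random-walk Metropolis for one rate parameter with a Gaussian
# likelihood and a flat positivity prior. Each posterior evaluation
# costs one cheap forward-model solve, which is what the reduced
# model makes affordable at MCMC sample counts.
rng = np.random.default_rng(3)

t = np.array([0.5, 1.0, 1.5, 2.0])
k_true, sigma = 1.0, 0.01
y_obs = np.exp(-k_true * t) + sigma * rng.standard_normal(len(t))

def log_post(k):
    if k <= 0.0:
        return -np.inf                       # enforce positivity
    r = y_obs - np.exp(-k * t)
    return -0.5 * np.sum(r * r) / sigma**2

k, lp = 0.5, log_post(0.5)                   # deliberately poor start
samples = []
for _ in range(5000):
    k_prop = k + 0.05 * rng.standard_normal()
    lp_prop = log_post(k_prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
        k, lp = k_prop, lp_prop
    samples.append(k)

post = np.array(samples[1000:])              # discard burn-in
print(f"posterior mean {post.mean():.3f} +/- {post.std():.3f}")
```

The sample history and histogram of `post` are the 1-parameter analogues of the two-parameter figures above; with the full 8.5-million-DOF model, each of the 5000 posterior evaluations would require a complete 3D solve.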
Relevant Publications and Presentations:
D. Galbally, K. Fidkowski, K. Willcox, and O. Ghattas. Nonlinear Model Reduction for Uncertainty Quantification in Large-Scale Inverse Problems. International Journal for Numerical Methods in Engineering, 81(12), 2009, pp 1581-1603.
Nonlinear Model Reduction for Uncertainty Quantification in Large-Scale Inverse Problems. Computational Aerospace Sciences Seminar, October 2008.