My research interests include the development of robust solution techniques for computational fluid dynamics, error estimation, computational geometry management, parallel computation, large-scale model reduction, and design under uncertainty. Some current and recent research projects are:
Output-based error estimation and mesh adaptation
Adaptive RANS calculations with the discontinuous Galerkin method
Unsteady output-based adaptation
Entropy-adjoint approach to mesh refinement
Contaminant source inversion
Cut-cell meshing
Nonlinear model reduction for inverse problems
Computational Fluid Dynamics (CFD) has become an indispensable tool for aerodynamic analysis and design. Driven by increasing computational power and improvements in numerical methods, CFD is at a state where three-dimensional simulations of complex physical phenomena are now routine. However, such capability comes with a new liability: ensuring that the computed solutions are sufficiently accurate. CFD users, experts or not, cannot reliably manage this liability alone for complex simulations. The goal of this research is to develop methods that will assist users and improve the robustness of these simulations. The two key directions of this research are:
Relevant Publications and Presentations:
K.J. Fidkowski and D.L. Darmofal. Output-Based Error Estimation and Mesh Adaptation in Computational Fluid Dynamics: Overview and Recent Results. AIAA Journal, 2010, accepted.
Output-Based Error Estimation and Mesh Adaptation in Computational Fluid Dynamics: Overview and Recent Results. 2009 AIAA Aerospace Sciences Meeting, January 2009.
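The adjoint-weighted residual idea at the core of output-based error estimation can be illustrated on a small linear model problem. The sketch below is purely illustrative (the 1D Poisson setup, grid sizes, and variable names are assumptions, not the project codes): an adjoint solved on a uniformly refined grid weights the fine-grid residual of the injected coarse solution to estimate the error in an integral output. For a linear problem and linear output, this estimate recovers the fine-space output change exactly.

```python
import numpy as np

def poisson_system(n):
    """1D Poisson -u'' = 1 on (0,1) with zero Dirichlet BCs, n interior nodes."""
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return A, h

# Coarse primal solve; output J = integral of u (simple quadrature sum)
nC = 15
A, h = poisson_system(nC)
x = np.linspace(h, 1.0 - h, nC)
u = np.linalg.solve(A, np.ones(nC))

# Fine space via uniform refinement; inject the coarse solution
nF = 2 * nC + 1
AF, hF = poisson_system(nF)
xF = np.linspace(hF, 1.0 - hF, nF)
u_inj = np.interp(xF, x, u)
cF = hF * np.ones(nF)                  # fine-space output functional

# Adjoint-weighted residual: psi solves the transposed system with the
# output functional as right-hand side; weighting the fine-grid residual
# of the injected solution estimates the output error
psi = np.linalg.solve(AF.T, cF)
dJ_est = psi @ (np.ones(nF) - AF @ u_inj)

# For this linear problem the estimate equals the actual fine-space change
uF = np.linalg.solve(AF, np.ones(nF))
print(dJ_est, cF @ uF - cF @ u_inj)
```

In a real adaptive solver the element-wise contributions of `psi * residual` would additionally serve as local refinement indicators.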
Relevant Publications and Presentations:
M.A. Ceze and K.J. Fidkowski. Output-Driven Anisotropic Mesh Adaptation for Viscous Flows Using Discrete Choice Optimization. AIAA Paper Number 2010-0170, 2010.
Relevant Publications and Presentations:
In progress
When only a handful of engineering outputs are of interest, the computational mesh can be tailored to predict those outputs well. The process requires solutions of auxiliary adjoint problems, one per output, that provide information on the sensitivity of the output to discretization errors in the mesh. This information guides mesh adaptation, so that after a few iterations of the process, the engineer receives an accurate solution along with error bars for the outputs of interest. However, the extra adjoint solutions add a nontrivial amount of computational work. It turns out that for many equations, including the Navier-Stokes equations, there exists one "free" adjoint solution that is related to the amount of entropy generated in the flow. This adjoint is obtained by a simple variable transformation and is therefore quite cheap to implement. An example case adapted using such an entropy adjoint, along with other adaptive indicators for comparison, is presented below. This indicator is particularly well-suited for capturing vortex structures, such as those that persist for extended lengths in rotorcraft problems. Ongoing research is investigating the applicability of the entropy adjoint to unsteady aerospace engineering simulations.
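The "simple variable transformation" mentioned above maps the conservative state to entropy variables, which serve directly as the free adjoint. A minimal sketch for the 1D Euler equations is below; the entropy function S = -ρs/(γ-1) with s = ln(p/ρ^γ) and the resulting formulas follow the standard symmetrization literature, and this is an illustration rather than the project code. The transformation is verified against a finite-difference gradient of S.

```python
import numpy as np

gamma = 1.4  # ratio of specific heats (perfect gas assumed)

def S(U):
    """Entropy function S = -rho*s/(gamma-1), s = ln(p/rho^gamma)."""
    rho, rhou, rhoE = U
    p = (gamma - 1.0) * (rhoE - 0.5 * rhou**2 / rho)
    return -rho * np.log(p / rho**gamma) / (gamma - 1.0)

def entropy_vars(U):
    """Entropy variables v = dS/dU for the 1D Euler state (rho, rho*u, rho*E)."""
    rho, rhou, rhoE = U
    u = rhou / rho
    p = (gamma - 1.0) * (rhoE - 0.5 * rho * u**2)
    s = np.log(p / rho**gamma)
    return np.array([(gamma - s) / (gamma - 1.0) - 0.5 * rho * u**2 / p,
                     rhou / p,
                     -rho / p])

# Check the analytic transformation against a finite-difference gradient of S
U0 = np.array([1.2, 0.6, 2.8])        # a sample state with positive pressure
eps = 1e-6
fd = np.array([(S(U0 + eps * e) - S(U0 - eps * e)) / (2 * eps)
               for e in np.eye(3)])
print(np.max(np.abs(entropy_vars(U0) - fd)))   # near machine-level agreement
```

Because the entropy variables come from the flow solution itself, no auxiliary adjoint system need be solved, which is what makes this indicator essentially free.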
Relevant Publications and Presentations:
K.J. Fidkowski and P.L. Roe. An Entropy Adjoint Approach to Mesh Refinement. SIAM Journal on Scientific Computing, 32(3), 2010, pp. 1261-1287.
Entropy-based Refinement I: The Entropy Adjoint Approach. 2009 AIAA Computational Fluid Dynamics Conference, June 2009.
The scenario of interest in this project is that of a contaminant dispersed in an urban environment: the concentration diffuses and convects with the wind. The challenge is to use limited sensor measurements to reconstruct where the contaminant profile came from and where it is going. Such a large-scale inverse problem quickly becomes intractable for the real-time results that could be vital for decision-making. The animation to the right illustrates a forward simulation starting from one possible initial concentration; the forward problem alone took 1 hour to run on 32 processors. Two solution approaches are pursued in this project:

Relevant Publications and Presentations:
In progress
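Though far simpler than the urban dispersion setting above, a 1D advection-diffusion toy problem shows the structure of the source-inversion task. Everything below (grid, Gaussian "puff" basis, sensor locations, regularization weight) is an illustrative assumption, and the synthetic data come from the same discrete model used for inversion, so this is a best-case sketch rather than a realistic reconstruction.

```python
import numpy as np

# 1D advection-diffusion surrogate on a periodic domain:
# c_t + a c_x = d c_xx, upwind advection + central diffusion, explicit in time
n, a, d, dt, nsteps = 100, 1.0, 0.005, 0.004, 250
h = 1.0 / n
x = np.arange(n) * h

A = np.zeros((n, n))
for i in range(n):
    A[i, i] += 1.0 - a * dt / h - 2.0 * d * dt / h**2
    A[i, (i - 1) % n] += a * dt / h + d * dt / h**2
    A[i, (i + 1) % n] += d * dt / h**2
M = np.linalg.matrix_power(A, nsteps)      # linear map from c(0) to c(T)

# Initial concentration parameterized by amplitudes of fixed Gaussian puffs
centers = [0.2, 0.4, 0.6]
basis = np.array([np.exp(-((x - xc) / 0.05)**2) for xc in centers]).T
sensors = [10, 35, 60, 85]                 # assumed sensor node indices

true_amp = np.array([1.0, 0.0, 0.5])
data = (M @ (basis @ true_amp))[sensors]   # synthetic sensor readings

# Tikhonov-regularized linear least squares for the puff amplitudes
G = (M @ basis)[sensors, :]                # sensors-by-puffs forward map
lam = 1e-8
amp = np.linalg.solve(G.T @ G + lam * np.eye(3), G.T @ data)
print(amp)                                 # close to true_amp
```

In the actual urban-scale problem the forward map is far too expensive to form, which is what motivates the reduced-order and adjoint-based strategies pursued in the project.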
Mesh generation around complex geometries can be one of the most time-consuming and user-intensive tasks in practical numerical computation. This is especially true when employing high-order methods, which demand coarse mesh elements that must be shaped (i.e., curved) to represent surface features with an adequate level of accuracy. The requirements of positive element volumes and adequate geometry fidelity are difficult to enforce in standard boundary-conforming meshes.
Boundary-conforming mesh | Cut-cell mesh
In cut-cell meshing, the requirement that mesh elements conform to the geometry boundary is relaxed, allowing for simple volume-filling background meshes in which the geometry is submerged, or "embedded". The airfoil figure on the right above shows an example of such a situation. The difficulty of boundary-conforming mesh generation has been exchanged for a cutting problem, in which arbitrarily-shaped cut cells arise from intersections between the background mesh elements and the geometry.
For the geometry, splines are used in 2D and curved triangular patches in 3D, as illustrated above. Key to the success of the high-order discontinuous Galerkin (DG) finite element method are the element integration rules, which are derived automatically using Green's theorem. Triangular and tetrahedral background elements are used because they can be stretched to resolve anisotropic features.
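The Green's theorem idea behind the integration rules can be sketched on a polygonal cut cell: area integrals of monomials reduce to line integrals over the cell boundary, accumulated edge by edge. The actual method handles spline and curved-patch boundaries and higher-order integrands; the snippet below (with hypothetical vertex data) shows only the polygonal, low-order case.

```python
import numpy as np

def polygon_moments(verts):
    """Area and first moments of a counterclockwise polygon via Green's
    theorem: integrals of 1, x, y over the cell become edge line integrals."""
    v = np.asarray(verts, float)
    x0, y0 = v[:, 0], v[:, 1]
    x1, y1 = np.roll(x0, -1), np.roll(y0, -1)   # next vertex on each edge
    cross = x0 * y1 - x1 * y0                   # per-edge contribution
    area = 0.5 * np.sum(cross)
    Mx = np.sum(cross * (x0 + x1)) / 6.0        # integral of x over the cell
    My = np.sum(cross * (y0 + y1)) / 6.0        # integral of y over the cell
    return area, Mx, My

# A triangular background element with one corner cut off by the geometry
cut_cell = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.5), (0.0, 0.75)]
A, Mx, My = polygon_moments(cut_cell)
print(A, Mx / A, My / A)   # area 0.4375 and the cut cell's centroid
```

Higher-order quadrature rules follow the same pattern, with higher monomial moments matched to quadrature-point weights inside the cell.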
Shown above are Mach number contours from a subsonic Euler simulation around a wing-body configuration. 10,000 curved surface patches were used to represent the geometry, and the final, solution-adapted background mesh for a p=2 solution contained 85,000 elements. Below are boundary-conforming and cut-cell meshes from a viscous simulation over an airfoil. Anisotropic mesh refinement was driven by a drag-output error estimate.
Boundary-conforming mesh | Cut-cell mesh
Relevant Publications and Presentations:
K.J. Fidkowski and D.L. Darmofal. A triangular cut-cell adaptive method for high-order discretizations of the compressible Navier-Stokes equations. Journal of Computational Physics, 225, 2007, pp. 1653-1672.
K.J. Fidkowski and D.L. Darmofal. An adaptive simplex cut-cell method for discontinuous Galerkin discretizations of the Navier-Stokes equations. AIAA Paper Number 2007-3941, 2007.
In model reduction, a large parameter-dependent system of equations is replaced by a much smaller system that accurately approximates outputs over a certain range of parameters. Many systematic techniques exist for performing such reduction; this work used standard Galerkin projection with proper orthogonal decomposition (POD) for basis construction. To treat the nonlinearity efficiently, a masked-projection technique (similar to gappy POD, missing point estimation, and coefficient function approximation) was used.
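The reduction pipeline (snapshots, POD basis via SVD, Galerkin projection) can be sketched on a small linear system with an assumed affine parameter dependence; the masked-projection treatment of nonlinearity used in the actual work is not shown here, and all operators and sizes below are illustrative.

```python
import numpy as np

n, m, r = 200, 30, 5       # full DOF, snapshot count, reduced DOF

# Full model A(mu) u = f with affine parameter dependence (assumed form)
A0 = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A1 = np.diag(np.linspace(0.0, 1.0, n))
f = np.ones(n)
solve_full = lambda mu: np.linalg.solve(A0 + mu * A1, f)

# 1) Collect snapshots over parameter samples; 2) POD basis from their SVD
snaps = np.column_stack([solve_full(mu) for mu in np.linspace(0.1, 2.0, m)])
V = np.linalg.svd(snaps, full_matrices=False)[0][:, :r]   # leading POD modes

# 3) Galerkin projection: reduced operators are precomputed once, so each
# new parameter query costs only an r-by-r solve
A0r, A1r, fr = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ f
solve_reduced = lambda mu: V @ np.linalg.solve(A0r + mu * A1r, fr)

mu_test = 0.77             # a parameter value not in the snapshot set
err = (np.linalg.norm(solve_full(mu_test) - solve_reduced(mu_test))
       / np.linalg.norm(solve_full(mu_test)))
print(err)                 # small relative error despite 200 -> 5 DOF
```

The same structure, with masked projection replacing the exact reduced operators, is what makes 8.5-million-DOF combustion models reducible to tens of unknowns.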
To demonstrate the model reduction technique, a scalar convection-diffusion-reaction problem was considered. The scenario consists of fuel injected into a combustion chamber and left to react with a surrounding oxidizer as it convects downstream. A 2D unsteady simulation is shown at left for a pulsating injection concentration. Reduction of a steady 3D combustion chamber was performed in parallel, reducing the degrees of freedom (DOF) from 8.5 million to 40. Sample fuel concentration profiles are illustrated below.
Full system: 8.5 million DOF, 13 h CPU time | Reduced system: 40 DOF, negligible CPU time
In these simulations, the outputs consisted of average fuel concentrations downstream, while the parameters were those entering into the nonlinear reaction rate expression. The parameters of interest remained adjustable in the reduced model, and the reduced model was verified to accurately reproduce outputs over a bounded input parameter set.
One application of such a reduced model is solving inverse problems via a Bayesian inference approach. The inverse problem considered consisted of estimating reaction rate parameters from measured fuel concentrations. The small size of the reduced model made Markov chain Monte Carlo (MCMC) sampling feasible (equivalent sampling with the full system would take almost 8 years of CPU time). The MCMC sample histories for two reaction rate parameters and the resulting histograms after 5000 samples are shown below.
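A random-walk Metropolis sampler of the kind used for such MCMC studies fits in a few lines. The two-parameter "model" below is a hypothetical stand-in for the reduced combustion model (the real posterior was sampled through the 40-DOF ROM), and the prior, noise level, and proposal width are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reduced forward model: two reaction-rate-like parameters
# mapped to two scalar concentration outputs
def model(theta):
    k1, k2 = theta
    return np.array([k1 * np.exp(-k2), k1 / (1.0 + k2)])

theta_true = np.array([1.5, 0.8])
sigma = 0.005                              # assumed measurement noise level
data = model(theta_true) + sigma * rng.normal(size=2)

def log_post(theta):
    # Flat prior on theta > 0 with a Gaussian measurement likelihood
    if np.any(theta <= 0.0):
        return -np.inf
    r = model(theta) - data
    return -0.5 * np.sum(r**2) / sigma**2

# Random-walk Metropolis: propose, then accept with probability
# min(1, posterior ratio); each step needs only a cheap reduced-model solve
theta = np.array([1.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(5000):
    prop = theta + 0.05 * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)
print(chain[2500:].mean(axis=0))           # posterior means after burn-in
```

Because each sample requires a forward solve, the cost ratio between full and reduced models (hours versus milliseconds) translates directly into whether thousands of samples are feasible.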
MCMC samples | Posterior histogram
Relevant Publications and Presentations:
D. Galbally, K. Fidkowski, K. Willcox, and O. Ghattas. Nonlinear Model Reduction for Uncertainty Quantification in Large-Scale Inverse Problems. International Journal for Numerical Methods in Engineering, 81(12), 2009, pp. 1581-1603.
Nonlinear Model Reduction for Uncertainty Quantification in Large-Scale Inverse Problems. Computational Aerospace Sciences Seminar, October 2008.