Workshop on Multiscale Modeling

Workshop on Multiscale Modeling and its Applications: From Weather and Climate Models to Models of Materials Defects

April 25 - 29, 2016, The Fields Institute


1. Christiane Jablonowski (University of Michigan)

Abstract

The talk reviews two approaches to high-order variable-resolution modeling that have recently been designed for atmospheric weather and climate models. The first approach is based on the Adaptive Mesh Refinement (AMR) library Chombo that supports fourth-order finite volume methods for block-structured adaptive meshes on cubed-sphere grids. The Chombo-AMR model has been jointly developed by the Lawrence Berkeley National Laboratory and the University of Michigan. The second variable-resolution grid approach is based on the Spectral Element (SE) method that has been implemented on a cubed-sphere grid in the Community Atmosphere Model (CAM). The latter has been jointly developed by the National Center for Atmospheric Research (NCAR) and various Department of Energy laboratories.

The talk discusses the characteristics of both variable-resolution mesh techniques using a hierarchy of test cases and fluid flow scenarios. In particular, the AMR-Chombo model is evaluated in the 2D shallow-water framework, and various refinement criteria are compared. The CAM-SE model is first assessed in a dry 3D dynamical core mode. In addition, a water-covered Earth (aqua-planet) configuration and realistic model setups with topography are tested. Tropical cyclones serve as the main motivating example and highlight the scientific potential of the variable-resolution mesh approach. Special attention is paid to the flow conditions in the grid-resolution transition regions that have the potential to exhibit grid imprinting. It is shown that the high-order numerical methods successfully suppress spurious noise without the need for special diffusive mechanisms in the grid transition zones.

Notes

  • Chombo - dynamic Block-structured AMR
  • CAM-SE model - static refinement
  • 4th order and multi-block refinement
  • Chombo-AMR - subcycle and interpolate between levels during temporal integration
    • Potentially yields only first-order accuracy at the finer levels
    • Question: what is the observed convergence rate? (see the order-estimation sketch after this list)
    • Why use multi-level block refinement ? Do you use multigrid for acceleration ?
  • The numerics are consistent, i.e., with UMR and AMR at the same resolution the error is the same
    • But what about computational aspects ?
  • More errors along the cubed sphere surface interface
    • Question: Why ?
  • Interestingly, the $L_{2}$ error converges consistently (4th order) but $L_{\infty}$ does not, due to a particular point polluting the solution and propagating the error during temporal integration
  • Computationally, AMR has a long way to go w.r.t. matching the efficiency of statically refined meshes
  • Question: Would there be a big advantage with unstructured meshes and locally adaptive meshing ?
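The convergence-rate question above can be checked mechanically. A minimal sketch (my own illustration, with hypothetical error values) of estimating the observed order from errors at successive resolutions, which is also how an $L_{2}$ rate of 4 can coexist with a degraded $L_{\infty}$ rate:

```python
# Estimate observed convergence order from errors at successive mesh
# resolutions: p = log(e_coarse / e_fine) / log(r), with refinement ratio r.
# The error values below are hypothetical, chosen only to illustrate the
# L2-vs-Linf point raised above.
import math

def observed_order(e_coarse, e_fine, ratio=2.0):
    """Richardson-style estimate of the convergence order."""
    return math.log(e_coarse / e_fine) / math.log(ratio)

l2_errors   = [1.6e-3, 1.0e-4, 6.3e-6]   # ~4th order
linf_errors = [4.0e-3, 1.1e-3, 3.0e-4]   # ~2nd order: one bad point dominates

for name, errs in [("L2", l2_errors), ("Linf", linf_errors)]:
    rates = [observed_order(a, b) for a, b in zip(errs, errs[1:])]
    print(name, [f"{p:.2f}" for p in rates])
```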

2. Kyle Mandli, Columbia University (NY)

Abstract

Coastal hazards are a growing problem worldwide due not only to current and projected sea-level rise but also to increasing populations and economic dependence on coastal areas. Coastal hazards related to strong storms are today among the most frequently recurring and widespread hazards to coastal communities. In particular storm surge, the rise of the sea surface in response to wind and pressure forcing from these storms, can have a devastating effect on the coastline. Furthermore, with the addition of climate-change-related effects, the ability to predict these events quickly and accurately is critical to the protection and sustainability of these coastal areas.

Computational approaches to this problem must be able to handle its multi-scale nature while remaining computationally tractable and physically relevant. This has commonly been accomplished by solving depth-averaged sets of fluid equations and by employing non-uniform and unstructured grids. These approaches, however, have often had shortcomings due to computational expense, the need for involved model tuning, and missing physics. Additionally, to answer some of the pressing questions regarding mitigation strategies, the ability to represent the relevant scales of protective structures (on the scale of meters) to oceanic basins (hundreds of kilometers) is critical.

In this talk, I will outline some of the approaches we are developing to address several of these shortcomings and address the multi-scale issues inherent in the problem. These approaches include adaptive mesh refinement, embedded physics and cut-cell discretizations, and more accurate model equations such as the two-layer shallow water equations. Combining these new approaches promises to address some of the problems in current state-of-the-art models while continuing to decrease the computational overhead needed to calculate a forecast or climate scenario.

Notes

  • Lots of high-resolution models are needed
  • Typically need feedback within 30 minutes for real prediction and forecast
  • So it involves approximations; NOAA runs about 60,000 perturbations every hour
  • Base idea is that there are several important issues that still need resolution

This was a good, interactive talk; the take-away is that storm modeling needs efficient methods even at coarser scales.


3. Hilary Weller, Reading University (UK)

Optimally transported meshes on the Sphere for Global Atmospheric Modelling

Abstract

Numerical weather and climate predictions could be dramatically improved with the use of adaptive meshes - locally varying the spatial resolution to improve accuracy. R-adaptivity (mesh redistribution) is an attractive form of adaptivity since it does not involve altering the mesh connectivity, does not create load balancing problems on parallel computers, does not require mapping solutions between different meshes, does not lead to sudden changes in resolution and can be retro-fitted into existing models.

Optimal transport techniques can be used to create r-adapted meshes in Euclidean geometry which are guaranteed not to tangle by solving a Monge–Ampère equation. However, these techniques do not apply directly on the surface of the sphere. I will describe the first numerical method for solving an equation of Monge–Ampère type on the surface of the sphere in order to generate tangle-free r-adapted meshes on the sphere.

Joint work with Philip Browne, Chris Budd and Mike Cullen.

Notes

Motivation to compute user-specified resolution without introducing new mesh points

  • r-Adaptivity instead of h-adaptivity
  • No load balancing
  • Equidistribution: $A(x)\, m(x) = \text{const}$, where $A(x)$ is the mesh monitor (cell area) function and $m(x)$ is the map density; the map is then $x = \xi + \nabla \phi$ (see the 1-D sketch after this list)
  • Monge–Ampère equation for the mesh potential $\phi$
    But on a sphere the Monge–Ampère potential doesn't map directly, so use exponential maps instead: $x = \exp_{\xi}(\nabla \phi)$
  • Use iterative mesh optimization with Lloyd’s algorithm
  • Demonstration of Monge–Ampère on a sphere for polygonal meshes; 3-D r-adaptivity is an open topic
  • Excellent results and similar to some of my explorations with r-adaptivity and AMR with MA
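A minimal 1-D sketch of the equidistribution idea behind r-adaptivity (my own illustration; the talk's actual method solves a Monge–Ampère-type equation on the sphere): choose the map $x(\xi)$ so that the monitor is equidistributed, by inverting its cumulative integral.

```python
# 1-D r-adaptivity by equidistribution: pick x(xi) so that
# m(x) * dx/dxi = const, i.e. x(xi) = M^{-1}(xi * M(1)) with M the
# cumulative integral of the monitor m. All names are illustrative.
import numpy as np

def equidistribute(monitor, n_cells=32, n_quad=2000):
    """Return mesh points x_i in [0,1] equidistributing the monitor."""
    s = np.linspace(0.0, 1.0, n_quad)
    m = monitor(s)
    M = np.concatenate([[0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(s))])
    xi = np.linspace(0.0, 1.0, n_cells + 1)
    return np.interp(xi * M[-1], M, s)   # invert M by interpolation

# Monitor concentrating resolution near x = 0.5 (e.g. a front or storm track).
mesh = equidistribute(lambda x: 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2))
print(np.diff(mesh).min(), np.diff(mesh).max())  # small cells near 0.5
```

Because the map comes from a monotone cumulative integral, the 1-D mesh cannot tangle; the Monge–Ampère equation plays the analogous role in 2-D.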

4. Peter Lauritzen (NCAR)

Separating dynamics, physics and tracer transport grids in a global climate model

Abstract

In the context of NCAR’s global climate model CAM-SE (Community Atmosphere Model - Spectral-Elements) based on a cubed-sphere spectral-element dynamical core, we explore the consequences of separating dynamics, physics and tracer transport grids. To maintain important conservation properties, conservative spectral-element to finite-volume grid (and vice versa) interpolation methods had to be developed as well as consistent coupling between the finite-volume transport scheme and spectral-element dynamics. The model has the functionality to compute the sub-grid-scale parameterizations (physics) based on a dynamics state integrated over a coarser or finer resolution finite-volume grid. The consequences of separating grids, and thereby spatial scales, on the model climate will be discussed.

Notes

Tracer transport and conservation requirements

  • Coupled model development
    • Large range of spatiotemporal scales
    • Typically (25-33 tracers); whole atmosphere (60-135 tracers)
  • Resolvable scale - dynamical cores
    • Track tracers: Water vapor, aerosols
  • Physics module - boundary layer, drag, convection
  • Question: "dribbling"? Is it the same as subcycling?
  • SE grid : GLL quadrature (tensor product 1-d nodal basis)
  • Need mass and energy conservation – essential
  • Lots of energy loss and artificial gains – currently global fix-ups are used for energy conservation
  • CSLAM - locally conservative; couples the SE and FV solvers for tracer transport (see the 1-D conservative-remap sketch after this list)
  • MPI performance: CSLAM reduces memory/tracer and scales all the way to 1 element/processor
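A cartoon of the conservation requirement when mapping fields between grids: a first-order conservative remap in 1-D (my own sketch, not the CSLAM algorithm itself, which is a 2-D semi-Lagrangian scheme). Target cell averages are overlap-weighted source averages, so total mass is preserved exactly:

```python
import numpy as np

def conservative_remap(src_edges, src_vals, dst_edges):
    """First-order conservative remap of cell averages between 1-D grids."""
    dst_vals = np.zeros(len(dst_edges) - 1)
    for j in range(len(dst_edges) - 1):
        lo, hi = dst_edges[j], dst_edges[j + 1]
        mass = 0.0
        for i in range(len(src_edges) - 1):
            # Length of overlap between source cell i and target cell j.
            overlap = max(0.0, min(hi, src_edges[i + 1]) - max(lo, src_edges[i]))
            mass += src_vals[i] * overlap
        dst_vals[j] = mass / (hi - lo)
    return dst_vals

src_e = np.linspace(0, 1, 10)    # stand-in for the spectral-element grid
dst_e = np.linspace(0, 1, 7)     # stand-in for the finite-volume grid
q = np.sin(np.pi * 0.5 * (src_e[:-1] + src_e[1:]))
q2 = conservative_remap(src_e, q, dst_e)
print(np.dot(q, np.diff(src_e)), np.dot(q2, np.diff(dst_e)))  # equal masses
```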

Coupler and dynamics issues

  • non-uniform sampling of atmospheric state - based on GLL quadrature points in elements
  • ongoing discussion whether 2×2, 3×3, or 4×4 physics grids per element are best for physics coupling (3×3 represents the same resolution as the dycore grid)
  • stationary grid scale forcing

5. Angelo Iollo, Université de Bordeaux and Inria

Numerical Modeling of Multi-Physics/Multi-Scale Phenomena on Cartesian Hierarchical Grids

Abstract

The study of complex multi-physics/multi-scale phenomena requires advanced numerical modeling. These problems are insoluble by traditional theoretical and experimental approaches, hazardous to study in the laboratory, or time-consuming and expensive to solve by classical means. Our objective is to simplify the numerical modeling of problems involving complex unsteady geometries and multi-scale physical phenomena. Rather than using extremely optimized but non-scalable schemes, we adopt robust alternatives that bypass the difficulties linked to unsteady grid generation, a prohibitive task when the boundaries are moving and the topology is complex and unsteady. Hierarchical Cartesian schemes allow the multi-scale solution of PDEs on non body-fitted meshes with a drastic reduction of the computational setup overhead. These methods are easily parallelizable and they are efficiently mapped to HPC architectures. Through examples relating to fluid-structure interaction, high-speed impacts, rarefied flows and material science, we plan to show how appropriate mathematical modeling, hierarchical Cartesian schemes and HPC can contribute to the simulation of new challenging complex phenomena in physics.

Joint work with M. Bergmann, F. Bernard, A. de Brauer, M. Cisternino, T. Milcent, A. Raeli, F. Tesser, and L. Weynans.

Notes

Hyperelastic solid deformation with Cartesian grids

  • Use reference configuration to map to deformed configuration
  • Eulerian elasticity
  • Momentum, continuity and energy on deformed grid
  • For octree-based representations, use space-filling curves (SFC) for numbering
    • Use the SFC for the locally refined quadtree
    • Persistent label - the Morton index (see the sketch after this list)
    • Comparison between FD and FV - about the same
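The Morton index mentioned above is just bit interleaving of the cell coordinates; a small sketch (function names are illustrative):

```python
def morton_index(i, j, bits=16):
    """Interleave the bits of (i, j) to get the Z-order (Morton) index."""
    code = 0
    for b in range(bits):
        code |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
    return code

# Cells that are close in space tend to be close along the curve, which is
# what makes the Morton index a convenient persistent label for locally
# refined quadtrees (and for partitioning them across processors).
for cell in [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)]:
    print(cell, morton_index(*cell))
```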

6. Weiqing Ren, National University of Singapore and IHPC

Coupled Atomistic-Continuum Methods for the study of Complex Fluids and Micro-Fluidics

Abstract

In many areas of science and engineering, we face the problem that we are interested in analyzing the macro-scale behavior of a given system, but we do not have an explicit and accurate macroscopic model for the macro-scale quantities that we are interested in. On the other hand, we do have a microscopic model (e.g. molecular dynamics) with satisfactory accuracy — the difficulty being that solving the full microscopic model is far too inefficient.

In this talk, I present a seamless multiscale method, which captures the macro-scale behavior of a system with the help of a micro-scale model. In the seamless algorithm, the micro model supplies the data which is needed but missing in the macro model. The two models evolve simultaneously using their intrinsic time steps, and they exchange data at every time step. We illustrate the multiscale method using examples of complex fluids, in which the macroscopic conservation laws are coupled with molecular dynamics.

In the second part of the talk, I discuss a classical problem in fluid mechanics – the moving contact line problem. It is well-known that the classical Navier-Stokes equation with no-slip boundary condition predicts a non-physical singularity at the contact line with infinite rate of energy dissipation. In this talk, I show how continuum theory and molecular dynamics can be combined to give us a better understanding of the fundamental physics of the moving contact line and formulate simple and effective models. I also illustrate how this model can be used to analyze hysteresis and other important physical problems for the moving contact line.

Notes

Seamless multiscale methods (joint work with Weinan E.)

  • Focus is on dynamics of complex fluids and molecular dynamics
  • time scale of macro dynamics » time scale of molecular dynamics
    • So at each macro step, the MD model is fully relaxed and in equilibrium
    • The coupling between the models is done sequentially (no iteration); see the toy seamless-coupling sketch after this list
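A toy sketch of the seamless coupling idea (a cartoon with an assumed relaxation model, not the talk's MD setup): both models advance with their own intrinsic steps and exchange data every iteration, with no reinitialization of the micro state.

```python
# Toy "seamless" coupling: a slow variable x driven by a fast variable y
# that relaxes quickly toward f(x). Both models advance with their own
# intrinsic step every iteration and exchange data each step. The function
# f, the ratio dt_macro/dt_micro, and all step sizes are my assumptions.
import math

eps = 1e-3
f = lambda x: math.sin(x)         # assumed equilibrium of the fast process
dt_micro = 0.2 * eps              # resolves the fast relaxation
dt_macro = 5.0 * dt_micro         # boosted macro clock; the ratio is tunable

x, y, t = 1.0, 0.0, 0.0
while t < 2.0:
    y += dt_micro * (f(x) - y) / eps      # micro model sees current macro state
    x += dt_macro * (-x + y)              # macro model uses current micro data
    t += dt_macro

# Effective model for comparison: dx/dt = -x + sin(x).
x_eff = 1.0
for _ in range(int(2.0 / dt_macro)):
    x_eff += dt_macro * (-x_eff + math.sin(x_eff))
print(x, x_eff)   # the coupled solve tracks the averaged equation
```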

7. Colin Cotter, Imperial College, London

Compatible finite elements for numerical weather prediction

Abstract

I will describe our research on numerical methods for atmospheric dynamical cores based on compatible finite element methods. These methods extend the properties of the Arakawa C-grid to finite element methods by using compatible finite element spaces that respect the elementary identities of vector-calculus. These identities are crucial in demonstrating basic stability properties that are necessary to prevent the spurious numerical degradation of geophysical balances that would otherwise make numerical discretisations unusable for weather and climate prediction without the introduction of undesirable numerical dissipation. The extension to finite element methods allows these properties to be enjoyed on non-orthogonal grids, unstructured multiresolution grids, and with higher-order discretisations. In addition to these linear properties, for the shallow water equations, the compatible finite element structure can also be used to build numerical discretisations that respect conservation of energy, potential vorticity and enstrophy; I will survey these properties. We are currently developing a discretisation of the 3D compressible Euler equations based on this framework in the UK Dynamical Core project (nicknamed “Gung Ho”). The challenge is to design discretisations of the nonlinear operators that remain stable and accurate within the compatible finite element framework. I will survey our progress on this work to date and present some numerical results.

Notes

How to use different FEM discretisations for components and still maintain compatibility

  • FEM for weather prediction
    • Hard to build discretizations for weather prediction due to large separation of spatiotemporal scales
      • (acoustic, gravity and circulation/planetary wave scale separation)
      • Models never converge; noisy at the grid scale
      • conservation important on long timescales
    • With FD methods, loss of consistency is common (Thuburn & CJC SISC 2012; Thuburn, CJC and Dubos GMD 2014)
    • The idea of compatible discretisations is to create projections to go from, say, the Q1 space to RT0 and then to DG0; similar for triangles.
    • Publication: McRae, Andrew “Automated generation and symbolic manipulation of tensor product FE”
    • Challenge
      • Stable and accurate advection scheme in momentum and temperature spaces
      • nonlinear pressure gradient for compressible
      • bounded advection (limiters)
      • Use IMEX for temporal discretisations
    • Natale - compatible FE for geophysical fluid dynamics, 2016
    • Stabilization imposed with upwinding (DG) and with SUPG as needed
    • Use Firedrake for the software, with FIAT and FEniCS (see the compatible-spaces sketch after this list)
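Since Firedrake is mentioned, here is a minimal sketch of setting up a compatible RT0/DG0 pairing for a mixed Poisson problem, in the style of Firedrake's public demos (a sketch, not code from the talk; the solver options are my assumption and may need adjusting):

```python
# Compatible finite element pairing in Firedrake (assumed API per the
# public mixed-Poisson demo). The key property: div maps the RT space
# onto the DG space exactly, mimicking the vector-calculus identities.
from firedrake import *

mesh = UnitSquareMesh(16, 16)
V = FunctionSpace(mesh, "RT", 1)   # H(div) flux/velocity space (RT0)
Q = FunctionSpace(mesh, "DG", 0)   # discontinuous scalar space (DG0)
W = V * Q

sigma, u = TrialFunctions(W)
tau, v = TestFunctions(W)
x, y = SpatialCoordinate(mesh)
f = sin(pi * x) * sin(pi * y)

# Mixed weak form of -div(grad u) = f.
a = (dot(sigma, tau) + div(tau) * u + div(sigma) * v) * dx
L = -f * v * dx

w = Function(W)
solve(a == L, w, solver_parameters={"ksp_type": "preonly", "pc_type": "lu"})
```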

8. Freddy Bouchet, ENS de Lyon and CNRS

Large deviation theory applied to climate physics, a new frontier of theoretical and mathematical physics

Abstract

The main aim of this talk is to introduce and review some of the recent developments in the theoretical and mathematical aspects of the non-equilibrium statistical mechanics of turbulent geophysical flows and climate dynamics. This field, at the intersection between statistical mechanics, turbulence, and climate applications is a wonderful new playground for theoretical and mathematical physicists. Path integrals, instanton theory, semiclassical approximations, large deviation theory, and diffusion Monte-Carlo algorithms are at the core of our approach. We will consider two classes of applications in climate dynamics for which rare dynamical events may play a key role. A first class of problems are extreme events that have huge impacts, for instance extreme heat waves. We will apply large deviation algorithms to compute the probability of extreme heat waves. A second class of problems are rare trajectories that suddenly drive the complex dynamical system from one attractor to a completely different one, for instance abrupt climate changes. We will treat the example of the disappearance of one of Jupiter’s jets during the period 1939-1940, as a paradigmatic example of a drastic climate change related to internal variability. We will demonstrate that quasi geostrophic turbulent models show this kind of ultra rare transitions, where turbulent jets suddenly appear or disappear on time scales tens of thousands of times larger than the typical dynamical time scale. Those transitions will be studied using large deviation theory and non-equilibrium statistical mechanics.

Notes

  • Examples of turbulence in Jupiter’s profile - neat video from NASA
  • Good sampling of the dynamics is important to capture rare-event behavior

9. Marie Farge, ENS Paris

Production of dissipative vortices by solid boundaries in 2D flows: comparison between Prandtl, Navier-Stokes and Euler solutions

Abstract

Turbulent boundary layers are ubiquitous in geophysical fluid flows and we will study the Reynolds number dependence of the drag due to the interaction between topography and atmospheric flows, or between basins and oceanographic flows. For this we will revisit the problem posed by Euler in 1748, that led d’Alembert to formulate his paradox, and address the following problem: does energy dissipate when a boundary layer detaches from a solid boundary in the vanishing viscosity limit? To trigger detachment we consider a vortex dipole impinging onto a wall and we compare the numerical solutions of the Euler, Prandtl, and Navier-Stokes equations. We observe the formation of a boundary layer whose thickness scales as predicted by Prandtl’s 1904 theory. But after a certain time Prandtl’s solution becomes singular, while the Navier-Stokes solution collapses down to a much finer thickness. We then observe that the boundary layer rolls up into vortices which detach from the wall and dissipate a finite amount of energy, even in the vanishing viscosity limit, in accordance with Kato’s 1984 theorem.

Notes

Excellent survey of historical developments - very good talk (engaging)
  • Book: Darrigol, Worlds of Flow: A History of Hydrodynamics from the Bernoullis to Prandtl
  • Saint-Venant: suggested accounting for the friction of the fluid on bodies
  • Sreenivasan 1984, Phys. Fluids – strong turbulence has constant dissipation rate while weak turbulence has decaying energy dissipation rate
  • Inviscid limit = Euler equation + Prandtl viscous boundary-layer equation; boundary-layer thickness $\delta \propto Re^{-1/2}$
  • Kato 1984: Seminar on Nonlinear PDE
    • The NS solution converges to the Euler solution in $L_2$ if and only if the energy dissipation in a strip of width proportional to $1/Re$ vanishes (paraphrased below)
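For reference, a paraphrase of Kato's criterion as I understand it (notation mine): with $\Gamma_{c/Re}$ a boundary strip of width proportional to $1/Re$,

$$\lim_{Re \to \infty} \frac{1}{Re} \int_0^T \!\! \int_{\Gamma_{c/Re}} |\nabla u^{Re}|^2 \, dx \, dt = 0 \quad \Longleftrightarrow \quad u^{Re} \to u^{\mathrm{Euler}} \ \text{in } L^2.$$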
Multiscale properties of turbulence
  • Okamoto, Yoshimatsu, Schneider, 2007: homogeneous strong turbulence - parallel DNS code (on a Japanese supercomputer) with up to $4096^3$ resolution showing the dissipation rate remaining constant at $R_\lambda \approx 1200$.
  • Coherent structures can be filtered out with say, orthogonal wavelets
  • Several validations done with DNS to show the $Re^{-0.5}$ and $Re^{-1}$ scalings and the turbulent energy dissipation
  • Next, compare Euler–Prandtl and NS: basically the former blows up while NS remains stable and matches experimental evidence
  • Thing to remember: need adaptive refinement in the streamwise direction at the boundary layer, scaling as $1/Re$ near the wall, to capture turbulence effects
Open questions
  • Possibly a new asymptotic description beyond the breakdown of the Prandtl regime
  • d’Alembert’s paradox - interesting (read up)
  • Turbulence conference: publications, talks, videos and codes are available

10. Thomas Dubos, Ecole Polytechnique

Conservative adaptive wavelet simulation of geophysical flows

Abstract

Geophysical flows are characterized by a wide range of time and space scales, with the location of the smallest dynamically active scales changing incessantly. Thus, geophysical flows might be more efficiently simulated using a dynamically adaptive grid that tracks small-scale features. The presentation will focus on a wavelet-based approach towards this goal, where the flexibility offered by the design of second-generation wavelets is exploited in order to ensure across-scale consistency of mass and vorticity budgets and to preserve discrete conservation properties. The latter are especially desirable in view of long climate simulations.

When and where to dynamically refine and coarsen the mesh is a key determinant of efficiency. Often, crude, heuristic criteria based on gradients and tunable thresholds are used. Recent results suggest that a more systematic approach based on local estimators of numerical truncation errors is adequate for idealized simulations, including challenging setups such as statistically homogeneous turbulence.

However, an adequate refinement strategy for realistic simulations remains to be defined. Indeed such simulations are expected to remain under-resolved, so that subgrid-scale models (of turbulence, convection, gravity waves, …) play a crucial role. Their interplay with adaptivity, and even their dependence on resolution, is a non-trivial issue.

Joint work with Nicholas Kevlahan and Matthias Aechtner

Notes

Scalar and Vector fields
  • Consistent interpolation operators to go from coarse to fine and vice-versa for both fluxes and scalars
  • Use wavelet theory to compute the interpolation and restriction operators
  • The wavelet coefficients = difference between actual values and values interpolated from the coarse grid
  • Lazy restriction: injection; but need the same mass between meshes
    • Remedy with lifting: this involves modifying the nodes to preserve the area under the curve (see the lifting sketch after this list)
    • Publication: Schröder and Sweldens 1995, Sweldens 1996
    • Do an inverse wavelet transform (local ops) – extends the possibility of nonlinear filtering
How to drive AMR ?
  • Boils down to whether to keep/drop wavelet coefficients above/below a certain threshold
  • Changing coefficients adds filtering error on top of discretisation error since it may not be optimal anymore
  • We can get estimates of the discretization error (length-scale assumptions) and the thresholding error (lower and upper bounds with a 3/2 exponent)
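A minimal sketch of the predict/update lifting idea (the (2,2) scheme on a periodic signal; my own illustration, not the talk's spherical construction). The update step is exactly the mass fix that plain injection lacks: the coarse signal keeps the mean of the fine one.

```python
import numpy as np

def lifting_forward(u):
    """One level of the (2,2) lifting transform on a periodic signal.

    Predict: details = odds minus the average of neighboring evens.
    Update:  evens are corrected so the coarse signal keeps the same mean
    (the 'mass' fix that plain injection / lazy restriction lacks).
    """
    even, odd = u[0::2], u[1::2]
    detail = odd - 0.5 * (even + np.roll(even, -1))       # predict step
    coarse = even + 0.25 * (detail + np.roll(detail, 1))  # update step
    return coarse, detail

u = np.sin(2 * np.pi * np.arange(16) / 16.0)
coarse, detail = lifting_forward(u)
print(u.mean(), coarse.mean())   # means agree: conservation across scales
```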

11. Björn Engquist, University of Texas at Austin

Distinguished Lecture: The Heterogeneous Multiscale Method

Abstract

The framework of the heterogeneous multiscale method (HMM) will be introduced. This is a methodology for coupling numerical simulations of different scales. A macroscale model gets microscale data from detailed computations on smaller subsets of the full domain. We will analyze HMM convergence properties for homogenization problems and present applications, for example, to epitaxial growth and flow in porous media.

Notes

Challenges in multiscale modeling
  • Quantum mechanics $\to$ MD $\to$ continuum $\to$ macro scale
  • Maxwell: atomistic to galactic domains
  • Typically, two approaches are used:
    1. coupling of different models (multi model); high fidelity in small domains
    2. full resolution for all scales with reduced flops/unknowns (say multigrid)
  • Periodic oscillatory function given by a scaling law: $u_s(x) = u(x,x/\epsilon)$
  • The macro-scale model may fail if there are strongly local phenomena or if sampling of the micro-scale is needed throughout the domain
  • Multiscale differential or integral equation: $F_{\epsilon}(u_{\epsilon})=0$
    • $\lim_{\epsilon \to 0} u_{\epsilon} = \bar{u}, \quad F(\bar{u})=0$
  • If the scale is of order $\epsilon$ per dimension, we need on the order of $1/\epsilon$ points to represent it; high accuracy means taking $\frac{C}{\epsilon}$, where $C>1$. So the total cost is $O(\epsilon^{-d})$, where $d$ is the dimension of the problem.
  • flops $= O\big((N(\epsilon, \delta)/\epsilon)^{dr}\big)$, where $r$ is the exponent for the number of flops per unknown; Gaussian elimination: $r=3$
  • You can take optimal clustered sampling for stability
  • In AMR, we are doing a type A (local resolution) approach
Strategies
  • Start with a model and discretize it; where $\epsilon$ is too small to resolve in certain parts of the domain, discretize a homogenized or effective equation for computing averages or expected values.
  • Analytical modeling
    • Singular perturbations (type A)
    • Homogenization of elliptic operators (type B)
    • Averaging of dynamic systems (type B)
  • Homogenization theory for elliptic equation
    • Split into periodic and average solution components
    • Solve the homogenized problem and couple it back to the local periodic (sub-scale) process (see the 1-D homogenization check after this list)
  • ODEs
    • Type A: Stiff systems: slow/fast and DAE – say IMEX
    • Oscillatory systems Type B
  • Traditional numerical multiscale: multigrid, FMM, DD, wavelet compression, Krylov subspace, FFT – all of them reduce total flops
  • Example: FMM: particle-particle interaction is $O(N^2)$; simplify the interaction of domains that are far apart; say, the effect of the moon on tides (homogenized modeling)
  • For type A:
    • AMR, shock capturing, effective BC in HMM
  • For type B:
    • HMM, superparameterization, DNS-LES coupling (turbulence)
  • General framework: HMM
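A tiny numerical check of the elliptic homogenization point above: in 1-D, the effective coefficient for $-(a(x/\epsilon)\,u')' = f$ is the harmonic mean of $a$ over one period, not the arithmetic mean (my own illustration; the coefficient $a(y) = 2 + \sin(2\pi y)$ is an arbitrary choice):

```python
# 1-D elliptic homogenization: the effective (homogenized) coefficient is
# the harmonic mean of a over one period. Averaging a directly is wrong.
import numpy as np

y = np.linspace(0.0, 1.0, 100001)
a = 2.0 + np.sin(2.0 * np.pi * y)

arithmetic = np.trapz(a, y)            # naive average: 2.0
harmonic = 1.0 / np.trapz(1.0 / a, y)  # correct effective value: sqrt(3)
print(arithmetic, harmonic)
```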

12. Björn Engquist, University of Texas at Austin

Distinguished Lecture: Numerical Approximation of Oscillatory Dynamical Systems

Abstract

The numerical HMM for dynamical systems aims at approximating the analytically averaged equations of systems of ordinary differential or stochastic differential equations. One major difficulty is to extract slow components in the otherwise highly oscillatory solutions. Applications to astrophysics and molecular dynamics will be discussed.

Notes

The Heterogeneous Multiscale Method (HMM) - a framework for designing numerical coupling of models at different scales

  • Design a macro-scale scheme for the desired variables; use micro-scale numerical simulations locally in time and space to supply missing data
  • Schematic:
    1. Compression: macro$\to$micro: compute subscale physics
    2. Restriction forward and reverse projection (macro$\to$micro$\to$macro)
    3. Constraints: macro$\to$micro: BC specification
    4. Data estimation: reverse data migration (micro$\to$macro)
  • Ex: compressible flow coupled with MD to compute macro-scale fluxes
    • Basically replace the Riemann problem for flux computation with an MD simulation - run the simulation and measure the flux across the surfaces of interest; simple.
    • Similar to homogenization
  • Issues
    • data estimation
    • reconstruction (from macro to micro); can we close the system consistently ?
    • BC in micro-scale simulations
    • Sequential vs concurrent ?
      • Explore the parameter space, tabulate and evaluate later; typically called parameter passing.
      • Or do online generation as required (HMM is always concurrent)
  • HMM for type B
    • Always be aware of scale separation i.e., if micro-scale spreads across domain (near field), may affect assumptions at macro-scale
    • Look for substantial scale separation as this is a requirement for accurate subscale coupling
  • Question: Top down or bottom up in terms of scale fidelity ?
    • Both are possible depending on type of physics coupling
    • Latter (bottom-up) is typically not practiced often since this implies we are doing micro-scale solution everywhere.
  • Elliptic homogenization
    • numerical cell problem to generate missing macro-scale stiffness matrix
    • impose consistency through the flux $\nabla u$ across element edges
  • Stiff dynamical system
    • compute averages at the finer scale and feed them back to the macro scale (see the HMM sketch after this list)
  • Publications on homogenization theory: Abdulle & Schwab; E, Ming & Zhang
  • Remark: Sometimes flux estimation from micro-scale can provide improved accuracy to balance error buildup over long time integration
    • Can remove spurious oscillations that are otherwise seen with just macro-scale solves
    • Example problem: Bloch wave expansion (Santosa & Symes ’91)
  • HMM is a flexible framework that supports both type A and B coupling between physics models
  • Recommended reading: the survey of HMM by Weinan E and Engquist
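A sketch of the type-B HMM loop for a stiff system, as I understand the schematic above (the toy relaxation model, step counts, and step sizes are all my assumptions): at each macro step, run a short micro burst, time-average the force, and feed only the average back.

```python
# HMM-flavoured sketch for a slow variable x forced by a fast variable y:
#   dx/dt = g(x, y),   dy/dt = -(y - f(x)) / eps.
# Each macro step reconstructs a micro state, relaxes it in a short burst,
# time-averages the macro force, and discards the micro details.
import math

eps = 1e-4
f = lambda x: math.cos(x)     # assumed fast equilibrium
g = lambda x, y: -x + y       # assumed macro force

def micro_average_force(x, n_relax=200, n_avg=50, dt=0.2 * eps):
    y = 0.0
    for _ in range(n_relax):                 # relax y toward equilibrium
        y += dt * (f(x) - y) / eps
    acc = 0.0
    for _ in range(n_avg):                   # time-average the macro force
        y += dt * (f(x) - y) / eps
        acc += g(x, y)
    return acc / n_avg

x, dt_macro = 1.0, 0.05                      # macro step >> eps
for _ in range(40):
    x += dt_macro * micro_average_force(x)   # forward Euler on the averages
print(x)
```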

13. Qiang Du, Columbia University

Bridging scales through nonlocal modeling and simulations

Abstract

Nonlocal integral-differential equations and nonlocal balance laws have been proposed as effective continuum models in place of PDEs for a number of anomalous and singular processes. They may also be used to bridge multiscale models, since nonlocality is often a generic feature of model reduction. We discuss a few relevant modeling, computational and analysis issues, including robust simulation codes for validation and verification, effective gradient recovery in a nonlocal setting, and seamless coupling of local (PDEs) and nonlocal models. We demonstrate how new mathematical understanding of nonlocal models and asymptotically compatible discretization can help resolving these issues.

Notes

  • Phase field crystal theory
  • Nonlocal modeling via model reduction
    • Example: convert a volumetric treatment of a local PDE (say, Poisson) to a boundary integral, where the coupling is nonlocal
    • To remain local, use closure approximations with original or extended state variables
  • In the discrete sense, taking a Schur complement essentially produces nonlocal blocks (even if the original structure was blocked)
  • Discretization can produce nonlocality, e.g., SPH (Chen-Du, Tadmor, 1993)
    • Filtering, adding viscosity, regularizing solutions to eliminate the singularity
    • Or keep singularity but use integration instead of differentiation $\to$ yields nonlocal models
  • Classical continuum mechanics - crack propagation (type A, Engquist)
  • Peridynamics: pose the MD simulation as an integral operator
  • Nonlocal models for multiscale simulations (see the 1-D nonlocal diffusion sketch below)
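A minimal 1-D nonlocal diffusion sketch (my own illustration of the general idea, not Du's codes): a horizon-limited integral operator, $L_\delta u(x) = \int_{|y-x|<\delta} \gamma\,(u(y)-u(x))\,dy$, with the kernel scaled so that it recovers the Laplacian as the horizon $\delta \to 0$.

```python
# Discrete 1-D nonlocal diffusion with constant kernel gamma = 3/delta^3,
# chosen so that L_delta u -> u'' as delta -> 0 (Taylor expansion of the
# integrand gives gamma * u'' * delta^3 / 3). Parameters are illustrative.
import numpy as np

n, delta = 400, 0.05
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.sin(2 * np.pi * x)

def nonlocal_laplacian(u, x, delta):
    out = np.zeros_like(u)
    gamma = 3.0 / delta**3                  # kernel scaling for consistency
    for i in range(len(x)):
        mask = (np.abs(x - x[i]) < delta) & (np.arange(len(x)) != i)
        out[i] = gamma * np.sum(u[mask] - u[i]) * h
    return out

interior = slice(50, -50)                   # stay away from the boundary collar
exact = -(2 * np.pi) ** 2 * u
print(np.max(np.abs(nonlocal_laplacian(u, x, delta) - exact)[interior]))
```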

14. Björn Engquist, University of Texas at Austin

Distinguished Lecture: Numerical approximation of oscillatory dynamical systems

Abstract

It is computationally challenging to accurately represent the smallest scales over a domain that covers the largest scales in a multiscale problem. Classical cases are turbulence and the coupling of molecular and continuum formulations. We will see how information theory can be a guide to discretization strategies and then discuss a number of different techniques for numerical multiscale modeling.

Notes

Superparameterization and Epitaxial (crystal structure) growth

  • Multiscale dynamical systems: fast and slow modes depending on decay rates
  • Type A: local subscale process, Type B: globally oscillatory behavior
  • For type A transients, use stiffly stable temporal schemes (BDF, IRK)
  • For type B, split into smooth and oscillatory parts (molecular dynamics, astrophysics)
    • $u \to \hat{u}$ for invariant part and solve the oscillatory part explicitly
    • Exponential methods, Magnus techniques
  • Neat: Kapitza pendulum - oscillatory system that is difficult to solve
  • Major challenge: finding fast and slow modes
    • Unknown a-priori - so how to do HMM ?
    • Compression is a challenge: reconstruction of high dimensional micro-variables from macro-variables is not unique
      • Easier if physics-guided: say for mass, momentum, energy, maxwellian distribution, interpolation in homogenization
    • Use consistent re-initialization
      1. Compute averages of relevant moments and use them as constraints
        • Sort of a predictor-corrector for ODEs
      2. Find numerical or analytical approximations of a complete set of slow variables
        • Use the null space of the principal $1/\epsilon$ part of the system Jacobian
        • Amplitude of the local oscillators
        • Phase differences between oscillators
        • Interesting solutions possible to the hard Fermi-Pasta-Ulam problem
      3. Decompose $F$ into a fast (ergodic) part and a slow remainder
        • Sort of an application of IMEX-compatible splitting in some sense (hmm, or maybe not?)
        • FLAVORS (flow-averaging integrators; Tao, Owhadi, Marsden)
          • Staggered or fractional-step evolution
      4. Parareal-based iteration (see the sketch after this list)
        • DD in time coupled to an iterative update of all initial values with C, a coarse integrator
        • Tsai - HMM with local simulations covering the sub-intervals fully
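A minimal parareal sketch for a scalar linear ODE (my own illustration; the propagator choices are arbitrary), using the standard correction $U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k)$:

```python
# Parareal for du/dt = lam * u: coarse propagator G (one backward-Euler
# step per interval) and fine propagator F (many explicit substeps).
import numpy as np

lam, T, N = -1.0, 2.0, 20
dt = T / N

G = lambda u: u / (1.0 - lam * dt)           # one implicit Euler step
def F(u, m=20):                              # fine: m explicit substeps
    for _ in range(m):
        u = u * (1.0 + lam * dt / m)
    return u

U = np.empty(N + 1)
U[0] = 1.0
for n in range(N):                           # k = 0: serial coarse sweep
    U[n + 1] = G(U[n])
for k in range(5):                           # parareal iterations
    Fu = [F(U[n]) for n in range(N)]         # parallelizable across n
    Unew = np.empty(N + 1)
    Unew[0] = U[0]
    for n in range(N):                       # serial coarse correction
        Unew[n + 1] = G(Unew[n]) + Fu[n] - G(U[n])
    U = Unew
print(U[-1], np.exp(lam * T))                # converges toward the fine solution
```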

Remark (Vijay): Intuitively, HMM, with its slow/fast mode splitting, is similar to the multilevel/multigrid philosophy for tackling coupling across models/grids respectively. The averaged slow/coarse-scale process can be thought of as the coarse level in a multigrid solve, eliminating long-range effects, while converging on the fast/micro-scale process tackles the short-range effects.
