# deal.II.scale - deal.II implementation for multiscale PDE

16 Feb 2021

As supplementary material for this paper and this licentiate thesis, I developed a finite element implementation in deal.II. The code (together with installation instructions) is available on GitHub. This post contains a global overview of the code and its features; a more practical post on usage and implementation details is forthcoming.

This project (finite element analysis for multiscale PDE) is a collaborative effort involving Adrian Muntean, Omar Lakkis, Chandrasekhar Venkataraman, Martin Lind, and me. The code itself, for better or worse, is my responsibility.

## General introduction

This implementation serves as a framework to solve multiscale partial differential equations (PDE) in C++ using the deal.II library.
Multiscale, in this case, means two-scale, spatially scale-separated systems. We identify these two scales as the *macroscopic* scale (denoted \(\Omega\)) and the *microscopic* scale (denoted \(Y\)).

The macroscopic and microscopic scales have no size relation: we associate a microscopic domain with each point in the macroscopic domain. Here’s a rudimentary illustration of this relationship:

This being a finite element framework, we discretize the macroscopic domain, and in doing so limit the number of microscopic domains. Specifically, we tie the microscopic domains to the macroscopic degrees of freedom: assuming we approximate the macroscopic functions with local hat functions of some order, we instantiate a microscopic domain at each support point of the basis functions. As a result, we have one microscopic domain (and therefore one microscopic system of equations) for each degree of freedom of the macroscopic finite element formulation.

This allows us to discretize the microscopic systems without any further considerations. It would be convenient to use the same discretization across all the microscopic systems, but this is of course only possible if they share the same geometry. Since this is rarely the case in practice, one of the main challenges this code addresses is mapping different microscopic structures to the same domain.

## What kind of equations does this implementation solve?

Markdown (the markup language this post is written in) lends itself rather poorly to typesetting equations, so if you are interested in the class of PDE systems the implementation solves, I refer you to the papers linked at the top of this post. In short, the code solves elliptic-elliptic systems with bulk- and boundary-based source and sink terms, and arbitrary boundary conditions. It also allows for non-linear couplings between the macroscopic and microscopic systems in bulk and boundary terms by applying a Picard iteration scheme.

## What features does the code have?

### Convergence benchmarking

The code solves three different classes of systems, each described in the README.md of the repository.
The data for the PDE systems is supplied with a parameter file in the deal.II format.
The code supports convergence benchmarks using the *method of manufactured solutions*: you prescribe a desired solution for your PDE system, derive the data that necessarily leads to that solution, and then solve the PDE with your implementation. Comparing the numerical solution with the prescribed one on a sequence of finer meshes then indicates whether your finite element system is implemented correctly.

### Automated system data manufacturing

Since it can be quite tedious and time-consuming for larger systems to derive the data that corresponds to a prescribed solution, I included a script that automates the process. Using the symbolic algebra library SymPy, this script provides an interface that accepts prescribed solutions and performs the symbolic derivation to obtain the corresponding data. It then writes this data to a deal.II.scale-compliant parameter file.

### Parallel execution

The code employs a *multithreaded* setup for assembling and solving the macroscopic and microscopic linear systems. It builds on top of the built-in single-scale parallelisation tools of deal.II, which in turn are built on top of the Threading Building Blocks library.
Assembly is done in a cell-based manner: for each cell in the mesh, we loop over all microscopic structures and compute the local contributions to the system (stiffness) matrix and right-hand-side (load) vector.
For a discussion on the (dis)advantages of this approach, I refer once again to the paper.
Solving the linear systems is also done in parallel, where we take advantage of the fact that we have many linear systems that can be solved independently.

### Multiscale visualisation

deal.II has a number of visualisation output formats built in. We rely mostly on the VTK format, which we can view and manipulate rather easily in ParaView. If you try to plot the solutions to the microscopic systems, you quickly notice that you have an abundance of them, which makes it hard to visualise them in an orderly manner. To resolve this, I included a script that transforms and translates the microscopic solutions. The result is visualisations like the one below:

Note that these representations are not perfect: as I mentioned at the beginning of this post, there is no size relation between the macroscopic and microscopic systems, which means that the notion of *distance* between microscopic points is a bit ill-defined. These plots are best considered a collection of *zoom-ins* on the macroscopic domain.
Either way, this helps to get everyone/everything on the same page.

## Conclusion

I hope to have some time later on to describe how to use the framework. Although I doubt that others will require the exact same finite element formulation, perhaps parts of it can be reused. The code is documented almost everywhere, although I won’t guarantee that the design and implementation choices are optimal everywhere, from either a C++ or a deal.II point of view. From that perspective, any feedback, hints, questions, or discussion is very welcome!