In order to talk about multigrid acceleration it is necessary to first talk about iterative solvers in general. The venerable Gauss-Seidel method is a point solver, meaning that the algebraic equation is solved for the value at one point independently of the solution at any other point. This is the most basic form of iterative solver.
Other iterative techniques attempt to add some portion of a direct solver to the mix; line solvers are a popular example of this. In a line solver, an entire "line" of points is solved in a coupled fashion, while points off the line remain uncoupled. Line solvers are powerful, due to the point coupling, but are difficult to implement on unstructured grids.
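To illustrate what solving a line "coupled" means, here is a minimal sketch of the Thomas algorithm, the standard direct solve for the tridiagonal system a line solver forms along one grid line. The array layout and the small 3-point example are my own illustration, not taken from any particular solver.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a is the sub-diagonal (a[0] unused),
    b the main diagonal, c the super-diagonal (c[-1] unused), d the
    right-hand side. All unknowns on the line are solved together."""
    n = len(b)
    cp, dp = c[:], d[:]              # work on copies (forward elimination)
    cp[0] /= b[0]
    dp[0] /= b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        if i < n - 1:
            cp[i] /= m
        dp[i] = (dp[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n                    # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# A 3-point "line" of a 1D Laplace-type operator with unit right-hand side:
# every point on the line is updated simultaneously, unlike Gauss-Seidel.
print(thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0]))
```

The whole line costs one forward and one backward pass, which is why the coupling comes nearly for free on structured grids where the lines are easy to identify.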
The strength of the Gauss-Seidel method is that it is trivial to implement for all grid styles. However, because the method is completely explicit (it does not couple the solutions at neighboring points), it has only a very local effect on the solution.
The small footprint of the Gauss-Seidel solver allows it to quickly reduce local (high-frequency) errors in the solution, but causes it to perform poorly on global (low-frequency) errors in the solution.
As an example, we will use Gauss-Seidel to solve the elliptic Laplace equation in 1D. The Laplace equation drives the interior points to a smooth gradient determined by the boundary values. If we set the boundary values to zero and impose two different initial conditions on the interior, one low frequency and one high frequency, we can see how the Gauss-Seidel solver reduces the error.
Low Frequency Error (1 sine wave).
High Frequency Error (4 sine waves).
For this example I have assumed the problem is "solved" when the RMS residual drops below 1.0E-3. The high-frequency error problem was solved in 30 work units (1 work unit is defined as one Gauss-Seidel sweep over the domain), which is not too bad. However, the low-frequency error problem required 433 work units, more than 14 times the work required to solve the high-frequency problem.
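The experiment is easy to reproduce. Below is a minimal sketch; the grid size (50 points) and the exact residual definition are my assumptions, since the article does not give them, so the sweep counts will differ from the 30 and 433 quoted above, but the low-frequency case reliably takes several times longer.

```python
import math

def gauss_seidel_solve(u, tol=1.0e-3):
    """Sweep a 1D Laplace problem (fixed zero endpoints) with Gauss-Seidel
    until the RMS residual drops below tol; return the sweep count."""
    n = len(u)
    sweeps = 0
    while True:
        for i in range(1, n - 1):          # one Gauss-Seidel sweep (1 work unit)
            u[i] = 0.5 * (u[i - 1] + u[i + 1])
        sweeps += 1
        # RMS residual of the discrete Laplace equation at interior points
        res = [u[i - 1] - 2 * u[i] + u[i + 1] for i in range(1, n - 1)]
        rms = math.sqrt(sum(r * r for r in res) / len(res))
        if rms < tol:
            return sweeps

n = 50
high = [math.sin(4 * math.pi * i / (n - 1)) for i in range(n)]  # 4 sine waves
low  = [math.sin(math.pi * i / (n - 1)) for i in range(n)]      # 1 sine wave

# The low-frequency case takes many times more sweeps than the
# high-frequency case; the exact counts depend on the grid size.
print(gauss_seidel_solve(high), gauss_seidel_solve(low))
```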
This simple example is relevant to complex CFD solutions because the Navier-Stokes equations are usually arranged for solving such that an elliptic pressure equation is formed. Furthermore, subsonic pressure fields are almost always smooth, leading to error which, you guessed it, is low frequency. A physically based way to think about it is that it takes a Gauss-Seidel solver a very long time to propagate the influence of a pressure-specified outlet boundary condition all the way across the grid to a velocity-specified inlet condition.
Enter the multigrid method, which takes its name from the fact that it uses levels of grid coarsening to speed up the solution. If we look back at the low-frequency error image above and envision removing all but every eighth grid point, the low-frequency error is transformed into a high-frequency error, as illustrated in the image below.
Low Frequency Error (blue) on the Fine Grid Converted to High Frequency Error (green) on the Coarse Grid.
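The transformation can be seen numerically as well. In the sketch below (a 65-point grid is my assumption), keeping every eighth point leaves the sine wave unchanged in physical space, but measured in radians per grid interval, the quantity that matters to a point solver, its frequency is eight times higher on the coarse grid.

```python
import math

n = 65
fine = [math.sin(math.pi * i / (n - 1)) for i in range(n)]  # 1 sine wave

coarse = fine[::8]   # keep only every eighth point (9 points remain)

# Frequency is best measured per grid interval: the same sine wave spans
# eight times fewer intervals on the coarse grid, so its normalized
# wavenumber (radians per interval) is eight times larger there.
theta_fine = math.pi / (n - 1)
theta_coarse = math.pi / (len(coarse) - 1)
print(theta_coarse / theta_fine)   # → 8.0
```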
As we saw previously, the Gauss-Seidel solver can quickly reduce high-frequency error. The interesting thing to note is that as we solve the algebraic equations on the coarse grid and then impose the new values back on the fine grid, we introduce high-frequency error into the fine grid, as seen in the image below.
Effect of a coarse grid update on the fine grid.
The above image was created by coarsening the fine grid by factors of 2 and 4 and then reversing the process. One Gauss-Seidel sweep was performed between each step. As the image shows, the multigrid method transforms the low-frequency error into a high-frequency error at all grid levels. This results in a tremendous speed-up for the solver because the Gauss-Seidel method works so well on high-frequency error.
To evaluate the work performed using the multigrid method, we need to realize that a grid coarsened by a factor of 4 requires one-fourth the amount of work to sweep. In this case, the multigrid method required 144.6 work units to solve the problem, a speed-up of nearly 300%.
Another huge benefit of the multigrid method is that solution time is of order N log(N), where N is the number of points. Gauss-Seidel solution time, on the other hand, is of order N^2.
As an example, the above problem with 100 and 200 points required 5080 and 20534 work units, respectively, to solve with Gauss-Seidel, but only 1403.6 and 2916.1 work units, respectively, to solve using the multigrid method.
There are a number of additional issues which affect the performance of multigrid acceleration when applied to CFD. These include the manner in which the grid is coarsened and the amount of coarsening work done per iteration, both of which are large issues and are implementation specific.
Some solvers utilize a simple geometric coarsening such as the one we used in our example. Geometric coarsening is the simple agglomeration of neighboring points regardless of any other consideration. Geometric coarsening is easy to perform on a structured grid, but it is more difficult to implement on an unstructured grid. Furthermore, an even larger solver speed-up can be attained by coarsening the grid in the "right" direction, that is, the direction of the low-frequency error. A number of solvers have proprietary schemes for coarsening the grid which use a variety of factors for determining the coarsening direction, such as the linear coefficients and the flow-field values. This process is termed "adaptive" multigrid.
In this example, I used a simple "V" coarsening scheme, named for the fact that the grid is coarsened to the lowest level once per iteration. That is, I coarsened the grid by one level, performed a Gauss-Seidel sweep, coarsened the coarse grid, swept again, and so on down to the lowest level. Then I imposed the coarsest-grid values on the next level up, performed a Gauss-Seidel sweep, and so on until I was back on the fine grid. The number of Gauss-Seidel sweeps performed at each step can be varied ad infinitum and is a trade-off between solver work and error reduction. Some implementations use a "W" scheme, which coarsens the grid to its lowest level twice during each iteration.
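A "V" iteration of this kind can be sketched for the 1D Laplace model problem as follows. This is a minimal illustration under my own assumptions (a 65-point grid, one Gauss-Seidel sweep per level on the way down and on the way up, and textbook-style restriction and interpolation operators), not the article's exact implementation.

```python
import math

def sweep(u, f, h, nu=1):
    # nu Gauss-Seidel sweeps on -u'' = f with spacing h and fixed endpoints
    for _ in range(nu):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])

def v_cycle(u, f, h):
    """One "V" iteration: sweep while coarsening down to the lowest level,
    then impose the corrections and sweep on the way back up."""
    sweep(u, f, h)                                   # sweep on the way down
    if len(u) > 3:
        # residual of -u'' = f at the interior points
        r = [0.0] * len(u)
        for i in range(1, len(u) - 1):
            r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
        # restrict the residual to every other point (full weighting)
        rc = [0.0] * ((len(u) + 1) // 2)
        for j in range(1, len(rc) - 1):
            rc[j] = 0.25 * r[2 * j - 1] + 0.5 * r[2 * j] + 0.25 * r[2 * j + 1]
        ec = [0.0] * len(rc)
        v_cycle(ec, rc, 2 * h)                       # coarse-grid error equation
        for i in range(len(ec)):                     # impose the correction:
            u[2 * i] += ec[i]                        # copy at coincident points,
        for i in range(1, len(u) - 1, 2):
            u[i] += 0.5 * (ec[i // 2] + ec[i // 2 + 1])  # interpolate between
        sweep(u, f, h)                               # sweep on the way back up
    return u

# Drive a low-frequency initial error toward zero on a 65-point grid
# (f = 0, so the exact solution is zero and u itself is the error).
n = 65
u = [math.sin(math.pi * i / (n - 1)) for i in range(n)]
f = [0.0] * n
for _ in range(5):
    v_cycle(u, f, 1.0 / (n - 1))
print(max(abs(v) for v in u))    # far smaller after a handful of V-cycles
```

A "W" scheme would simply recurse into the coarse-grid solve twice instead of once.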
An important point to consider is that because the Navier-Stokes equations are nonlinear, it is necessary to linearize the equations many times over to obtain a CFD solution.
Therefore, the best methods do not spend too much time solving the algebraic equations for a single iteration. Considering that up to 50% of a typical CFD solution is spent linearizing the equations, as opposed to solving the resulting algebraic equations, optimizing the amount of work done per iteration is a very important component of an efficient CFD solver.
You might be wondering how the algebraic equations are formed on the coarse grid so that we can solve them using Gauss-Seidel and impose the values back on the fine grid. Performing a linearization of the Navier-Stokes equations on each coarse grid level would require a great deal of effort and be expensive from a memory standpoint. Instead, we can take advantage of the fact that we have already linearized the equations on the fine grid and recognize that the coarse grid is just an agglomeration of the fine-grid elements. Using this, we can calculate the coarse-grid linear coefficients from the fine-grid coefficients very quickly.
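The agglomeration idea can be sketched as follows: the coarse coefficient linking two groups of agglomerated points is just the sum of the fine-grid coefficients linking their members. The dense-matrix storage and the fixed grouping below are simplifications of my own (a real solver works with sparse coefficients and builds the groups from the grid), in the spirit of additive-correction multigrid.

```python
def agglomerate_coefficients(A, groups):
    """Build coarse-grid linear coefficients by agglomeration: sum the
    fine-grid coefficients A[i][j] over the members of each pair of groups."""
    owner = {}                         # fine point -> coarse group index
    for g, members in enumerate(groups):
        for p in members:
            owner[p] = g
    nc = len(groups)
    Ac = [[0.0] * nc for _ in range(nc)]
    for i, row in enumerate(A):
        for j, a in enumerate(row):
            Ac[owner[i]][owner[j]] += a
    return Ac

# Fine-grid coefficients of a 4-point 1D Laplace operator...
A = [[ 2.0, -1.0,  0.0,  0.0],
     [-1.0,  2.0, -1.0,  0.0],
     [ 0.0, -1.0,  2.0, -1.0],
     [ 0.0,  0.0, -1.0,  2.0]]
# ...agglomerated pairwise into a 2-point coarse grid.
Ac = agglomerate_coefficients(A, [[0, 1], [2, 3]])
print(Ac)   # → [[2.0, -1.0], [-1.0, 2.0]]
```

No rediscretization is needed: each coarse coefficient is assembled in a single pass over the fine-grid coefficients we already have.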
It should be noted that there is some research being done in which the Navier-Stokes equations are linearized on each coarse grid level. Despite the substantial time and memory penalty, the effect is to speed up the nonlinear convergence of the CFD solution. This has proven useful for certain highly nonlinear CFD solutions which would otherwise be too time consuming to perform.
In this article I have described the basic multigrid acceleration method. We have seen that the multigrid method converts low-frequency error on the fine grid into high-frequency error on a coarse grid. Because Gauss-Seidel solvers are efficient at reducing high-frequency error, the overall convergence rate is greatly increased. It is due to the high performance of the multigrid method that we are able to perform large, realistic engineering simulations today.
Nick Wyman is a CFD professional with more than 7 years of experience in commercial CFD application. He currently works as a software developer and engineering analyst for Viable Computing.
