Dealing with Convergence Issues
Stefano Tronci
In the Home Studio - Part 1 episode we faced convergence issues when dealing with the highest driving frequency for our room ($400$ $\text{Hz}$). This meant that we could not quite trust the solution, and so we discarded it. In this episode we will look at what to do in these cases, and how to reach convergence. Rather than dealing with the issues in abstract and general terms (which would require writing an entire book) we will use the Home Studio - Part 1 episode to introduce the problem and figure out how to deal with it practically.
Project Files
All the files used for this episode are available at the repositories below:
The Problem of Convergence
In the Home Studio - Part 1 episode we set up our study as a single physics problem governed by a single PDE. As discussed in the Elmer Model and Solver Parameters episode, we are then faced with two convergence issues:
- The convergence of the FEM solution to the real PDE solution.
- The convergence of the iterative linear solver algorithm.
Issue 1 is mainly dealt with through mesh size and order. The higher the order and/or the smaller the size, the better the FEM solution agrees with the real solution of the governing PDE. Since the mesh size affects accuracy, we made use of this rule to set it:
$$s<\frac{\lambda}{10}=\frac{c_{0}}{10f}$$
By using this rule, as we verified in The Pulsating Sphere episode, the accuracy of our selected solver will be very high in most conditions. Be aware, though, that certain geometries might need additional local refinements. We might see when we need to go beyond this rule in a future episode.
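The rule above is easy to turn into a few lines of code. Below is a minimal sketch (not part of the original study files) that computes the maximum element size for a given driving frequency, assuming a speed of sound of $343$ $\text{m/s}$:

```python
def max_mesh_size(frequency_hz, c0=343.0, elements_per_wavelength=10):
    """Upper bound on the mesh element size: s < c0 / (10 f).

    c0 is an assumed speed of sound in air; adjust it for your medium.
    """
    wavelength = c0 / frequency_hz
    return wavelength / elements_per_wavelength

# For the highest driving frequency of the Home Studio study (400 Hz)
# the elements should be smaller than roughly 8.6 cm.
s_max = max_mesh_size(400.0)
```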
That leaves us with issue 2, which stems from the fact that we selected an iterative algorithm to solve the matrix equations that FEM generates. This is because we have many nodes, making a direct solution unfeasible. Iterative algorithms attempt to solve our system in many subsequent steps, stopping when the solution computed after a step has not changed much with respect to the one from the previous step. To check for this, a tolerance is set in Elmer. Let’s delve deeper into the problem.
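To make the stopping criterion concrete, here is a minimal sketch of an iterative linear solver. It uses a simple Jacobi iteration rather than Elmer's BiCGStabl, but the logic is the same: iterate, measure how far the current solution is from satisfying the system via a residual norm, and stop once that norm drops below a tolerance (or give up after a maximum number of iterations):

```python
import numpy as np

def jacobi_solve(A, b, tol=1e-10, max_iterations=2000):
    """Minimal Jacobi iteration: stop when the relative residual
    ||b - A x|| / ||b|| drops below the tolerance."""
    x = np.zeros_like(b)
    D = np.diag(A)          # diagonal entries of A
    R = A - np.diagflat(D)  # off-diagonal part of A
    for iteration in range(1, max_iterations + 1):
        x = (b - R @ x) / D
        residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
        if residual < tol:
            return x, iteration  # converged
    raise RuntimeError("linear system failed to converge")

# Small diagonally dominant test system: Jacobi converges in a few steps.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = jacobi_solve(A, b)
```

Real FEM solvers like BiCGStabl are far more sophisticated, but the tolerance and maximum-iteration parameters play exactly this role.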
The Linear System Convergence
Let’s first recall our Elmer linear system settings. The quickest way is by inspecting the sif file (Solver Input File) that ElmerGUI generated for us. We can read all the Solver parameters here:
Solver 1
Equation = Helmholtz Equation
Procedure = "HelmholtzSolve" "HelmholtzSolver"
Variable = -dofs 2 Pressure Wave
Exec Solver = Always
Stabilize = True
Bubbles = False
Lumped Mass Matrix = False
Optimize Bandwidth = True
Steady State Convergence Tolerance = 1.0e-5
Nonlinear System Convergence Tolerance = 1.0e-7
Nonlinear System Max Iterations = 1
Nonlinear System Newton After Iterations = 3
Nonlinear System Newton After Tolerance = 1.0e-3
Nonlinear System Relaxation Factor = 1
Linear System Solver = Iterative
Linear System Iterative Method = BiCGStabl
Linear System Max Iterations = 2000
Linear System Convergence Tolerance = 1.0e-10
BiCGstabl polynomial degree = 2
Linear System Preconditioning = ILUT
Linear System ILUT Tolerance = 1.0e-3
Linear System Abort Not Converged = True
Linear System Residual Output = 10
Linear System Precondition Recompute = 1
End
All the Nonlinear settings do not apply to us, as the problem is linear (which is why we set Nonlinear System Max Iterations = 1).
We can see that we chose a BiCGStabl iterative method with ILUT preconditioning. Our linear system convergence tolerance was $10^{-10}$. We will come back to these parameters, but for the time being let’s visualise how our solver fared at the various frequencies of our analysis.
Note that, since Elmer logs the value of the convergence norm only every $10$ iterations, the final iteration values at and below the convergence tolerance might be missing from the plot. The only solution that failed to converge was the one at $400$ $\text{Hz}$.
From the plot above we can see that convergence is very easy to reach at low frequencies. However, already at around $100$ $\text{Hz}$ we find a sharp transition: while frequencies lower than $100$ $\text{Hz}$ needed very few iterations to reach convergence (fewer than $30$), at $125$ $\text{Hz}$ we already need more than $100$. Interestingly, convergence is very slow up until a certain number of iterations, after which it becomes much faster. Our problem is that the higher the frequency, the more iterations we need to reach good convergence. Clearly, $2000$ maximum iterations were not enough for the $400$ $\text{Hz}$ linear system iterations to settle on a stable solution field.
Also note that convergence is not smooth. How smooth the convergence is depends on the selection of iterative algorithm and preconditioner. Non-smooth convergence is normally not a problem.
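If you want to compare convergence histories yourself, a small script can pull the (iteration, norm) pairs out of a saved Elmer log. The sketch below is hypothetical: it assumes each residual line carries just an integer iteration count followed by a norm in exponential notation, but the exact format can differ between Elmer versions and solvers, so adapt the pattern to your own logs:

```python
import re

# Assumed residual line format: "   <iteration> <norm in E-notation>".
LINE_PATTERN = re.compile(r"^\s*(\d+)\s+([0-9.]+E[+-]\d+)\s*$", re.IGNORECASE)

def parse_convergence(log_lines):
    """Return a list of (iteration, convergence norm) pairs."""
    history = []
    for line in log_lines:
        match = LINE_PATTERN.match(line)
        if match:
            history.append((int(match.group(1)), float(match.group(2))))
    return history

# Fabricated example lines, just to exercise the parser.
sample = ["   10 0.4521E-02", "   20 0.9837E-06", "some other log line"]
history = parse_convergence(sample)
```

Once extracted, the histories for different runs can be plotted together to compare solver and preconditioner choices.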
In general, you should expect that the higher the frequency, the harder it is to reach convergence. Why does that happen, and what can we do to get convergence at high frequency?
Frequency and Convergence
Frequency by itself does not cause convergence issues. Rather, the problem is that normally high frequency is associated with a more complex field solution. A more complex field is what makes convergence harder to achieve. Let’s go through this step by step.
Problem Size and Wavelength
What constitutes high frequency for one system might not be high frequency for another. Modelling small cavities (for example, an ear canal) up to a few $\text{kHz}$ does not pose very significant convergence challenges, just like modelling our room up to a few hundred $\text{Hz}$ didn’t. The difference between these two systems is size. In an ear canal, we need to reach a high frequency before we pack enough wavelengths along the canal length, while this happens at a much lower frequency in a bigger system.
The equation for the Acoustic Modes of a Rectangular Room can help visualise this. Let’s first recall it.
$$S_{n_{x},n_{y},n_{z}}\left(x,y,z\right)=\cos\left(\frac{n_{x}\pi}{L_{x}}x\right)\cos\left(\frac{n_{y}\pi}{L_{y}}y\right)\cos\left(\frac{n_{z}\pi}{L_{z}}z\right)$$
To which we have the associated modal frequencies:
$$f_{n_{x},n_{y},n_{z}}=\frac{c}{2}\sqrt{\left(\frac{n_{x}}{L_{x}}\right)^{2}+\left(\frac{n_{y}}{L_{y}}\right)^{2}+\left(\frac{n_{z}}{L_{z}}\right)^{2}}$$
Let’s now make things simple and pretend we live in a 1D universe (or, equivalently, let’s just set $y=z=0$ and concern ourselves only with what happens along $x$). Our equations become:
$$S_{n}\left(x\right)=\cos\left(\frac{n\pi}{L}x\right)$$
$$f_{n}=\frac{cn}{2L}$$
Where now we have only one dimension $x$, hence only one “room” size $L$, and we count the modes with a single integer $n$. Let’s also calculate the modal wavelengths:
$$\lambda_{n}=\frac{c}{f_{n}}=\frac{2L}{n}$$
Now we can clearly see that the higher $n$ the higher $f_{n}$ and the smaller $\lambda_{n}$. Let’s see what this means for $S_{n}\left(x\right)$:
The plot above represents two modal shapes, one for $n=1$ and one for $n=10$, where $L$ was $4.35$ $\text{m}$ as per the Home Studio models. We have:
$$f_{1}=39.43 \space \text{Hz}, \space f_{10}=394.25 \space \text{Hz}$$
$$\lambda_{1}=8.7 \space \text{m}, \space \lambda_{10}=0.87 \space \text{m}$$
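These numbers follow directly from the 1D formulas. A quick sketch to reproduce them, assuming a speed of sound $c = 343$ $\text{m/s}$:

```python
# 1D rigid-walled room modes: f_n = c n / (2 L), lambda_n = 2 L / n.
# L = 4.35 m as in the Home Studio models; c = 343 m/s assumed.

def modal_frequency(n, L=4.35, c=343.0):
    """Frequency of the n-th 1D axial mode, in Hz."""
    return c * n / (2.0 * L)

def modal_wavelength(n, L=4.35):
    """Wavelength of the n-th 1D axial mode, in metres."""
    return 2.0 * L / n

f1, f10 = modal_frequency(1), modal_frequency(10)        # ~39.43 Hz, ~394.25 Hz
lam1, lam10 = modal_wavelength(1), modal_wavelength(10)  # 8.7 m, 0.87 m
```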
As is clear from the equations above, the modal wavelength is a submultiple of twice the system length (for this monodimensional system; for multidimensional systems we extend these considerations to each dimension). It should then not come as a surprise that our modal shape $S_{n}$ contains as many spatial periods (wavelengths) throughout the length $L$ as half the mode number $n$. Or, in other words, as evident from the equations and plots above:
$$\frac{L}{\lambda_{n}}=\frac{n}{2}$$
That is, the length $L$ contains $\frac{n}{2}$ wavelengths $\lambda_{n}$. The number of wavelengths contained in the length $L$ is the source of the convergence issues, and the higher $n$ (and hence the frequency), the more wavelengths the length $L$ contains. Note how we can reduce the value of the ratio $\frac{L}{\lambda_{n}}$ by reducing $L$. This is why smaller systems incur convergence problems only at higher frequencies. Since $\frac{L}{\lambda_{n}}$ is the important quantity, let’s give it a name:
$$\xi_{n} = \frac{L}{\lambda_{n}}=\frac{n}{2}$$
But why exactly does a higher value of $\xi_{n}$ cause more convergence issues? We can visualise it with a simple experiment.
Wavelength and Error
The iterative algorithms for the solution of linear systems keep iterating until the solution they compute stops varying according to a certain criterion. So, let’s think of a varying solution for an eigenvalue problem. Maybe, after a certain iteration, the iterative algorithm supplied this solution instead of the correct linear system solution:
$$S_{n}\left(x; \space A_{i},\phi_{i}\right)=\left(1+A_{i}\right)\cos\left(\frac{n\pi}{L}x + \phi_{i}\right)$$
Where $A_{i}$ and $\phi_{i}$ are small errors on the solution introduced by the iterations, which we count with the integer $i$. Note that this is not an accurate description of the sources of error that affect the numerical algorithms at the base of FEM. That is a vast topic about which entire books could be written. Here we are just introducing some error as a way of conceptually modelling the impact of having a varying solution, iteration after iteration, and how the ratio $\xi_{n}$ affects it. Clearly, errors in real FEM solutions are more complicated than this. Still, this is a good way to conceptualise what errors do at different values of $\xi_{n}$.
Let’s take our solutions above for $n=1$ and $n=10$ and let’s “distort” both of them with matched values of $A_{i}$ and $\phi_{i}$:
In the plot above, we distorted the previous plot’s curves with a small random Gaussian amplitude error $A_{i}$ ($10^{-2}$ standard deviation) and a phase error $\phi_{i}$ that, in both cases, is equivalent to circularly shifting the waves on the plot by $3$ $\text{cm}$, plus a random Gaussian error with a $1$ $\text{cm}$ standard deviation. You can think of the phase error as an error in the location of the extrema of the waves.
By inspecting the plots it is evident that the blue lines (for $n=1$) are much more similar to each other than the orange lines (for $n=10$). This tells us that the very same errors end up producing vastly different results depending on the wavelength.
We can take the very same errors we used above, apply them to the first $50$ modal shapes, and plot the error norm as a function of $\xi_{n}$. By error norm we mean:
$$E_{n} = \sqrt{\int_{0}^{L} \left|S_{n}\left(x\right) - S_{n}\left(x; \space A_{i},\phi_{i}\right)\right|^{2} dx}$$
In the plot above we removed the variable $n$ and plotted the quantities as continuous. This is a way to generalise the analysis beyond the scope of modal shapes and eigenproblems.
We can see that, even though we applied the same error to all waves, the mere fact of having a higher $\xi$ ratio ends up producing a higher error in the final result. Hence: the more wavelengths per characteristic system length, the higher the chance of significant errors in numerical algorithms.
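The experiment is easy to reproduce numerically. The sketch below simplifies it further, using a fixed amplitude factor and a fixed $3$ $\text{cm}$ shift (no random component, so the trend is deterministic), and approximates the error norm integral with a Riemann sum:

```python
import numpy as np

L = 4.35          # "room" length in metres, as in the Home Studio model
SHIFT = 0.03      # phase error expressed as a 3 cm spatial shift
AMPLITUDE = 1.01  # small fixed amplitude error on the distorted solution

def error_norm(n, samples=2000):
    """Discrete approximation of E_n for the n-th 1D modal shape."""
    x = np.linspace(0.0, L, samples)
    dx = x[1] - x[0]
    exact = np.cos(n * np.pi * x / L)
    distorted = AMPLITUDE * np.cos(n * np.pi * (x + SHIFT) / L)
    return np.sqrt(np.sum((exact - distorted) ** 2) * dx)

# The same per-sample errors produce ever larger error norms as xi grows.
errors = [error_norm(n) for n in (1, 10, 50)]
```

Plotting `error_norm` against $\xi_{n} = n/2$ reproduces the growing trend discussed above.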
Frequency and Convergence: Conclusion
It is important to underline again that the simple analysis above is not representative of the real errors stemming from numerical iterative solvers. In particular, FEM will not in general make errors that can be described as simple overall amplitude and phase shifts. Still, this simple analysis shows us two important things:
- The higher the ratio $\xi=\frac{L}{\lambda}$, the more complex the field will be. This is because you should expect more field features (like extrema) to be packed within the same characteristic length $L$.
- The more complex the field, the higher the chance for the error norm to sum up to large values, even if the original quantities are affected by small errors. You can think of it in terms of probability: it is much easier to get a high number of field features wrong when there are many features.
By characteristic length we mean any length that is important for your system. For example, if you were modelling a tube, you might be interested in its length. There might be more than one characteristic length. In a room, all lengths are normally of the same order of magnitude: they are all roughly equally important.
With reference to our Home Studio model, we saw a transition in convergence behaviour at around $100$ $\text{Hz}$. Along the various directions, this amounts to these values of $\xi$:
$$\xi_{x} = \frac{L_{x}}{\lambda} \approx 1.47$$ $$\xi_{y} = \frac{L_{y}}{\lambda} \approx 0.81$$ $$\xi_{z} = \frac{L_{z}}{\lambda} \approx 0.66$$
This suggests that we should expect to start encountering convergence issues as $\xi$ reaches values between $0.5$ and $1.5$. From here you can figure out up to which frequency you should have fast and easy convergence for your particular problem.
But be careful: these thresholds will be dependent upon mesh size, order, linear solver algorithm and preconditioning. Take them just as a rule of thumb.
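Under the same caveat, you can turn the $\xi$ rule of thumb into a quick estimate of where convergence should start getting harder for a given characteristic length. A minimal sketch, assuming $c = 343$ $\text{m/s}$:

```python
def tricky_frequency_range(L, c=343.0, xi_low=0.5, xi_high=1.5):
    """Frequencies at which xi = L / lambda crosses the given thresholds."""
    return xi_low * c / L, xi_high * c / L

# For a 4.35 m characteristic length, expect convergence to start
# degrading somewhere between roughly 39 Hz and 118 Hz.
f_low, f_high = tricky_frequency_range(4.35)
```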
As a final remark, the way we modelled the error might wrongly lead you to believe that at certain frequencies things could fall back into check, as happens with a comb filter, whose behaviour is periodic (in fact, that error formulation is a sort of spatial comb filter). That’s not the case: real numerical solutions don’t behave that way, as we discussed above. In general, expect convergence to always become more problematic the higher $\xi$ is.
Dealing with it
Now that we have some intuition about why convergence is harder at higher frequencies, how do we make convergence happen? Can we get that $400$ $\text{Hz}$ solution?
The answer is yes, we can. The bad news is that the way to do it is normally problem dependent, and pretty hard to guess a priori. What follows is a list of recommended steps to improve convergence.
Review your Mesh
As we underlined in the FEM in a Nutshell episode, meshing is the most fundamental step in FEM. Make sure your mesh is conformal and as simple as possible. It can also be worth reducing the order, as accuracy is mainly controlled by size (although a higher order does improve it).
If you solved the studies associated with the Home Studio - Part 1 episode you might have seen errors like these in the Elmer log window:
ERROR:: ElementMetric: Degenerate 3D element: -1
ElementMetric: Body Id: 0 DetG: 0.000E+00
ElementMetric: Node: 1 Coord: 0.000E+00 0.000E+00 0.000E+00
ElementMetric: Node: 2 Coord: 1.000E+00 0.000E+00 0.000E+00
ElementMetric: Node: 3 Coord: 0.000E+00 1.000E+00 0.000E+00
ElementMetric: Node: 4 Coord: 0.000E+00 0.000E+00 1.000E+00
ElementMetric: Node: 5 Coord: 5.000E-01 0.000E+00 0.000E+00
ElementMetric: Node: 6 Coord: 5.000E-01 5.000E-01 0.000E+00
ElementMetric: Node: 7 Coord: 0.000E+00 5.000E-01 0.000E+00
ElementMetric: Node: 8 Coord: 0.000E+00 0.000E+00 5.000E-01
ElementMetric: Node: 9 Coord: 5.000E-01 0.000E+00 5.000E-01
ElementMetric: Node: 10 Coord: 0.000E+00 5.000E-01 5.000E-01
ElementMetric: Node: 2 dCoord: 1.000E+00 0.000E+00 0.000E+00
ElementMetric: Node: 3 dCoord: 0.000E+00 1.000E+00 0.000E+00
ElementMetric: Node: 4 dCoord: 0.000E+00 0.000E+00 1.000E+00
ElementMetric: Node: 5 dCoord: 5.000E-01 0.000E+00 0.000E+00
ElementMetric: Node: 6 dCoord: 5.000E-01 5.000E-01 0.000E+00
ElementMetric: Node: 7 dCoord: 0.000E+00 5.000E-01 0.000E+00
ElementMetric: Node: 8 dCoord: 0.000E+00 0.000E+00 5.000E-01
ElementMetric: Node: 9 dCoord: 5.000E-01 0.000E+00 5.000E-01
ElementMetric: Node: 10 dCoord: 0.000E+00 5.000E-01 5.000E-01
Whilst we still get a solution in this case, it is best practice to prevent errors and warnings as much as possible. These can result in the FEM matrices containing some very small or very big values, which in turn can hurt convergence (as well as accuracy).
In the Home Studio - Part 1 episode we discussed the method to create conformal meshes with more than one body, mainly to introduce the topic. Most likely, those errors come from the resulting multibody mesh. However, we do not actually need more than one body here, so the simplest thing is to remove the second body (the sphere source) and replace it with a cutout (as done in The Pulsating Sphere and the Rigid Walled Room episodes).
We then kept the mesh size unaltered but reduced the order to $1$ (linear elements).
You can review the files in the repository to see these changes. Refer to the linked episodes for detail on how to create geometries and meshes as described.
Review your Iterative Solver and Preconditioning
The second step is to review the linear solver settings in Elmer (see the Elmer Model and Solver Parameters episode). A good way to start is by selecting the following parameters:
| Parameter | Value |
|---|---|
| Linear System Iterative Method | BiCGStabl |
| Linear System Max Iterations | $10000$ |
| Linear System Convergence Tolerance | $10^{-10}$ |
| Linear System Preconditioning | ILU0 |
You can then attempt solving, watch convergence in the logs, and take note of the performance (or, even better, save the Elmer logs and compare the various convergence histories in plots). If the convergence norm does not seem to improve below a certain value, you can abort and try another preconditioner. The more complex the preconditioner, the more time it will take to compute a single iteration. The parameters in the table above are also shown in the picture below:

Figure 1
Good Initial Linear Solver Parameters
The parameters above ended up working nicely for the $400$ $\text{Hz}$ simulation of the Home Studio model.
If you find it hard to reach convergence after many different preconditioner trials, you might need to search for a better iterative method. You should try BiCGStab first and then the others. Switching to another iterative method means that you will start putting preconditioners under trial all over again. Needless to say, this can keep you busy for a while.
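When comparing trials, keep in mind that total solve time is what matters, not iteration count alone. The toy numbers below are made up purely for illustration:

```python
def total_solve_time(iterations, seconds_per_iteration):
    """Total cost of an iterative solve: iteration count times per-iteration cost."""
    return iterations * seconds_per_iteration

# A cheap preconditioner (e.g. ILU0) may need many more iterations than an
# expensive one (e.g. ILUT) and still win on wall-clock time.
cheap = total_solve_time(iterations=6600, seconds_per_iteration=0.05)
expensive = total_solve_time(iterations=1800, seconds_per_iteration=0.5)
```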
Pro Tip: Parallelisation
In order to save time you might want to speed up those linear system iterations. Elmer makes it very convenient to parallelise the solution by using all of your machine’s cores. Do the following:
- From the menu bar in ElmerGUI, select Run then Parallel Settings.
- Tick the Use parallel solver tickbox.
- Input your number of cores in the Number of processes: editbox.
The Parallel settings window should look like this:

Figure 2
Elmer Parallel Settings
You will most likely never need to touch the other settings in the window.
Elmer will slice the domain into as many parts as you have cores, and process the partitions independently. At the end you will find many vtu files, one for each domain partition (or, if you have a transient or scanning simulation, one series of vtu files for each partition). Additionally, you will have a pvtu file. You can load this pvtu file in ParaView: it will load the entire solution and you will be able to process it like any other solution (note that the boundaries between partitions might be visible depending on the postprocessing you perform).
What if I cannot get Low Convergence Norm Values?
It could happen that, no matter what you do, your convergence norm stays somewhat high, hitting a plateau at maybe $10^{-7}$ or so. In this case you could call it a day and set your tolerance to $10^{-7}$. In fact, for certain solvers, the more iterations pass the more the solution might become corrupted by small numerical errors being fed back over and over.
Accuracy should not be negatively affected by setting a higher convergence tolerance, as accuracy is mainly controlled by mesh quality. So, identify where the convergence plateau for your problem is and set the threshold there. If you are afraid your solution might not be accurate, the best thing to do is verify the system with one or more simple benchmark problems (for example a pulsating sphere or a pipe). If the FEM solution, at a high enough $\xi$ for the benchmark problem(s), agrees well with the analytical solution, then you are good.
If the convergence norm plateau is very high (bigger than $10^{-5}$) you could perhaps submit a question on the Elmer Forums.
The Final Solution
By using the ideas outlined above, it was possible to get the Home Studio $400$ $\text{Hz}$ problem to converge to a final solution. The results are briefly shown here.
The convergence plot for the converging $400$ $\text{Hz}$ simulation is shown above (orange line), compared to that of the original study (blue line).
It is possible to see that the refactored study converged after $\sim6600$ iterations. The original study seemed to have a faster convergence rate. However, the refactored study made use of ILU0 preconditioning plus parallelisation, so it was much faster to solve. The original study used ILUT, which is more computationally expensive, so it is expected to take longer to produce a solution even when using parallelisation. The refactored study took a couple of minutes to compute on a machine with an i7 and $32$ $\text{GB}$ of RAM.
Finally, let’s take a look at the resulting field.

Figure 3
Pressure Field at $400$ $\text{Hz}$
The plot above shows the real (left) and imaginary (right) parts of the solution field, depicted as surfaces of constant pressure. Clearly, we got ourselves a very complex field with many features, as expected given the high frequency. This is indeed the reason why convergence was tricky to achieve.
Conclusion
In this episode we discovered the following:
- Acoustic fields become more complex the shorter the wavelength gets in relation to the size of the domain.
- More complex fields yield slower convergence.
- It is possible to rework a FEM study so to help convergence as much as possible.
- Faster convergence does not always mean faster execution times. This is because the total time is governed by both the number of iterations and the time required to complete a single iteration.
- It is possible to speed up execution through parallelisation.
- Accuracy is mainly controlled by mesh properties, not the linear system convergence tolerance.
In this episode we achieved easier convergence mainly through:
- Removing unnecessary domain features.
- Using a first order mesh.
- Allowing for more linear system iterations.
With the handy help of parallelisation that kept execution fast.
In the next episode we will try to understand what we are sacrificing, in terms of accuracy, when choosing a first order mesh. To do so, we will solve The Pulsating Sphere study again, but with first order meshes, and compare the results with those from the original study, which used second order meshes.
License Information

This work is licensed under a Creative Commons Attribution 4.0 International License.