Ear Canal - Part 1
Stefano Tronci
It has been quite some time since I contributed a new tutorial to this website. In addition to a general lack of time, the main reason is that I have been busy studying FEniCSx. My intention was to create a piece of software to solve the Helmholtz equation in heterogeneous media, with support for adiabatic absorber boundary conditions. Whilst I was able to draft something up (see the fenicsx branch of acoupy_helmholtz) I have not made it very far into workable code yet. This is also because most of my time has been spent dealing with the Helmholtz equation itself, and the modifications required to support such simulations. Eventually all of this work will find its way into various articles on this website. But for the time being I decided to stop for a second. There are many more Elmer solvers that I have not explored yet. I decided that, while I carry on this “research” work, I can explore these solvers and write a few new episodes while doing so. This article is about a first model of the ear canal. The model is not particularly sophisticated, as it uses Elmer's HelmholtzSolve solver that we have come to know and love. However, it deals with a realistic geometry. This allows me to show how to manipulate complicated geometries, produce initial simple results, assess whether the results are reasonable, and lay a foundation for more complicated simulations. So, let's dive in!
Project Files
All the files used for this project are available at the repositories below:
Simulating an Ear Canal
Simulating an ear canal is easier said than done. The ear canal might seem a very simple system: just a tube. However, there are a few things that make it deceptively complicated to handle. These are:
- Complicated geometry. Whilst the ear canal geometry is not particularly complicated, it is still an anatomical structure. Modelling the ear canal as a straight tube can provide good insights, but generally limited to the most basic attributes of its response (for example, resonance frequency).
- Complicated (and complex, as in complex number) termination impedance. The ear canal is not rigidly terminated. Instead, it is terminated by the tympanic membrane. This is, in turn, connected to the ossicles, which connect to the oval window on the cochlea. Not only does this mean that the acoustic impedance is that of a membrane, but one that is connected to a system, the cochlea, which reacts to sound nonlinearly, causing in turn complicated behaviour of the tympanic membrane.
- Thermoviscous losses. The ear canal is a fairly small cavity, especially compared to the wavelengths of audible range sound. This gives rise to microacoustic effects such as thermoviscous losses. These effects are significant, so much so that they need to be captured by ear simulating microphones (for example, see COMSOL's Generic 711 Coupler - An Occluded Ear-Canal Simulator).
To start off, we will only deal with points 1 and 2 in this episode.
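To put point 3 in perspective, the scale of the thermoviscous effects can be estimated from the viscous boundary layer thickness. The sketch below uses ballpark values for air that I am assuming, and a roughly typical canal diameter of 7 mm that is not taken from the geometry used later:

```python
import math

# Assumed kinematic viscosity of air at about 20 degrees Celsius [m^2/s].
NU = 1.5e-5

def viscous_boundary_layer(frequency):
    """Viscous boundary layer thickness [m] at the given frequency [Hz]."""
    omega = 2.0 * math.pi * frequency
    return math.sqrt(2.0 * NU / omega)

# At 1 kHz the layer is only about 70 micrometres thick, a small fraction
# of a roughly 7 mm canal diameter, which is why a lossless model is still
# a reasonable first step.
delta = viscous_boundary_layer(1000.0)
```

The layer thins as frequency rises, but in narrow couplers and crevices these losses accumulate, which is why ear simulators must capture them.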
It is perhaps a good moment to mention that I am not fully satisfied with the results I obtained in this simulation. But this is good: it will allow me to explain how to critique your own simulations. Interestingly, while writing the article I found that COMSOL has attempted this in a similar way; see Type 4.3 Ear Simulator. This gives me some hope that the approach I am following is reasonable.
Setting up the Simulation
References
To set up a simulation we first need a good ear geometry. I decided to download the ear canal impression by Nancy/Lanzi Luo, which is licensed under Creative Commons Attribution. Now, this is a scan of an ear cast. Therefore, I needed to edit the file to extract the ear canal. Spoiler: I think I might have overcut the ear canal, selecting a smaller portion of it rather than the whole thing.
Then, we need the tympanic membrane impedance. I decided to use Measurement of the eardrum impedance of human ears by Herbert Hudde as a source. This is because this paper appears to be one of the very few with readily usable impedance curves that can be digitised and used in a simulation through interpolation. In fact, Lars Birger Nielsen and Mads Herring Jensen did exactly the same thing while working on their model; see The Digital Twin of a New and Standardized Fullband Ear Simulator. It is to be noted that Hudde doesn't exactly provide the tympanic membrane acoustic impedance. Rather, he provides the acoustic impedance at a representative plane in the ear canal about 3 mm from the ear canal termination. Therefore, I truncated the very end of the ear canal model and terminated the ear canal with a flat plane of appropriate (for the frequency) impedance.
Setting up the Geometry
Editing the STL File
Chopping the Parts not Needed
If you download Nancy's model you will be provided with an .stl file. This is a 3D mesh format. To edit it I decided to use Blender. Blender is an extremely powerful program for 3D graphics. Programs of this kind are perfect for editing mesh data. I will not be able to provide a step-by-step tutorial on how to obtain my exact result. The procedure below, however, tells you how to import an stl in Blender, chop out what you do not need, create a few surfaces where needed, and reduce the number of polygons to something more manageable for further processing.
Open up Blender. Then, press delete on your keyboard to delete the default cube that will be present in the scene (alternatively, right click it and choose delete). The Blender window will look as shown in Figure 1.

Figure 1
Blender in a Clean State
Then, choose File > Import > Stl and open the stl file. This might take a while as the mesh is very large. After it is loaded you can use the mouse wheel to zoom in and out. To rotate the view click-hold the wheel and drag the mouse. To pan the view hold shift and click-hold the wheel and drag the mouse. The imported STL file will look like the one in Figure 2.

Figure 2
STL Loaded into Blender
Now, from the Set the interaction mode menu on the top left, select Edit mode, as shown in Figure 3. The application might lag a bit as we are dealing with a very large mesh.

Figure 3
Set the Interaction Mode to Edit Mode
After the interaction mode has finished switching to Edit Mode all the vertices of the mesh will be selected (they will all be orange). If this is not the case, press A. We will use the Bisect tool to chop off parts of the mesh that we do not need. To select the Bisect tool click and hold the Knife tool button on the Toolbar on the left. This will open a tiny menu from which you can select the Bisect tool, as shown in Figure 4.

Figure 4
Selecting the Bisect Tool
The bisect tool allows us to chop a mesh along a plane. Once the tool is selected we can click and drag to select the bisection plane. The idea is to remove anything from the mesh that is not the ear canal itself. Since the mesh has tons of vertices the operations might be a bit sluggish depending on your computer.

Figure 5
First Bisect Cut
Figure 5 illustrates how I made the first cut, along what seems to be the plane of the opening of the ear canal. The yellow arrow and the blue circle allow you to fine-tune the cutting plane. The menu in the bottom left (which might need to be expanded) allows you to select which side of the mesh to clear (the inner side in the case of my cut) and whether to fill the edge with a new face (which we want to do). To confirm the cut, press Enter. For a tutorial on the Knife and Bisect tools you can refer to this video.
After the first cut you will have to do a few more to clean things up, as well as cutting the last 3 mm of the ear canal away. To locate a 3 mm distance from the ear canal termination we can use the Measure tool, as shown in Figure 6. Note that the lengths are displayed in metres while they actually are millimetres. We could change the units displayed by Blender, but I prefer to keep all units untouched and eventually use coordinate scaling in the Elmer simulation.

Figure 6
Measuring the Distance from the Ear Canal Termination
The exact plane at which to cut is somewhat ambiguous. Hudde goes into the details of the definition of the eardrum impedance. As I mentioned already, Hudde does not quite define the eardrum impedance. This is because, at high frequency, the eardrum is not a lumped system. Rather, it is a distributed system that does not vibrate as a whole, but with a complicated resonant superposition pattern. Instead, Hudde selected a plane 2-3 millimetres away from the eardrum since, according to Hudde, at that plane the propagation inside the ear canal is still plane wave, allowing the entire plane to be characterised with a single impedance value (that is, the layer of air at that plane is experiencing lumped motion, so we can define an impedance for the entire plane). Quoting Hudde:
Schröter (1976) could actually show by computer simulations and by measurements in an enlarged ear model, that higher modes decrease to a negligible level at a distance of 2-3 mm from the eardrum.
Figure 7 shows an ear canal diagram taken from Hudde's paper. The diagram is not anatomically accurate, but the eardrum does sit at an angle like that in real ear canals too. The diagram suggests that the plane (D) should be taken 2-3 millimetres away from the edge of the eardrum that is encountered first by sliding a plane orthogonal to the ear canal axis. Or at least, that is my interpretation.

Figure 7
Figure 2 from Measurement of the eardrum impedance of human ears by Herbert Hudde
Now, the issue is that it is not clear on the mesh where the actual eardrum is. In Figure 6 I interpreted the vague feature that I highlighted in orange as the eardrum, and I then measured 2 millimetres from there towards the ear canal. Another option is to measure 3 (or 2) millimetres from the very bottom of the ear canal. Option 1 (2 mm from the weird-maybe-eardrum feature) would make me cut the ear canal shorter, thus possibly reducing the simulation accuracy if I cut too much. But option 2 breaks the assumptions of the paper I am using as reference for the eardrum impedance. Neither option is particularly appetising, so I decided to go for a “hybrid” by cutting at about 6 millimetres from the very bottom of the ear canal. This results in the geometry in Figure 8.

Figure 8
Final Ear Canal Geometry
Decimating the Geometry
As I mentioned, the original ear canal cast stl file has loads of vertices. Not only does this make operations sluggish, it also makes importing the geometry into a preprocessor like Salome impractical. The Decimate Modifier can be applied to your geometry to reduce the number of vertices. In Object Mode, you will find the Add Modifier menu on the right as shown in Figure 9. The Decimate Modifier is very good and you can decimate with very low ratios (< 0.1) without losing much of the detail of the structure.

Figure 9
Adding the Decimate Modifier
It is a good idea to decimate your geometry at some point because we will be using Salome to generate a computational mesh. This mesh will be way coarser than the original one in the stl file. It can be tempting to keep higher detail in the geometry, so as to preserve accuracy as much as possible. However, this would come with the significant drawback of making computational mesh generation harder. With a vertex count as big as the one in the original stl, Salome might find it very hard to handle the project.
Therefore, I went for a 0.0025 ratio to keep the face count below 1000. I also ticked the Triangulate tick-box in the modifier to have Blender turn all the faces into triangles (by splitting larger surfaces into triangles where necessary). Note that this will still happen automatically when exporting to stl. The decimate modifier settings are shown in Figure 10.

Figure 10
Decimate Modifier Settings
Exporting to stl
At this point we can choose File > Export > Stl to export the final geometry. I named my file Geometry.stl.
Meshing
Geometry Preprocessing
I like to use Salome for processing the geometry further. To do so, first open Salome. Then, switch to the Geometry module by using the menu at the top.

Figure 11
Selecting the Geometry Module
Then, choose File > Import > STL and navigate to where you exported the final ear canal geometry. The geometry is pretty complicated (16862 faces in my case). If it proves to be too hard on your computer, go back to Blender and crank the decimate Ratio down.
Due to the very high number of faces, handling this geometry is quite hard and tedious. Luckily, we only need to do it once.
First, the stl that we loaded is just a shell. We need to turn it into a solid. With the stl object selected in the Object Browser on the left, choose New Entity > Build > Solid. This will open the Solid Construction window. I decided to name my new solid ear_canal. Then, choose Apply and Close. My Solid Construction settings are shown in Figure 11.

Figure 11
Solid Construction Settings
Then, with our new ear_canal object selected in the Object Browser on the left, we choose New Entity > Explode. This will open the Sub-shapes Selection window. In the Sub-shape Type menu select Face. Then Apply and Close. This will take some time depending on how many faces you have. My Sub-shapes Selection settings are shown in Figure 12.

Figure 12
Sub-shapes Selection Settings
Now comes the tedious part. We have a total of 842 faces. We need to group them and, after we do so, delete them. This way, when we mesh, we will have only broad groups of nodes on which we can apply the boundary conditions. The groups are:
- inlet for the ear canal input.
- outlet for the ear canal output, or termination.
- wall for the ear canal wall.
To do so we select the ear_canal object from the Object Browser on the left. Then, we choose New Entity > Group > Create Group. This will open the Create Group window. We are going to create the inlet first. In Shape Type select Face. In Group Name type inlet. While the Create Group window is open you can still interact with the 3D view. You can use the Interaction style switch in the top right corner of the 3D viewer to switch between view controls (hold left click + drag to rotate, hold middle click + drag to pan) and selection controls. The Interaction style switch is shown in Figure 13.

Figure 13
Interaction Style Switch
Rotate the view until you see the ear canal input nice and clean. Then, start selecting the faces on it. I prefer to click each single face while holding shift, so that I add them one by one. Make sure sneaky little faces do not escape you! When you have selected them all, click Add. Then, click Apply and Close to create the new group. By right clicking the newly created inlet group in the Object Browser you can choose Edit Group. This opens the Edit Group window, which is pretty similar to the Create Group window. This window and the faces in my group are shown in Figure 14. This is a good way to check that you got all the right faces. If some face is missing, add it.

Figure 14
Inlet Group
Now that we have created the inlet group we should delete the original faces. Right click the ear_canal object in the Object Browser and choose Show Only Children. Then, hide the inlet group. Proceed to select the inlet faces and then delete them with del. A confirmation window will open. Confirm and move to the next step.
Do the same for the outlet group.
The last group is the easiest to create. Only the faces of the walls will be left. Proceed as always, but instead of selecting the faces one by one from the geometry, select them in one go from the Object Browser. Click the first one from the list, hold shift and select the last one from the list. At the end of the process your object hierarchy will look like that in Figure 15.

Figure 15
Final Object Hierarchy
You can see another reason why decimation was necessary in Blender. This process is already very tedious with a few faces to deal with over the inlet and outlet. A large number of faces (maybe hundreds) over them would have made it extremely tedious, if not impossible. For very complicated geometries perhaps the best approach is the programmatic one, using Gmsh or equivalent.
As a last step before moving on to meshing, let's get the area of the ear canal termination. We need this value to compute the termination acoustic impedance. To do so, select outlet from the Object Browser and then, from the top menu, choose Inspection > Basic Properties. This will open the Basic Properties Information window, from which you can read the surface value in square millimetres. In my case this is 29.9 square millimetres, as shown in Figure 16.

Figure 16
Outlet Surface
Meshing
Now switch to the Mesh module. Select the ear_canal object from the Object Browser and choose Mesh > Create Mesh. This will open the Create mesh window. The highest frequency of our simulation is 20 kHz. So, in the Name field type mesh_20kHz. As Algorithm select NETGEN 1D-2D-3D. My Create mesh settings are shown in Figure 17.

Figure 17
Final Create Mesh Settings
Add a new hypothesis by clicking the gear button next to Hypothesis. Choose NETGEN 3D Parameters. This will open the Hypothesis Construction window. In Name type hypothesis_20kHz. As max size we can use anything smaller than 1/12th of the wavelength of a 20 kHz wave at room temperature, which is about 1.4 mm. The default size settings will probably be fine in this case. I decided to go for 1 mm for Max. size and 0.1 mm for Min. size. For Fineness I selected Fine. My Hypothesis Construction settings are shown in Figure 18.

Figure 18
Hypothesis Construction Settings
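The 1/12th-of-a-wavelength rule of thumb is quick to verify numerically (the speed of sound value is an assumption for room temperature air):

```python
C = 343.0     # Speed of sound in air [m/s], assumed room temperature.
F_MAX = 20e3  # Highest simulation frequency [Hz].

wavelength = C / F_MAX           # About 17.15 mm at 20 kHz.
max_element = wavelength / 12.0  # About 1.4 mm, hence the 1 mm Max. size.
```

Any Max. size at or below this value keeps roughly twelve second-order elements per wavelength at the highest frequency, with lower frequencies resolved even better.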
Once you have confirmed the options, right click mesh_20kHz in the Object Browser and choose Compute. After this is done, if you expand the mesh_20kHz object in the Object Browser, you will see that our 3 groups have become mesh groups. We will be able to apply boundary conditions to these groups. My mesh, with the inlet group in evidence, is shown in Figure 19.

Figure 19
Mesh Result
To export the mesh, right click mesh_20kHz in the Object Browser and choose Export > UNV file. We will also need to export the mesh as a MED file. To do so, right click mesh_20kHz in the Object Browser and choose Export > MED file.
Running the Simulation
Now that our mesh is prepared we only need to run the simulation. To do so we can open ElmerGUI and set up a Scanning simulation as done in the Rigid Walled Room episode or any of the Home Studio episodes. The difference in this case is that, for each frequency of the simulation, we also need to index into a list of eardrum impedance values for the termination boundary condition.
This is a perfectly reasonable approach, but not the one I decided to follow for this simulation. Instead, I decided to generate a different sif file for each frequency by using pyelmer, store it in a folder and run the simulation. This is implemented in the repository for this episode. This means that the whole simulation is automated by a set of Python scripts.
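To illustrate the idea (this is not the actual repository code, which uses pyelmer; the template and the eardrum_impedance placeholder below are hypothetical), generating one sif per frequency boils down to filling a template in a loop:

```python
from pathlib import Path

# Minimal sketch: write a sif fragment per frequency into its own case
# directory. Only the frequency-dependent parts are shown; the real
# scripts emit complete sif files via pyelmer.
SIF_TEMPLATE = """Simulation
  Frequency = {frequency}
End

Boundary Condition 3
  Wave Impedance 1 = {z_re}
  Wave Impedance 2 = {z_im}
End
"""

def eardrum_impedance(frequency):
    # Hypothetical placeholder for the digitised, interpolated curve.
    return complex(954.0, -696.0)

def write_cases(frequencies, root="runs"):
    for frequency in frequencies:
        z = eardrum_impedance(frequency)
        case_dir = Path(root) / f"{frequency:.0f}Hz"
        case_dir.mkdir(parents=True, exist_ok=True)
        (case_dir / "case.sif").write_text(
            SIF_TEMPLATE.format(frequency=frequency, z_re=z.real, z_im=z.imag)
        )
```

Each case directory can then be handed to ElmerSolver independently, which also makes the sweep trivially parallelisable.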
I will not be giving a step-by-step explanation of how to write the scripts: for anybody who understands programming and scripting the source code itself should be fairly self-explanatory and serve as an example.
Instead, I will break down how the simulation is set up.
The Boundary Conditions
First, we need a way to get the eardrum impedance as a function of frequency. Hudde has a plot of what we need, reported in Figure 20.

Figure 20
Figure 4 from Measurement of the eardrum impedance of human ears by Herbert Hudde
I could have used figure 5 from the same paper as well, but decided instead to use the curve from the first subject in figure 4 (S1). This immediately shows one of the most common challenges in running models based on anatomy: each human is different. So, which impedance curve should we use? Perhaps a good option could have been the average of all the curves. But at the same time, one could reasonably expect the eardrum impedance and the ear canal shape to be related. This is because here we are not actually talking about the real eardrum impedance, but the acoustic impedance at a plane 3 mm in front of it. Therefore, the use of any of these impedances, as well as their average, is somewhat arbitrary. Not having other options, we can just grab any of these curves.
But first, we need to understand what the curves in Figure 20 are. Hudde is providing the real (resistance) and imaginary (reactance) parts of the $\frac{Z_D}{Z_W}$ quantity. $Z_D$ is what we are after. It is the acoustic impedance (pressure over volume velocity) at the reference plane in the ear canal. $Z_W$ is instead the wave impedance defined by Hudde as follows:
$$ Z_W = \frac{\rho c}{A_D} $$
where $\rho$ is the air density, $c$ is the speed of sound in air and $A_D$ is the area of the ear canal at the reference plane. You are probably aware that the specific acoustic impedance (pressure over particle velocity) of a plane wave is $\rho c$. The acoustic impedance at a surface is obtained from the specific acoustic impedance by dividing it by the area of the surface (Fundamentals of Acoustics, 4th Edition, page 286). Therefore, $Z_W$ really is the acoustic impedance of a plane wave at the reference plane in the ear canal. This is why we had the area $A_D$ computed by Salome in the previous section.
Now, from The Pulsating Sphere episode we know that Elmer requires as impedance the quantity $Z$, which is the boundary specific acoustic impedance divided by $\rho$. Therefore:
$$ z_D = A_D Z_D $$
$$ Z = \frac{z_D}{\rho} $$
If we call the quantity graphed by Hudde $Z_H$ we have:
$$ Z_H = \frac{Z_D}{Z_W} \space \Rightarrow \space Z_D = Z_H Z_W = Z_H \frac{\rho c}{A_D} $$
$$ \Rightarrow z_D = A_D Z_D = Z_H \rho c $$
$$ \Rightarrow Z = \frac{z_D}{\rho} = \boxed{Z_H c }$$
So, it turns out we did not need $A_D$ after all: once we have a way to get the graphed values $Z_H$ all we need to do to obtain an Elmer compatible impedance value is to multiply it by $c$.
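In code the conversion is a one-liner. A sketch, with a hypothetical digitised $Z_H$ value chosen to roughly match the 1 kHz sif file shown later:

```python
C = 343.0  # Speed of sound [m/s], assumed.

def elmer_impedance(z_h):
    """Turn Hudde's normalised impedance Z_H = Z_D / Z_W into the quantity
    Z = Z_H * c that goes into Elmer's Wave Impedance keywords."""
    z = z_h * C
    return z.real, z.imag  # Wave Impedance 1 and Wave Impedance 2.

# Hypothetical digitised value at 1 kHz:
z_re, z_im = elmer_impedance(complex(2.78, -2.03))
```

Note that $A_D$ never appears: the area cancels exactly as in the derivation above.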
Getting the impedance wrong is one of the most common mistakes, so always watch out for it. The issue is that there are multiple kinds of impedance, and the notation is often ambiguous, so it is always important to ensure the correct impedance is being used. Once again this shows the value of simple benchmark problems such as The Pulsating Sphere, which help ensure that the quantity $Z$ derived from the specific acoustic impedance is the one Elmer needs. If we set up a wrong value of $Z$ in a benchmark problem the comparison with the analytical solution will fail!
Now, we need to digitise $Z_D$. Once you have a picture of the $Z_D$ curve you can digitise it with WebPlotDigitizer. A detailed tutorial on WebPlotDigitizer is beyond the scope of this article. However, the program is fairly self-explanatory and easy to use. After telling the program where the axes are and their ranges (and whether any of the axes are logarithmic) you can drop points along the curve (using your arrow keys for fine adjustment) and, when you are done, save your data as csv. It might seem very time consuming, but it is actually quite fast. And perhaps the latest AI assisted version is even faster and more convenient. However, I did not try that as I am not a fan of closed source software which requires you to open an account. I just went with an older archived open source version. After all, this is Computational Acoustics with Open Source Software.
After you get your csv file of data points from your plot, it is pretty easy to use the scipy.interpolate module to interpolate through the data points. I decided to use the PchipInterpolator. This is because it is a monotone interpolant, which prevents undesired wobbliness in the output.
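As a sketch of this step (the data points below are made up for illustration; the real ones come from the WebPlotDigitizer csv files):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hypothetical digitised points: frequency [Hz] and the real and imaginary
# parts of Hudde's normalised impedance Z_H at that frequency.
freq = np.array([100.0, 500.0, 1000.0, 2000.0, 5000.0, 10000.0])
re_pts = np.array([3.0, 2.9, 2.78, 2.5, 1.8, 1.2])
im_pts = np.array([-8.0, -3.5, -2.03, -1.0, -0.2, 0.5])

# PCHIP is monotone between data points, so it does not overshoot the way
# a plain cubic spline can.
interp_re = PchipInterpolator(freq, re_pts)
interp_im = PchipInterpolator(freq, im_pts)

def z_h(frequency):
    """Interpolated normalised eardrum impedance at the given frequency."""
    return complex(float(interp_re(frequency)), float(interp_im(frequency)))
```

Interpolating the real and imaginary parts separately is a simple choice that keeps each curve monotone between its digitised points.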
For reference, you can check my implementation. The plot below shows the eardrum specific acoustic impedance at the reference plane D according to my interpolation.
Click Here for Larger Version
The other boundary conditions are much easier. For the wall of the ear canal I applied a rigid boundary condition (0 normal particle velocity), while for the ear canal input I applied a unit normal particle velocity, uniform at all frequencies. All that is left to figure out is the boundary IDs for the mesh, so that we can input them into the pyelmer constructors (the geo_ids argument of the Boundary class constructor). This is pretty easy: open ElmerGUI and choose File > Open to open the UNV file. Then, choose Model > Set boundary properties. Double click any boundary: the ID number of the boundary will be right there in the title of the window. This is illustrated in Figure 21.

Figure 21
Obtaining the Boundary ID with ElmerGUI. The ID is the Number in the Window Title (3).
The Solver Settings
The solver is the usual HelmholtzSolver. You can refer to pretty much all the other episodes, for example the already mentioned Home Studio series, for more basic step-by-step tutorials. For this episode I applied all I learned from the other projects, including using p:2 elements and the ILUT preconditioner. The sif file for the 1 kHz simulation is presented below for reference. Do not forget that the mesh has units expressed in mm, so a coordinate scaling of 0.001 is needed!
Header
CHECK KEYWORDS "Warn"
Mesh DB "." "."
End
Simulation
Max Output Level = 5
Coordinate System = Cartesian
Coordinate Mapping(3) = 1 2 3
Simulation Type = Steady state
Steady State Max Iterations = 1
Output Intervals(1) = 1
Coordinate Scaling = 0.001
Frequency = 1000.0
Solver Input File = case.sif
Post File = case.vtu
End
Constants
Gravity(4) = 0 -1 0 9.82
Stefan Boltzmann = 5.670374419e-08
Permittivity of Vacuum = 8.85418781e-12
Permeability of Vacuum = 1.25663706e-06
Boltzmann Constant = 1.380649e-23
Unit Charge = 1.6021766e-19
End
! Equation-Helmholtz
Equation 1
Active Solvers(1) = 1 ! Solver-Helmholtz,
Frequency = 1000.0
End
! Solver-Helmholtz
Solver 1
Equation = Helmholtz Equation
Procedure = "HelmholtzSolve" "HelmholtzSolver"
Variable = -dofs 2 Pressure Wave
Exec Solver = Always
Stabilize = True
Bubbles = False
Lumped Mass Matrix = False
Optimize Bandwidth = True
Steady State Convergence Tolerance = 1e-05
Nonlinear System Convergence Tolerance = 1e-07
Nonlinear System Max Iterations = 1
Nonlinear System Newton After Iterations = 3
Nonlinear System Newton After Tolerance = 0.001
Nonlinear System Relaxation Factor = 1
Linear System Solver = Iterative
Linear System Iterative Method = BiCGStabl
Linear System Max Iterations = 1000
Linear System Convergence Tolerance = 1e-10
BiCGstabl polynomial degree = 2
Linear System Preconditioning = ILUT
Linear System ILUT Tolerance = 0.001
Linear System Abort Not Converged = True
Linear System Residual Output = 10
Linear System Precondition Recompute = 1
Element = p:2
End
! Material-Air
Material 1
Sound speed = 343.0
Density = 1.205
End
! Body-Air
Body 1
Target Bodies(1) = 1
Equation = 1 ! Equation-Helmholtz
Material = 1 ! Material-Air
End
! Inlet
Boundary Condition 1
Target Boundaries(1) = 1
Wave Flux 1 = 1.0
Wave Flux 2 = 0.0
End
! Rigid
Boundary Condition 2
Target Boundaries(1) = 3
Wave Flux 1 = 0
Wave Flux 2 = 0
End
! Termination
Boundary Condition 3
Target Boundaries(1) = 2
Wave Impedance 1 = 954.1224915644424
Wave Impedance 2 = -695.993457938884
End
Results
The simulation script produces many simulation directories, each with a SIF and mesh data, in which ElmerSolver is run, producing a VTU file. The post script takes these VTU files and puts them in order for a ParaView animation (also see the Intro to Paraview episode).
Additionally, the post script computes the frequency response between the ear canal input and its termination. As explained in the Frequency Responses episode, one needs to be careful when estimating frequency responses from harmonic simulations. I will explain the caveats of this simulation in the following sections.
The terminology that I use below assumes that the reader is familiar with how steady state harmonic solutions work. If things are not particularly clear check out the Interpreting Helmholtz Solver Solutions episode.
SPL and Phase
The video above shows the spatial part of the pressure field inside the ear canal as a function of frequency (remember that this is what the Helmholtz equation is solved for). On the right the pressure magnitude is shown as SPL. On the left the phase of the pressure field is shown in degrees. The frequency is reported in the top left corner. This visualisation is performed with ParaView (see the animation.pvsm file). The field values are encoded with colour, with isosurfaces provided within the ear canal volume. One thing that is immediately obvious is that, at low frequency, the field starts off by being essentially constant throughout the whole volume. Some pressure gradient starts developing from ~1.5 kHz. From here on some complicated isosurfaces start to appear, but the field on both sides of each isosurface is still very similar. It is only from ~3.5 kHz that some significant distribution happens, first with a significant phase gradient starting to develop at ~3.5 kHz, reaching two distinct poles at 6.5 kHz. From here, by the time we reach 8 kHz, the phase distribution clearly has two poles and there is a significant magnitude gradient. As frequency rises the field in the ear canal becomes more and more complicated, with subvolumes of the system exhibiting pressure disturbances of different amplitude and phase with respect to each other.
This behaviour is fully expected. Sound is governed by partial differential equations; see the Intro to FEniCS - Part 2 episode for a discussion of the Helmholtz equation, the one used in this episode. This is a partial differential equation that describes harmonic pressure waves in linear acoustics. So, we expect to see a complicated field because the quantity we are solving for, the steady state harmonic pressure at a given frequency, is a function of all the spatial coordinates. And the geometry is complicated.
However, in confined systems (such as an ear canal), if the frequency is so low that the period of the pressure wave is much larger than the time the wave needs to propagate through the system, the pressure tends to be constant: it is almost as if the entirety of the air inside the system moved in unison without any distribution. This goes by the name of lumped regime, while at high frequency the system is distributed. For a good understanding of lumped systems I recommend Active Control of Sound by P.A. Nelson and S.J. Elliott. Lumped systems are introduced at page 73 of the first edition. Here we observe the lumped regime gradually morphing into the distributed regime.
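A crude sanity check of where the lumped regime should end, assuming a canal length of about 25 mm (a ballpark figure I am assuming, not measured from my geometry):

```python
C = 343.0  # Speed of sound [m/s], assumed.
L = 0.025  # Assumed ear canal length [m]; a ballpark value.

def length_over_wavelength(frequency):
    """How many wavelengths fit in the canal; << 1 means lumped."""
    return L * frequency / C

# Quarter-wave resonance of a tube driven at one end and (nearly)
# rigidly terminated at the other:
f_quarter = C / (4.0 * L)  # About 3.4 kHz.
```

Under these assumptions the canal is a small fraction of a wavelength long well below 1 kHz, and the quarter-wave estimate lands close to the frequency at which the field observably stops being lumped.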
Frequency Response
As I explained in the Frequency Responses episode, one needs to be careful when it comes to frequency responses. A good introduction to these topics is Chapter 3 of the first edition of the already mentioned Active Control of Sound. In simple terms, one can relate the input of a system to its output by a so-called transfer function. The frequency response is related to the transfer function, being obtained from it through a mathematical operation called restriction. The frequency response is a complex-valued function of frequency which tells us, for each input frequency, how much the action of the system will change its magnitude and phase.
Now, this is very straightforward when you have signals that are functions of time only. This happens in the lumped regime, when the spatial part of the solution is pretty much constant. As soon as the system starts being distributed, things are still straightforward as long as the pressure field is constant (in space) over the input and output of the system. In our case, as long as the pressure distribution is reasonably constant over the surface of the ear canal input and the surface of the ear canal termination, we can still talk about one input pressure signal and one output pressure signal. This is then a SISO (Single Input Single Output) situation. When the pressure eventually shows distribution across these surfaces this is not the case anymore.
One way to compute the frequency response from Helmholtz simulation results is to take the input pressure and output pressure and compare their magnitudes and phases. This is easily done in this context, as all the quantities involved are complex numbers:
$$ F \left( f \right) = \frac{p_o\left( f \right)}{p_i\left( f \right)} $$
Here, $F$ is the frequency response, a function of the frequency $f$. $p_i$ and $p_o$ are the steady state harmonic input and output pressures respectively, both also functions of frequency. For each frequency $f$ the value of $F$ is complex (since the pressure field itself is complex). The magnitude of $F$ is the gain of the frequency response between the input of the ear canal and its output (termination). Its phase is the phase of the said frequency response.
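A minimal sketch of this computation, with made-up complex pressures standing in for values extracted from the simulation:

```python
import numpy as np

# Hypothetical complex input and output pressures [Pa] at three
# frequencies; in practice these come from the simulation results.
p_i = np.array([1.0 + 0.0j, 0.9 - 0.1j, 0.5 - 0.4j])
p_o = np.array([1.0 + 0.0j, 1.1 + 0.2j, 0.9 + 0.8j])

F = p_o / p_i                         # complex frequency response
gain_db = 20.0 * np.log10(np.abs(F))  # magnitude [dB]
phase_rad = np.angle(F)               # phase [rad]
```

When input and output are identical (the first entry above), the gain is 0 dB and the phase is 0 rad, as expected.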
Clearly, based on what I said above, what even are the input and output pressures? I decided to take the average of the field over the input surface and the output surface of the ear canal. This is possible thanks to all the nodes in the mesh being marked by the groups we defined with Salome. So, we can select the field over the `inlet` and `outlet` groups of nodes and average. I went for this approach because an instrument, such as a microphone, effectively averages the pressure over its surface to some extent. It is as if I put matched-size microphones over the input and output of the ear canal, without altering the boundary conditions: a thing that is clearly only possible in simulation. This works very well when the field is constant over the surfaces, but can easily lead to strange results when it is not. This is shown in the plot below.
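A sketch of this averaging, with hypothetical nodal pressures and node indices standing in for the actual Elmer output and Salome groups. Note that a plain mean over nodes ignores the share of surface area each node represents; a proper surface average would weight the nodes accordingly:

```python
import numpy as np

# Hypothetical complex nodal pressures [Pa]; in practice this array
# would be the pressure field exported by Elmer.
pressure = np.array([1.0 + 0.2j, 1.1 + 0.1j, 0.2 - 0.9j, 0.3 - 0.8j, 0.5 + 0.5j])

# Hypothetical node indices for the inlet and outlet groups.
inlet_nodes = np.array([0, 1])
outlet_nodes = np.array([2, 3])

p_i = pressure[inlet_nodes].mean()   # average input pressure
p_o = pressure[outlet_nodes].mean()  # average output pressure
F = p_o / p_i                        # frequency response at this frequency
```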
The plot above shows the magnitude of $F$ (in decibels) in the top panel, and its phase (in radians) in the bottom panel. As we already observed, the pressure is quite constant over a wide range of frequencies. Looking at the magnitude, we can see that there is little gain (or attenuation) up to ~5 kHz. Phase change always precedes magnitude change: in this case the gradient of the phase (the negative of which is known as the group delay) clearly becomes much steeper at around ~3 kHz. Things carry on looking somewhat reasonable up until 10 kHz. Not only is the curve smooth, but it compares somewhat well with some of the measurements reported in Measurement Apparatus and Modelling Techniques of Ear Canal Acoustics (page 66). The trends in the measurements are similar, with a flatter low frequency region and a broad peak at higher frequency (1 kHz and above). The high frequency region (10 kHz and above) is fairly chaotic. However, from a quantitative point of view the agreement is not very good. The broad peak in the measurements starts to develop at ~3 kHz, while in our simulation it does so at ~5 kHz. The part above 10 kHz is much more chaotic than in the measurements. Most likely, this is due to the following reasons.
- The shift of the main peak of the frequency response towards higher frequency in simulation might be a symptom of the ear canal geometry having been overcut (cut too short). An overcut will shift all features up in frequency.
- The Helmholtz equation does not account for thermoviscous losses. Adding these losses might smooth out the chaotic high frequency region and make it more realistic.
- Having taken the average of a complex, non-constant field over a surface as a figure for the input and output pressures might not lead to accurate results.
- The input boundary condition of uniform velocity is applied all over the ear canal input, meaning that the boundary condition is essentially that of a closed ear canal. This might not be very close to the measurement conditions.
- The selected ear canal impedance curve might not match what we would have measured in Lanzi Luo's ear (the source of the geometry). Given that the ear canal shape and its termination impedance must be correlated in some way, this might give rise to strange results.
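As an aside, the group delay mentioned above can be computed numerically from the unwrapped phase of $F$. A sketch with a synthetic phase curve (a pure 0.1 ms delay, whose group delay is constant and equal to that delay) standing in for simulation data:

```python
import numpy as np

# Synthetic data: a pure 0.1 ms delay has phase -2*pi*f*tau.
f = np.linspace(100.0, 10000.0, 200)   # frequency axis [Hz]
tau = 1e-4                             # delay [s]
phase = -2.0 * np.pi * f * tau         # phase of F [rad]

# Group delay is the negative derivative of the (unwrapped) phase
# with respect to angular frequency: tau_g = -d(phase)/d(omega).
omega = 2.0 * np.pi * f
tau_g = -np.gradient(np.unwrap(phase), omega)
```

For this synthetic input the recovered group delay is constant at 0.1 ms, which confirms the computation before applying it to real simulation data.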
A good way to check the geometry, besides taking some measurements with a 3D editor like Blender, could be to run an eigenvalue analysis of the system. According to multiple sources (for example Fundamentals of Acoustics, 4th Edition, page 312) the fundamental resonance of the ear canal should be ~3 kHz. The complicated nature of $F$ doesn't allow us to clearly see where the fundamental resonance is, which is why the eigenvalue analysis is important. It does seem, however, to be larger than 3 kHz. As for the other issues, one will need to use another governing equation, different boundary conditions and different input-output pressure determination methods.
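A quick sanity check of the ~3 kHz figure: modelling the canal as a straight tube, open at the entrance and closed at the eardrum, makes it a quarter-wave resonator. The length below is an assumed textbook value, not a measurement of this geometry:

```python
# Assumed values, for a back-of-the-envelope estimate only.
c = 343.0   # speed of sound [m/s]
L = 0.025   # ear canal length [m] (~25 mm)

# A tube open at one end and closed at the other resonates when the
# tube is a quarter wavelength long: f1 = c / (4 L).
f1 = c / (4.0 * L)   # fundamental resonance [Hz], ~3.4 kHz
```

With these assumed values the fundamental lands at about 3.4 kHz, consistent with the literature figure; an overcut (shorter) canal would push this estimate, and every feature of the response, higher in frequency.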
This step, the critique of the simulation results, is extremely important. Checking the agreement with measurements and literature is a very good way to gauge whether there is something wrong afoot. The corrective actions are best taken gradually: by doing so one is normally able to figure out which are the most important reasons behind inaccurate simulations. This, in turn, makes it easier to produce good simulations in the future.
Conclusion
In this episode I built an ear canal model. I described all the pre-processing steps in a fair amount of detail, but only glossed over the simulation implementation, which now makes use of Python scripts. This is because the scripts themselves are the step-by-step procedure, and I think they are easy enough to read for anybody familiar with Python or scripting in general.
The model was built by using measurements of ear canal impedance and a cast of a real ear canal. The simulation was a steady state Helmholtz simulation at many values of frequency. The results make some sense qualitatively, but they are quite chaotic at high frequency, and everything happens at frequencies that are higher than expected. This means that multiple aspects of the model need to be fixed, such as the geometry, the way input and output pressures are computed, and the governing equation.
In the next episodes of this series we will look into how to improve this model.
License Information

This work is licensed under a Creative Commons Attribution 4.0 International License.