Playing Around With A 2D Wave Algorithm

If anyone reading this is prone to epileptic seizures, you may not want to watch the embedded videos.

The other day I was thinking about the old Hugo Elias 2D Water article and decided to play around with it. He writes about a 2D water effect that handles an arbitrary number of ripples without using wave functions (e.g., sine, cosine). It does this by performing a per-pixel wave simulation. What the article doesn't mention is that it's not just a pretty graphics technique, but also a fairly accurate simulation of wave behavior.

This article is a collection of observations I've had while playing with this method, illustrated with screenshots and video captures of a test application. Not all the demos shown here are included in it, but it can be downloaded here for Windows if you want to play along.

Sample application creating a focused region and shadow from an emitter inside a parabolic dish. In this visualization mode, red is negative, and green is positive.

Sadly, I currently can’t show it as an embedded web application because WebGL doesn’t support the floating-point textures I used.

The Old Code

A quick refresher: here's a copypasta of the old pseudocode from Hugo Elias's website.

    damping = some non-integer between 0 and 1
    begin loop
        for every non-edge element:
            Buffer2(x, y) =
                (
                    Buffer1(x-1, y) +
                    Buffer1(x+1, y) +
                    Buffer1(x, y+1) +
                    Buffer1(x, y-1)
                ) / 2 - Buffer2(x, y)

            Buffer2(x, y) = Buffer2(x, y) * damping
        end for
        Display Buffer2
        Swap the buffers
    end loop

So it involves two large arrays of numbers (scalar fields) that are swapped back and forth each step to represent the most recent and second-most-recent signal data.
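For reference, here's a minimal translation of that pseudocode into Python with NumPy. This is my own sketch, not code from the original article; the grid size and damping value are arbitrary choices.

    import numpy as np

    SIZE = 256       # grid resolution (the original article used 256x256)
    DAMPING = 0.99   # some value between 0 and 1

    curr = np.zeros((SIZE, SIZE), dtype=np.float32)  # most recent field (Buffer1)
    prev = np.zeros((SIZE, SIZE), dtype=np.float32)  # 2nd-most-recent field (Buffer2)

    def step(curr, prev):
        # For every non-edge cell: half the sum of the four neighbors of the
        # most recent field, minus that cell's 2nd-most-recent value.
        nxt = np.zeros_like(curr)
        nxt[1:-1, 1:-1] = (
            curr[:-2, 1:-1] + curr[2:, 1:-1] +
            curr[1:-1, :-2] + curr[1:-1, 2:]
        ) / 2.0 - prev[1:-1, 1:-1]
        nxt *= DAMPING
        return nxt, curr  # swap: the new field becomes the most recent one

    curr[SIZE // 2, SIZE // 2] = 1.0  # poke the water once
    for _ in range(100):
        curr, prev = step(curr, prev)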

Modern Day Code

The code for the editor and simulation can be found here. The sim was implemented in the Unity game engine.

While nothing is stopping us from implementing this the same way he did, that article was written over 20 years ago. Since then, graphics APIs have come a long way, and stuff like this is done in shader code to leverage the GPU. Because of this, large simulation sizes can be used without breaking a sweat. The original article shows a 256×256 grid done on the CPU. I've tested 2048×2048 on the GPU (with graphics shaders) without issues, although the size for the sims shown on this page is 1024×1024.

For GPU rendering, I'm using three buffers instead of two. The extra buffer is the texture we're rendering to (with RTT). For maximum precision and dynamic range, I'm using 32-bit floating-point textures. All the buffers are four-component textures (RGBA). That's probably overkill, but I just mindlessly defaulted to that.

It's actually more than 3 textures. I've dedicated an extra texture to representing inputs, and another texture to obstacles. This makes 5 textures in total. It should also be noted that the Hugo Elias article never talked about obstacles or impedance; that's something extra I'm adding.

An extra texture is used as the RTT output of the sim. Two extra textures were also added on top of that.

One benefit of having obstacles and inputs as their own separate textures is that we can represent these things in our simulation (which I'll be calling actors from now on) as objects in the 3D world of a game engine (i.e., Unity), which are drawn into the textures with a Camera and the engine's render-to-texture (RTT) feature.

The terms signal and wave will also be used almost synonymously. But the term signal will be used to imply more intent behind the wave being propagated, as opposed to random noise.

This isn't a tutorial, so we're not going to go any deeper into my implementation than that. For a simple example of the method in a graphics API, see this Shadertoy entry.
The Shadertoy example is a direct conversion of the Hugo Elias article and does not contain any of the extra additions used in this article.

Inputs

The previously mentioned demos are designed as water ripple effects. When a touch screen is pressed or a mouse is clicked, a quick pulse is injected. While the pointer stays pressed, a constant signal (DC) is injected every frame, and another, reversed pulse is sent when the pointer is released. The reverse pulse isn't explicitly sent but happens naturally from no longer applying the direct signal.

A direct signal is caused when the pointer touches the simulation. While touched, that constant signal is applied, and a single pulse wave is propagated. Another pulse is propagated when the pointer is released.

But we're far more interested in its wave dynamics than in accomplishing a graphical effect. This includes viewing alternating signals (AC), which are supported by placing objects in the scene that undulate their signal at a certain frequency. To do this, models are drawn with an unlit material whose color is animated over time between negative and positive values using the sine function. While negative values aren't common for normal rendering, it's possible to render and store them with floating-point textures.

By lowering and raising the input value, an alternating signal is propagated. The rate at which we animate the signal controls the frequency of the wave. Note there's no actual pinching or interaction involved; this is just a metaphor to complement the previous illustration.
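In the NumPy sketch from earlier, the equivalent of such an animated emitter is just overwriting the emitter's cells with a sine value before each step (the frequency and position here are made-up values for illustration):

    import numpy as np

    FREQ = 0.05  # cycles per simulation step (arbitrary)

    def inject_ac(curr, t, x=128, y=128, amplitude=1.0):
        # Swing the emitter cell between -amplitude and +amplitude over
        # time, propagating an alternating (AC) signal outward.
        curr[y, x] = amplitude * np.sin(2.0 * np.pi * FREQ * t)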

Visualization

Since we’re just focused on its behavior, we’re not going to dress it up with fancy effects. Instead, we need methods of viewing the data to better make sense of what’s going on.

The application has 3 visualization modes named greyscale, red/green, and spectrum. These aren't exactly technical terms; I'm just naming them after what they look like.

Greyscale is the raw wave data. But since the color value 0 is black, and the wave data can go negative (and we can't view negative color values because black is as dark as things get), we bias the visualization so that half grey represents 0.
Red/Green is a scheme where negative wave data is red and positive wave data is green. These were chosen because they're the first two color components of RGB.
Spectrum is a color scheme where the wave data is remapped from [-1,1] to [0,1], and that value is then used to sample a texture map with rainbow-ordered hues (ROYGBIV), commonly used for heatmaps. Because the tiers of wave height map to distinct colors instead of just brighter ones, they're more easily told apart.

The texture map used for the spectrum visualization.

On a side note, WordPress is astonishingly allergic to embedding wide, 1-pixel-tall images into the document.

Left) Greyscale. Middle) Red/Green. Right) Spectrum.

It should be noted that at the lower end of the spectrum, red, yellow (a combination of red and green), and green take up the entire bottom half of the color space. This, along with the red/green visualization, is probably a poor choice if the viewer is red-green colorblind.
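As a rough sketch, here's how the three mappings might look in the NumPy version, assuming wave values in [-1, 1] and a ROYGBIV ramp stored as an (N, 3) array (both assumptions on my part):

    import numpy as np

    def greyscale(wave):
        # Bias the raw data so that 0 maps to half grey.
        return np.clip(wave * 0.5 + 0.5, 0.0, 1.0)

    def red_green(wave):
        # Red for negative wave data, green for positive.
        rgb = np.zeros(wave.shape + (3,), dtype=np.float32)
        rgb[..., 0] = np.clip(-wave, 0.0, 1.0)
        rgb[..., 1] = np.clip(wave, 0.0, 1.0)
        return rgb

    def spectrum(wave, ramp):
        # Remap [-1, 1] to [0, 1], then look the value up in the ramp.
        t = np.clip(wave * 0.5 + 0.5, 0.0, 1.0)
        idx = (t * (len(ramp) - 1)).astype(int)
        return ramp[idx]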

The Wave Algorithm

The purely mathematical version is well documented on Wikipedia. Like all math articles on Wikipedia, it gets really dense, really fast, so we’re just going to gloss over it. I’m just pointing out that it’s there.

From what I've skimmed and understood, this style of simulation is called a finite-difference time-domain method (FDTD) – and the grid is referred to as a Yee grid when in 2D, or Yee lattice when generalized for arbitrary dimensions. A lot of the literature says it's for simulating wave electrodynamics, but I think the term FDTD still applies when simulating other types of waves, like acoustics (vibrations through physical matter).
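One connection worth spelling out (my own algebra, so treat it as a sketch): Elias's update is what falls out of discretizing the 2D wave equation

    \frac{\partial^2 u}{\partial t^2} = c^2 \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right)

with central differences in both time and space:

    u^{t+1}_{x,y} = 2u^t_{x,y} - u^{t-1}_{x,y} + \frac{c^2 \Delta t^2}{\Delta x^2} \left( u^t_{x-1,y} + u^t_{x+1,y} + u^t_{x,y-1} + u^t_{x,y+1} - 4u^t_{x,y} \right)

Picking the constant c^2 \Delta t^2 / \Delta x^2 = 1/2 makes the 2u^t_{x,y} term cancel against half of the -4u^t_{x,y} term, leaving

    u^{t+1}_{x,y} = \frac{u^t_{x-1,y} + u^t_{x+1,y} + u^t_{x,y-1} + u^t_{x,y+1}}{2} - u^{t-1}_{x,y}

which is exactly the pseudocode's neighbor-sum-over-two-minus-previous formula (before damping).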

Properties

Now for the fun part, watching what this method can do. Let’s check out the various wave phenomena that we can observe with this method. Since I’m not a physicist (just a humble graphics and audio guy who has some fundamentals on light and sound acoustics), there’s probably a bunch of details I’m missing and other properties I could have demonstrated if I knew of them.

Propagation

Propagation is simply the wave’s ability to move outward. Not much more to say about this.

A wave moving through a medium – i.e., wave propagation.

Reflection

When reading and writing data into our scalar fields, we can deny writing values to certain regions of the field. This creates an impassable region, also known as an obstacle or boundary. In the sim, I just set the color values of obstacles to zero. When a wave attempts to travel into these unmodifiable regions, it naturally bounces back.

A wave reflecting off an impassable boundary.
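In the NumPy sketch, the equivalent of denying writes can be as simple as multiplying in a binary mask after each step (the mask array is my stand-in for the obstacle texture):

    # mask: 1.0 where waves may exist, 0.0 inside obstacles.
    def apply_obstacles(field, mask):
        # Zeroing the field inside obstacles every step means a wave
        # arriving at the boundary has nowhere to go but back out.
        field *= mask
        return field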

The emitters work by drawing straight onto the input buffer, which is then drawn straight onto the signal buffer. Tragically, this means emitters also reflect signals that travel into them. Also, I have not found a robust way to completely prevent reflections at the edges of the simulation bounds.

When an alternating signal reflects at an angle, you'll usually notice a grating pattern as the incoming signal passes through the outgoing signal. But once the reflected wave fully exits, it's undistorted.

Wave reflection.

In the video above, an emitter is partially blocked so the reflection is easier to see.

Transmission and Refraction

When a wave passes through a medium, that's called transmission. And the wave's speed during transmission will vary based on the medium's index of refraction (IOR). A medium refers to the material a wave is passing through; for example, when you hear sound, the medium is air.

A wave refracting when entering and exiting a blue medium with a higher IOR than the outside.

One effect is that a medium will buffer a signal, making it take more or less time to pass through than another medium would.

Another effect is that it can bend the signal. This is based on 3 things:

  1. The IOR of the material the signal is entering.
  2. The IOR of the material the signal is exiting.
  3. The angle at which the signal moves through the interface of these two materials.

By interface, we mean the boundary where the two mediums come into contact with each other.

The IOR is represented in the alpha channel of the boundary texture. If the value of a pixel there is 0, it's an obstacle, but if it's in the range (0,1), it scales our sampling offset for the convolution. This causes the simulation to read neighboring pixels closer to the signal, which slows down propagation.

A wave refracting through an object. Note how the signal moves slower in the higher-IOR medium, which also compresses the wavelength (a higher spatial frequency) while inside. Also, note how the signal is redirected to exit from a lower position.
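To be clear, the sketch below is not the offset-scaling trick described above; it uses the more common FDTD approach of scaling the Laplacian term by a per-cell speed factor. Both slow propagation inside high-IOR regions, just by different means.

    import numpy as np

    def step_with_ior(curr, prev, speed2):
        # speed2 holds (c * dt / dx)^2 per cell, in (0, 0.5]; smaller
        # values mean a higher IOR and slower local propagation.
        lap = (
            curr[:-2, 1:-1] + curr[2:, 1:-1] +
            curr[1:-1, :-2] + curr[1:-1, 2:] -
            4.0 * curr[1:-1, 1:-1]
        )
        nxt = np.zeros_like(curr)
        nxt[1:-1, 1:-1] = (
            2.0 * curr[1:-1, 1:-1] - prev[1:-1, 1:-1]
            + speed2[1:-1, 1:-1] * lap
        )
        return nxt, curr

With speed2 = 0.5 everywhere, this reduces to the original update.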

Moving slower causes a change in wavelength inside the medium. And if the signal enters a medium and exits back out into the original medium, and the entry and exit angles aren't the same, its direction will have (slightly) shifted; think of a prism's dispersion effect or chromatic aberration.

Because slowing the signal creates resistance for the wave to pass through, the simulation will reflect some of the wave energy at the interface.
For an interesting related video involving reflections at interfaces, see this Steve Mould video.

Total Internal Reflection

Although the simulation has been pretty impressive, I was skeptical about whether it would model total internal reflection (TIR), but it does!

As covered in the previous section, when a wave passes an interface between two mediums, it bends from refraction. TIR is what happens when the bending from refraction is so extreme that the refracted wave doesn’t exit the medium. Instead, it reflects back into its current medium.

Total internal reflection: if the IOR ratio between the mediums at an interface is high enough, and a wave inside the higher-IOR medium (blue) hits the interface at a glancing angle, it curves right back in instead of exiting the medium.

For this to happen, the ratio between the IORs at the interface needs to be high enough, and the angle at which the wave approaches the interface needs to be glancing (i.e., not moving directly into it, but rather at a shallow angle).
For more information on TIR, check out this video from Smarter Every Day.
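For reference, Snell's law gives the critical angle where TIR starts, measured from the surface normal:

    \theta_c = \arcsin\!\left( \frac{n_2}{n_1} \right), \quad n_1 > n_2

Any wave inside the higher-IOR medium (with IOR n_1) that hits the interface at an angle larger than \theta_c (i.e., more glancing) reflects back in.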

Total internal reflection prevents signals from escaping if the angle of travel glances the IOR interface instead of passing through at a direct angle.

In the video above, we can see the signal bounce inside at the interface. Still, the signal never escapes unless its direction of travel is perpendicular enough to the interface's surface. In fact, if you watch how it pulses at the interface, you can see that's the wave turning back inwards.

This is the premise of how waveguides work, such as fiber optics.

Fun fact: in the movie Finding Dory, Pixar had to cheat the physics to get certain water-tank camera shots. Their shaders were so physically accurate that, from the camera's view looking out from inside an aquarium, TIR would have just shown a reflection of Dory on the glass.

Gradient IORs and Mirages

There’s also the question of “what happens if I have an IOR gradient instead of a hard interface”?

A mirage bending a wave over a series of refractions in a medium (or multiple mediums) that form a gradated IOR. In this image, the darker the blue, the higher the IOR.

If you do a lot of driving in the hot sun (Houstonian here), you'll often notice reflective regions on the pavement that seem to disappear the closer you get to them. These are mirages, and they're related to TIR. We see the same mirages in many cartoons where a character dying of thirst in a desert sees an oasis in the distance. The reflection looks like water in the distance (usually, they're drawn with palm trees, but that's an added exaggeration). But what we're really seeing is the light from above curving upwards before it reaches the ground.
The biggest difference between mirages and TIR is that mirages happen on an IOR gradient while TIR happens at a sharp IOR interface change.

An example of the roads looking like a reflective liquid from a mirage – but on urban roads instead of in a cartoon desert.

This happens because air temperature affects the air's IOR, and there's a sharp gradient near the hot pavement (where you'll notice heat haze). And it only happens far away because the ray coming into your eye has to "reflect" off the ground at a very glancing angle.

Example of a signal curving into a mirage.

While a perfect per-pixel gradient is possible, I went with a graduated set of blocks similar to the diagram above for simplicity. This choppiness of the gradient does mean the curves will also be choppy, though. If the gradient were graduated at the per-pixel level, the curves would be perfectly round (in theory).

Diffraction

Diffraction is a wave's ability to redirect itself when moving around corners and through small openings.

When a wave moves through a small opening, we can see the wave's direction change, guided by the direction of the opening.

Wave diffraction: a small amount of the wave's energy leaking and being redirected by a corner or small opening.

When it's moving around corners in large open spaces, we can also notice diffraction. The wave will move into shadowed regions that would normally be occluded if the signal moved as a particle. But as it moves into the shadowed regions, the signal gets fainter.

Diffraction.

In the video above, we can see a few slit demos. Try to ignore the reflections. It's performed at a few angles to show that the diffraction direction is guided by the slit, regardless of which way the wave was previously traveling.
I also find it interesting in that video how the wave looks like it's getting through the holes in angled barriers by being "cleaved" at the corner's edge.

Also, notice the more perpendicular the entrance, the weaker the signal that gets through.

Parabolic Focusing

Waves can be focused with a parabolic dish.

Waves can be redirected to a single converging point using a parabolic dish.

This is done to focus all the signal coming from a specific direction into a single spot, where a sensor can receive that boosted signal. We see this a lot for focusing EM signals, such as with a TV satellite dish, but it applies to other waves too, such as audio with parabolic microphones.

Focusing signals coming from a specific direction with a dish reflector.

Notice how focusing the signal creates a stronger signal than what was originally distributed across the dish.

Interference

Waves pass completely through each other without affecting each other. But when waves line up, their combined signal can be considered a new wave that's the combination of them both. If the waves are the same frequency and moving in the same direction, what happens depends on their relative phase.

This is called wave interference.

If they have the same phase, construction occurs, and the combined wave has an amplitude equal to the sum of the contributing waves'. If the waves are half a phase apart from each other, destruction occurs, and the combined wave's amplitude is the difference of the two waves' amplitudes. If they had the same amplitude, the waves completely cancel each other out.

Often the waves don't neatly line up in every way, but in the chaos of bouncing and traveling every which way, certain overlapping waves will cancel out or construct.
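Superposition is easy to sanity-check numerically. Here, two sine waves of the same frequency are summed in phase and half a phase apart (the numbers are purely illustrative):

    import numpy as np

    x = np.linspace(0.0, 2.0 * np.pi, 100)
    a = np.sin(x)

    construction = a + np.sin(x)          # same phase: amplitude doubles
    destruction = a + np.sin(x + np.pi)   # half a phase apart: cancels

    print(np.max(np.abs(construction)))   # ~2.0
    print(np.max(np.abs(destruction)))    # ~0.0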

Wave interference is used to beam-form vertical cones.

In the video above, two emitters are combined to create vertical conical beams. This is done by placing emitters close to each other in a specific arrangement, where construction and destruction happen at intended angles to direct the signal. A front lobe is created along with a symmetrical back lobe. There are also two side lobes: one on the left and one on the right.

Standing Waves

Standing waves are created when a wave bounces back and forth in a line along a distance that's an exact integer multiple of half the wave's wavelength.
Example distances that are multiples of half the wavelength include a single half (0.5x), the full wavelength (1x), 1.5x, 2.0x, 2.5x, 3.0x, etc.
This causes the wave to appear not to move, but instead oscillate in place.
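Written as a formula, the condition is just that the reflection distance L fits a whole number of half-wavelengths:

    L = n \cdot \frac{\lambda}{2}, \quad n = 1, 2, 3, \ldots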

If a wave is repeatedly reflecting back and forth along a distance exactly an integer multiple of half its wavelength, it will appear to oscillate in place. In this specific illustration, the distance is 3x the wavelength.
A contained standing wave, and a non-standing wave.

On the left is a standing wave (or as close as I could get to one). At a certain point, because of construction, the values get so big that we can't even see the gradient of the wave; it just looks like it's pulsing from pure red to green. On the right is a non-standing wave; it seems to phase between strobing behaviors like unsynced car blinkers.

A great example of standing waves is using ultrasonic audio (sound waves whose frequencies are above the range of human hearing) to levitate small beads in air, such as in this video.

Absorption

Certain mediums may absorb a signal, or as the original article calls it, apply "damping." I didn't do this on a per-medium basis, but it could have been implemented in the same way as per-medium refraction, especially since the sim uses RGBA textures and I have a few unused channels left.
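A sketch of what that could look like (my own guess at an implementation, not something the app does): store a per-cell damping factor in a spare channel and multiply it in every step, instead of using one global constant.

    # damping_map: per-cell values in (0, 1]; lower values absorb faster.
    def apply_absorption(field, damping_map):
        field *= damping_map
        return field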

As a wave travels through a medium, it can lose energy the longer it propagates.

This gradual decay in energy as it passes through a medium is also referred to as attenuation.

I imagine there's more to absorption than naively lowering the signal, but all that's outside my understanding, so we'll move on.

Respecting Obstacle Scale

When a wave is partially blocked by an obstacle, what happens next depends on the size of the obstacle and the wave’s wavelength.

If the wavelength is long enough, the wave will pass around the obstacle nearly unaffected. But if the wavelength is too short, it will be blocked. We can see the same thing happening in the simulation.

Low frequencies passing through small objects unaffected. The higher the frequency, the more prominent the occluded shadow regions.

In the simulation, a low-frequency wave passing through a small obstacle will have a small shadow region, but then the wave pretty much "heals" itself through diffraction. On the other hand, high frequencies will leave a shadow region behind the obstacle that doesn't close back over.

Lensing and Collimation

Knowing that a wave's path changes based on its angle and the IOR of a medium, we can create objects to lens a signal. If a signal is reshaped into a (tighter) column, it's called collimation.

Lensing an emitted wave into a column. For fun, I reflect the column.

In this 2D sim, with a high enough frequency and some lensing actors, we can see similar effects.

When we think of lenses, we often think of light and EM passing through a transparent medium. But this can also occur with other types of waves. For example, if you were imagining light going through a glass lens in the video above, instead imagine sound waves being lensed by gas contained in a balloon.
In this hypothetical balloon scenario, the frequency being heard wouldn't change. The emitter would have to be inside the lens for that to happen, in case you're thinking of a squeaky helium voice.

Shockwave Cones

From how we're doing the convolution (i.e., how we add the values of neighboring pixels in our calculations), the information in any given pixel can only travel a distance of 1 pixel per simulation step. This caps the speed of propagation (the wave's phase velocity) at 1 pixel per simulation step.

If an emitter is moved more than 1 pixel per step, a cone will form. And if it's moved too drastically, interreflections at the simulation edges are amplified, and the entire simulation becomes unstable.

The most obvious example of a wave emitter moving faster than the speed of wave propagation is a sonic boom.

Another example is Cherenkov radiation. The speed of light in a vacuum is the fastest speed allowed in the universe, but light travels through a medium slower than that, based on the medium's IOR. Particles have been observed moving faster than light waves propagate in some mediums; the resulting shock emits visible light.

Dragging an emitter more than 1 pixel per frame.

Reciprocity

When a wave travels through an environment (propagating, reflecting, refracting, etc.), the observation that it traveled a certain way from a start position to an end position also means the exact opposite path is traversable. This is called wave reciprocity.

Under most conditions, the path a wave travels through an environment is reversible.

When talking specifically about light and EM radiation, this is called Helmholtz reciprocity.
Pulling a quote from Wikipedia that explains it perfectly in a nutshell, “If I can see you, you can see me.”
I also like the definition on Wikipedia:
“The Helmholtz reciprocity principle describes how a ray of light and its reverse ray encounter matched optical adventures, such as reflections, … .”
Optical adventures, that sounds so whimsical!

To reverse the direction of the waves in the simulation, I swap the most recent signal texture and the previous texture. This reverses the flow of time for a single simulation step. Afterward, that change in velocity keeps propagating as we keep iterating the simulation.
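In the terms of the earlier NumPy sketch, the whole trick is a single swap:

    # Exchanging the most recent and previous fields flips every wave's
    # velocity; stepping the simulation as usual then runs it "backwards".
    curr, prev = prev, curr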

By reversing the direction signals propagate, we can see how a wave's path is reversible.

In the video above, the signal isn't completely reversed; it's not like watching a perfect rewind playback. Some residual waves are from the signal propagating past where it started. There are probably also some subtle simulation artefacts. Also, some of the initial signals move past the edges of the simulation and escape. When reversing, we can't re-introduce them back into the system, but they're probably needed to deconstruct the residual waves seen at the end of the video.

Entering The Stargate

While I’ve been playing around with all this, Stargate SG1 recently became syndicated on Netflix, and I’ve been going through a nostalgic binge session.

I noticed, with the soldiers walking through the Stargates, that the rippling actually contours their bodies as they enter. I wouldn't be too surprised if they used a similar wave simulation, with some random waves for the default gate pattern and the cross-sections of things passing through fed in as inputs or obstacles, to get that fluid entrance.

Yes, I realize it’s not the pinnacle of production value – but the mechanics for the flowy dynamics seem about right. You just have to imagine technical directors with the appropriate skills, tools, time, and salary doing the effect.

– Stay strong, code on. William Leu
Thanks to Lexon for reviewing.

“Hot road mirage” by Broken Inaglory is licensed under CC BY-SA 3.0