
See it with your lying ears


For the past couple of weeks, I couldn’t shake off an intrusive thought: raster graphics and audio files are awfully similar — they’re sequences of analog measurements — so what would happen if we apply the same transformations to both?…

Let’s start with downsampling: what if we divide the data stream into buckets of n samples each, and then map the entire bucket to a single, averaged value?

for (int pos = 0; pos + win_size <= len; pos += win_size) {

  /* Replace every sample in the window with the window's average. */
  float sum = 0;
  for (int i = 0; i < win_size; i++) sum += buf[pos + i];
  for (int i = 0; i < win_size; i++) buf[pos + i] = sum / win_size;

}

For images, the result is aesthetically pleasing pixel art. But if we do the same to audio… well, put your headphones on, you’re in for a treat:

The model for the images is our dog, Skye. The song fragment is a cover of “It Must Have Been Love” performed by Effie Passero.

If you’re familiar with audio formats, you might’ve expected this to sound different: a muffled but neutral rendition associated with low sample rates. Yet, the result of the “audio pixelation” filter is different: it adds unpleasant, metallic-sounding overtones. The culprit is the stairstep pattern in the resulting waveform:

Not great, not terrible.

Our eyes don’t mind the pattern on the computer screen, but the cochlea is a complex mechanical structure that doesn’t measure sound pressure levels per se; instead, it has clusters of different nerve cells sensitive to different sine-wave frequencies. Abrupt jumps in the waveform are perceived as wideband noise that wasn’t present in the original audio stream.

The problem is easy to solve: we can run the jagged waveform through a rolling-average filter, the equivalent of blurring the pixelated image to remove the artifacts:
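As a minimal sketch (assuming the same float sample buffer as in the earlier snippets, and treating the window size and edge handling as arbitrary choices), the smoothing pass could look something like this:

#include <stdlib.h>
#include <string.h>

/* Rolling-average ("boxcar") smoother: each output sample becomes the mean of
   the surrounding `win` input samples; larger windows round off the stairsteps
   more aggressively. Works on a copy so already-smoothed samples aren't reused. */
void rolling_average(float* buf, int len, int win) {
  float* src = malloc(len * sizeof(float));
  memcpy(src, buf, len * sizeof(float));
  for (int i = 0; i < len; i++) {
    float sum = 0;
    int cnt = 0;
    for (int j = i - win / 2; j <= i + win / 2; j++) {
      if (j < 0 || j >= len) continue;   /* clip the window at the buffer edges */
      sum += src[j];
      cnt++;
    }
    buf[i] = sum / cnt;
  }
  free(src);
}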

But this brings up another question: is the effect similar if we keep the original 44.1 kHz sample rate but reduce the bit depth of each sample in the file?

/* Assumes a signed int16_t buffer; produces levels + 1 output values
   for even values of levels. Needs round() from <math.h>. */

int div = 32767 / (levels / 2);

for (int i = 0; i < len; i++)
  buf[i] = round(((float)buf[i]) / div) * div;

The answer is yes and no: because the frequency of the injected errors will be on average much higher, we get hiss instead of squeals:

Also note that the loss of fidelity is far more rapid for audio than for quantized images!

As for the hiss itself, it’s inherent to any attempt to play back quantized audio; it’s why digital-to-analog converters in your computer and audio gear typically need to incorporate some form of lowpass filtering. Your sound card has that, but we injected errors greater than what the circuitry was designed to mask.

But enough with image filters that ruin audio: we can also try some audio filters that ruin images! Let’s start by adding a slightly delayed and attenuated copy of the data stream to itself:

/* Mix in an attenuated copy of the signal from `shift` samples back; since
   the buffer is edited in place, earlier echoes feed back into later ones. */
for (int i = shift; i < len; i++)
  buf[i] = (5 * buf[i] + 4 * buf[i - shift]) / 9;

Check it out:

For photos, small offsets result in an unappealing blur, while large offsets produce a weird “double exposure” look. For audio, the approach gives birth to a large and important family of filters. Small delays give the impression of a live performance in a small room; large delays sound like an echo in a large hall. Phase-shifted signals create effects such as “flanger” or “phaser”, a pitch-shifted echo sounds like a chorus, and so on.

So far, we’ve been working in the time domain, but we can also analyze data in the frequency domain; any finite signal can be deconstructed into a sum of sine waves with different amplitudes, phases, and frequencies. The two most common conversion methods are the discrete Fourier transform and the discrete cosine transform, but there are more wacky options to choose from if you’re so inclined.

For images, the frequency-domain view is rarely used for editing, because almost all changes tend to produce visual artifacts. The technique is used for compression, feature detection, and noise removal, but not much more; it can be used for sharpening or blurring images, but there are easier ways of doing it without FFT.

For audio, the story is different. For example, the approach makes it fairly easy to build vocoders that modulate the output from other instruments to resemble human speech, or to develop systems such as Auto-Tune, which make out-of-tune singing sound passable.

In an earlier article, I shared a simple implementation of the fast Fourier transform (FFT) in C:

/* Requires <complex.h>, <math.h>, <stdint.h>, and <string.h>; "complex" here
   stands for C99's double complex, and len must be a power of two. */

void __fft_int(complex* buf, complex* tmp,
               const uint32_t len, const uint32_t step) {

  if (step >= len) return;
  __fft_int(tmp, buf, len, step * 2);
  __fft_int(tmp + step, buf + step, len, step * 2);

  for (uint32_t pos = 0; pos < len; pos += 2 * step) {
    complex t = cexp(-I * M_PI * pos / len) * tmp[pos + step];
    buf[pos / 2] = tmp[pos] + t;
    buf[(pos + len) / 2] = tmp[pos] - t;
  }

}

void in_place_fft(complex* buf, const uint32_t len) {
  complex tmp[len];
  memcpy(tmp, buf, sizeof(tmp));
  __fft_int(buf, tmp, len, 1);
}

Unfortunately, the transform gives us decent output only if the input buffer contains nearly-steady signals; the more change there is in the analysis window, the more smeared and less intelligible the frequency-domain image becomes. This means we can’t just take the entire song, run it through the aforementioned C function, and expect useful results.

Instead, we need to chop up the track into small slices, typically somewhere around 20-100 ms; at a 44.1 kHz sample rate, a 50 ms slice works out to about 2,200 samples. This is long enough for each slice to contain a reasonable number of samples, but short enough to more or less represent a momentary “steady state” of the underlying waveform.

An example of FFT windowing.

If we run the FFT function on each of these windows separately, each output will tell us about the distribution of frequencies in that time slice; we can also string these outputs together into a spectrogram, plotting how frequencies (vertical axis) change over time (horizontal axis):

Audio waveform (top) and its FFT spectrogram view.

Alas, the method isn’t conducive to audio editing: if we make separate frequency-domain changes to each window and then convert the data back to the time domain, there’s no guarantee that the tail end of the reconstituted waveform for window n will still line up perfectly with the front of the waveform for window n + 1. We’re likely to end up with clicks and other audible artifacts where the FFT windows meet.

A clever solution to the problem is to use the Hann function for windowing. In essence, we multiply the waveform in every time slice by the value of y = sin²(t), where t is scaled so that each window begins at zero and ends at t = π. This yields a sinusoidal shape that has a value of zero near the edges of the buffer and peaks at 1 in the middle:

The Hann function for FFT windows.
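As a rough sketch (reusing the float buffer convention from the earlier snippets), applying the window is a single per-sample multiplication:

#include <math.h>

/* Scales one analysis window by the Hann function sin^2(M_PI * i / win_size),
   which is zero at the start of the window and rises to 1 in the middle. */
void apply_hann(float* buf, int win_size) {
  for (int i = 0; i < win_size; i++) {
    float w = sinf(M_PI * i / win_size);
    buf[i] *= w * w;
  }
}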

It’s hard to see how this would help: the consequence of the operation is that the input waveform is attenuated by a repeating sinusoidal pattern, and the same attenuation will carry over to the reconstituted waveform after FFT.

The trick is to also calculate another sequence of “halfway” FFT windows of the same size that overlap the existing ones (second row below):

Adding a second set of FFT windows.

This leaves us with one output waveform that’s attenuated in accordance with the repeating sin² pattern that starts at the beginning of the clip, and another waveform that’s attenuated by an identical sin² pattern shifted by one-half of the cycle. The second pattern can also be written as cos².

With this in mind, we can write the reconstituted waveforms as y₁(t) = sin²(t) · x(t) and y₂(t) = cos²(t) · x(t), where x(t) is the original signal and t is scaled to the window as before.

If we sum these waveforms, we get y₁(t) + y₂(t) = (sin²(t) + cos²(t)) · x(t).

This is where we wheel out the Pythagorean identity, an easily-derived rule that tells us that sin²(x) + cos²(x) = 1 must hold for any x.

In effect, the multiplier in the earlier equation for the summed waveform is always 1; the Hann-induced attenuation cancels out.

At the same time, because the signal at the edges of each FFT window is attenuated to zero, we get rid of the waveform-merging discontinuities. Instead, the transitions between windows are gradual and mask any editing artifacts.
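Putting the pieces together, here’s a rough sketch of the resulting overlap-add scheme under the same assumptions as the earlier snippets; process_window() is a hypothetical stand-in for whatever FFT-based edit we want to apply:

#include <math.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical per-window edit (FFT, frequency-domain tweak, inverse FFT);
   left as a no-op in this sketch. */
static void process_window(float* win, int win_size) {
  (void)win; (void)win_size;
}

/* Hann-windowed overlap-add with a 50% hop. The two interleaved sets of
   windows are weighted by sin^2 and cos^2 of the same argument, so away from
   the clip edges the weights sum to 1 and the output level matches the input. */
void overlap_add(const float* in, float* out, int len, int win_size) {
  int hop = win_size / 2;
  float* win = malloc(win_size * sizeof(float));

  memset(out, 0, len * sizeof(float));

  for (int start = 0; start + win_size <= len; start += hop) {
    for (int i = 0; i < win_size; i++) {
      float w = sinf(M_PI * i / win_size);
      win[i] = in[start + i] * w * w;       /* apply the Hann window */
    }
    process_window(win, win_size);          /* edit the slice */
    for (int i = 0; i < win_size; i++)
      out[start + i] += win[i];             /* overlap-add into the output */
  }

  free(win);
}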

Where was I going with this? Ah, right! With this trick up our sleeve, we can goof around in the frequency domain to — for example — selectively shift the pitch of the vocals in our clip:

Source code for the effect is available here. It’s short and easy to experiment with.

I also spent some time approximating the transform for the dog image. In the first instance, some low-frequency components are shifted to higher FFT bins, causing spurious additional edges to crop up and making Skye look jittery. In the second instance, the bins are moved in the other direction, producing a distinctive type of blur.

PS. Before I get hate mail from DSP folks, I should note that high-quality pitch shifting is usually done in a more complex way. For example, many systems actively track the dominant frequency of the vocal track and add correction for voiceless consonants such as “s”. If you want to go down a massive rabbit hole, this text is a pretty accessible summary.

As for the 20 minutes spent reading this article, you’re not getting that back.


How Did TVs Get So Cheap?


You’ve probably seen this famous graph that breaks out various categories of inflation, showing labor-intensive services getting more expensive during the 21st century and manufactured goods getting less expensive.

One of the standout items is TVs, which have fallen in price more than any other major category on the chart. TVs have gotten so cheap that they’re vastly cheaper than 25 years ago even before adjusting for inflation. In 2001, Best Buy was selling a 50 inch big screen TV on Black Friday for $1100. Today a TV that size will set you back less than $200.

Via Amazon.

The plot below shows the price of TVs across Best Buy’s Black Friday ads for the last 25 years. The units are “dollars per area-pixel”: price divided by the product of screen area and pixel count (with pixel count normalized so that standard definition = 1). This accounts for the fact that bigger, higher-resolution TVs are more expensive. You can see that, in line with the inflation chart, the price per area-pixel has fallen by more than 90%.
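As a back-of-the-envelope illustration of the metric (the 640×480 baseline, the 16:9 geometry, and the example TV below are my assumptions, not necessarily the chart’s exact methodology):

#include <math.h>
#include <stdio.h>

/* Illustrative "dollars per area-pixel" calculation for a hypothetical
   $200, 50-inch 4K TV: price / (screen area * pixel count), with pixel
   count normalized to a 640x480 standard-definition display. */
int main(void) {
  double price = 200.0;
  double diag = 50.0;                         /* diagonal, inches             */
  double w = 16.0, h = 9.0;                   /* aspect ratio                 */
  double scale = diag / sqrt(w * w + h * h);  /* inches per aspect-ratio unit */
  double area = (w * scale) * (h * scale);    /* ~1068 square inches          */
  double pixels = 3840.0 * 2160.0;            /* 4K panel                     */
  double sd_pixels = 640.0 * 480.0;           /* normalization baseline       */
  double metric = price / (area * (pixels / sd_pixels));
  printf("dollars per area-pixel: %.4f\n", metric);   /* roughly 0.007 */
  return 0;
}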

This has prompted folks to wonder, how exactly did a complex manufactured good like the TV get so incredibly cheap?

It was somewhat more difficult than I expected to suss out how TV manufacturing has gotten more efficient over time, possibly because the industry is highly secretive. Nonetheless, I was able to piece together what some of the major drivers of TV cost reduction over the last several decades have been. In short, every major efficiency improving mechanism that I identify in my book is on display when it comes to TV manufacturing.

How an LCD TV works

Since 2000, the story of TVs falling in price is largely the story of liquid crystal display (LCD) TVs going from a niche, expensive technology to a mass-produced and inexpensive one. As late as 2004, LCDs were just 5% of the TV market; by 2018, they were more than 95% of it.

Liquid crystals are molecules that, as their name suggests, form regular, repetitive arrangements (like crystals) even as they remain a liquid. They exhibit two other important characteristics that together can be used to construct a display. First, the molecules can be made to change their orientation when an electric field is applied to them. Second, if polarized light (light oscillating within a single plane) passes through a liquid crystal, its plane of polarization will rotate, with the amount of rotation depending on the orientation of the liquid crystal.

Liquid crystal rotating the plane of polarization of light, via Chemistry LibreTexts.

LCD screens use these phenomena to build a display. Each pixel in an LCD TV contains three cells, which are each filled with liquid crystal and have either a red, green, or blue color filter. Light from behind the screen (provided by a backlight) first passes through a polarizing filter, blocking all light except light within a particular plane. This light then passes through the liquid crystal, altering the light’s plane of polarization, and then through the color filter, which only allows red, green, or blue light to pass. It then passes through another polarizing filter at a perpendicular orientation to the first. This last filter will let different amounts of light through, depending on how much the light’s plane of polarization has been rotated. The result is an array of pixels with varying degrees of red, blue, and green light, which collectively make up the display.

Structure of an LCD screen, via Nano Banana.

On modern LCD TVs, the liquid crystals are combined together with a bunch of semiconductor technology. The backlight is provided by light emitting diodes (LEDs), and the electric field to rotate the liquid crystal within each cell is controlled by a thin-film transistor (TFT) built up directly on the glass surface.

Some LCD screens, known as QLED, use quantum dots in the backlight to provide better picture quality, but these otherwise work very similarly to traditional LCD screens. There are also other types of display technology used for TVs, such as organic LEDs (OLED), that don’t use liquid crystal at all, but today these are still a small (but rising) fraction of total TV sales.

It took decades for LCDs to become the primary technology used for TV screens. LCDs first found use in the 1970s in calculators, then other small electronic devices, then watches. By the 1980s they were being used for small portable TV screens, and then for laptop and computer screens. By the mid-1990s LCDs were displacing cathode ray tube (CRT) computer monitors, and by the early 2000s were being used for larger TVs.

Steadily falling LCD TV Cost

When LCD TVs first appeared, they were an expensive, luxury product. In this 2003 Black Friday ad, Best Buy is selling a 20 inch LCD TV (with “dazzling 640x480 resolution”) for $800. The same ad has a 27 inch CRT TV on sale for $150. (I remember wanting to buy an LCD TV when I went to college in 2003, but settling for a much cheaper CRT).

How did the cost of LCD TVs come down?

LCD TVs start life as a large sheet of extremely clear glass, known as “mother glass”, manufactured by companies like Corning. Layers of semiconductor material are deposited onto this glass and selectively etched away using photolithography, producing the thin film transistors that will be used to control the individual pixels. Once the transistors have been made, the liquid crystal is deposited into individual cells, and the color filter (built up on a separate sheet of glass) is attached. The mother glass is then cut into individual panels, and the rest of the components — polarizing filters, circuit boards, backlights — are added.

LCD manufacturing process, via RJY Display.

A key aspect of this process is that many manufacturing steps are performed on the large sheets of mother glass, before they’ve been cut into individual display panels. And over time, these mother glass sheets have gotten larger and larger. The first “Generation 1” sheets of mother glass were around 12 inches by 16 inches. Today, Generation 10.5 mother glass sheets are 116 by 133 inches, roughly 80 times the area.

Scaling up the size of mother glass sheets has been a major challenge. The larger the sheet of glass, and the larger the size of the display being cut from it, the more important it becomes to eliminate defects and impurities. As a result, manufacturers have had to find ways to keep very large surfaces pristine — LCDs today are manufactured in cleanroom conditions. And larger sheets of glass are more difficult to move. Corning built a mother glass plant right next to a Sharp LCD plant to avoid transportation bottlenecks and allow for increasingly large sheets of mother glass.

However, there are substantial benefits to using larger sheets of glass. Due to geometric scaling effects, it’s more efficient to manufacture LCDs from larger sheets of mother glass, as the cost of the manufacturing equipment rises more slowly than the area of the glass panel. Going from Gen 4 to Gen 5 mother glass sheets reduced the cost per diagonal inch of LCD displays by 50%. From Gen 4 to Gen 8, the equipment costs per unit of LCD panel area fell by 80%. Mother glass scaling effects have, as I understand it, been the largest driver of LCD cost declines.

Via Corning.

LCDs have thus followed a similar path to semiconductor manufacturing, where an important driver of cost reduction has been manufacturers using larger and larger silicon wafers over time. In fact, sheets of mother glass have grown in size much faster than silicon wafers for semiconductor manufacturing:

Via Corning.

LCDs are thus an interesting example of costs falling due to the use of larger and larger batch sizes. Several decades of lean manufacturing and business schools assigning “The Goal” have convinced many folks that you should always aim to reduce batch size, and that the ideal manufacturing process is “one piece flow” where you’re processing a single unit at a time. But as we see in several processes — semiconductor manufacturing, LCD production, container shipping — increasing your batch size can, depending on the nature of your process, result in substantial cost savings.

At the same time, we do see a tendency towards one piece flow at the level of mother glass panels. Early LCD fabs would bundle mother glass sheets together into cassettes, and then move those cassettes through subsequent steps of the manufacturing process. Modern LCD fabs use something much closer to a continuous process, where individual sheets of mother glass move through the process one at a time.

Outside of larger and larger sheets of mother glass, there have been numerous other technology and process improvements that have allowed LCD costs to fall:

  • “Cluster” plasma-enhanced chemical vapor deposition (PECVD) machines were developed in the 1990s for depositing thin film transistor materials. These machines were much faster, and required much less maintenance, than previous machines.

  • Manufacturers have found ways to reduce the number of process steps required to create thin-film transistors. Early operations required eight separate masking steps to build up the transistors. This was eventually reduced to four.

  • Thanks to innovations like moving manufacturing operations into cleanrooms and replacing manual labor with robots, yields have improved. Early LCD manufacturing often operated at 50% yield, where modern operations achieve 90%+ yields.

  • Cutting efficiency – the fraction of a sheet of mother glass that actually goes into a display — has increased, thanks to strategies like Multi-Model Glass, which allows manufacturers to cut displays of different sizes from the same sheet of mother glass.

  • The technology for filling panels with liquid crystal has improved. Until the early 2000s, displays were filled with liquid crystal through capillary action: small gaps were left in the sealant used to create the individual crystal cells, and the liquid crystal would gradually be drawn in through them. It could take hours, or even days, to fill a panel with liquid crystal. The development of the “one drop fill” method — a process in which each cell is filled before the sealant is cured, with UV light then used to cure it — reduced the time required to fill a panel from days to minutes.

  • Glass substrates were gradually made more durable, which reduced defects and allowed for more aggressive, faster etching.

Strategies for LCD manufacturing improvement, circa 2006. Via FPD 2006 Yearbook.

More generally, because LCD manufacturing is very similar to semiconductor manufacturing, the industry has been able to benefit from advances in semiconductor production. (As one industry expert noted in 2005, “[t]he display manufacturing process is a lot like the semiconductor manufacturing process, it’s just simpler and bigger.”) LCD manufacturing has relied on equipment originally developed for semiconductor production (such as steppers for photolithography), and has tapped semiconductor industry expertise for things like minimizing contamination. Some semiconductor manufacturers (like Sharp) later entered the LCD manufacturing market.

LCD manufacturing has also greatly benefitted from economies of scale. A large, modern LCD fab will cost several billion dollars and produce over a million displays a day.1 It’s thanks to the enormous market for LCD screens that these huge, efficient fabs can be justified, and the investment in new, improved process technology can be recouped.

Annual LCD production, via Corning.

Falling LCD costs have also been driven by relentless competition. A 2014 presentation from Corning states that LCD “looks like a 25 year suicide pact for display manufacturers.” Manufacturers have been required to continuously make enormous investments in larger fabs and newer technology, even as profit margins are constantly threatened (and occasionally turn negative). This seems to have partly been driven by countries considering flat panel display manufacturing a strategic priority — the Corning presentation notes that manufacturing investments have been driven by nationalism, and there were various efforts to prop up US LCD manufacturing in the 90s and 2000s for strategic reasons.

Conclusion

Those who have read my book will not find much surprising in the story of TV cost declines. Virtually all the major mechanisms that can drive efficiency improvements — improving technology and overlapping S-curves, economies of scale (including geometric scaling effects), eliminating process steps, reducing variability and improving yield, advancing towards continuous process manufacturing — are on display here.

1. Though many of these will be phone-sized displays.




Zeno’s Paradox resolved by physics, not by math alone


The fastest human in the world, according to the Ancient Greek legend, was the heroine Atalanta. Although she was a famous huntress who joined Jason and the Argonauts in the search for the golden fleece, she was most renowned for the one avenue in which she surpassed all other humans: her speed. While many boasted of how swift or fleet-footed they were, Atalanta outdid them all. No one possessed the capabilities to defeat her in a fair footrace. According to legend, she refused to be wed unless a potential suitor could outrace her, and remained unwed for a very long time. Arguably, if not for the intervention of the Goddess Aphrodite, she would have avoided marriage for the entirety of her life.

Aside from her running exploits, Atalanta was also the inspiration for the first of many similar paradoxes put forth by the ancient philosopher Zeno of Elea about how motion, logically, should be impossible. The argument goes something like this:

  • To go from her starting point to her destination, Atalanta must first travel half of the total distance.
  • To travel the remaining distance, she must first travel half of what’s left over.
  • No matter how small a distance is still left, she must travel half of it, and then half of what’s still remaining, and so on, ad infinitum.
  • With an infinite number of steps required to get there, clearly she can never complete the journey.
  • And hence, Zeno states, motion is impossible: Zeno’s paradox.

While there are many variants of this paradox, this is the closest version to the original that survives. Although it may be obvious what the solution is — that motion is indeed possible — it’s physics, not mathematics alone, that allows us to see how this paradox gets resolved.

A sculpture of Atalanta, the fastest person in the world, running in a race. If not for the trickery of Aphrodite and the allure of the three golden apples, which she stopped to pick up each time her opponent dropped one, nobody would have ever defeated Atalanta in a fair footrace.

Credit: Pierre Lepautre/Jebulon of Wikimedia Commons

The oldest “solution” to the paradox was put forth based on a purely mathematical perspective. The claim admits that, sure, there might be an infinite number of jumps that you’d need to take, but each new jump is smaller than the previous one. Therefore, as long as you could demonstrate that the total sum of every jump you need to take adds up to a finite value, it doesn’t matter how many chunks you divide it into.

For example, if the total journey is defined to be 1 unit (whatever that unit is), then you could get there by adding half after half after half, etc. The series ½ + ¼ + ⅛ + … does indeed converge to 1, so that you eventually cover the entire needed distance if you add an infinite number of terms. You can prove this, cleverly, by subtracting the entire series from double the entire series as follows:

  • (series) = ½ + ¼ + ⅛ + …
  • 2 * (series) = 1 + ½ + ¼ + ⅛ + …
  • therefore, [2 * (series) – (series)] = 1 + (½ + ¼ + ⅛ + …) – (½ + ¼ + ⅛ + …) = 1.

That seems like a simple, straightforward, and compelling solution, doesn’t it?

By continuously halving a quantity, you can show that the sum of each successive half leads to a convergent series: one entire “thing” can be obtained by summing up one half plus one fourth plus one eighth, etc.

Credit: Public Domain

Unfortunately, this “solution” only works if you make certain assumptions about other aspects of the problem that are never explicitly stated. This mathematical line of reasoning is only good enough to robustly show that the total distance Atalanta must travel converges to a finite value, even if it takes her an infinite number of “steps” or “halves” to get there. It doesn’t tell you anything about how long it takes her to reach her destination, or to take that potentially infinite number of steps (or halves) to get there. Unless you make additional assumptions about time, you can’t solve Zeno’s paradox by appealing to the finite distance aspect of the problem.

How is it possible that time could come into play and ruin what appears to be a mathematically elegant and compelling “solution” to Zeno’s paradox?

Because there’s no guarantee that each of the infinite number of jumps you need to take — even to cover a finite distance — occurs in a finite amount of time. If each jump took the same amount of time, for example, regardless of the distance traveled, it would take an infinite amount of time to cover whatever tiny fraction-of-the-journey remains. Under this line of thinking, it might still be impossible for Atalanta to reach her destination.


One of the many representations (and formulations) of Zeno of Elea’s paradox relating to the impossibility of motion. It was only through a physical understanding of distance, time, and their relationship that this paradox was able to be robustly resolved.
Credit: Martin Grandjean/Wikimedia Commons

Many thinkers, both ancient and contemporary, tried to resolve this paradox by invoking the idea of time. One attempt was made a few centuries after Zeno by the legendary mathematician Archimedes, who argued the following:

  • It must take less time to complete a smaller distance jump than it does to complete a larger distance jump.
  • And therefore, if you travel a finite distance, it must take you only a finite amount of time.
  • If that’s true, then Atalanta must finally, eventually reach her destination, and thus, her journey will be complete.

Only, this line of thinking is not necessarily mathematically airtight either. It’s eminently possible that the time it takes to finish each step will still go down: half the original time for the first step, a third of the original time for the next step, a quarter of the original time for the subsequent step, then a fifth of the original time, and so on. However, if that’s the way your “step duration” decreases, then the total journey will actually wind up taking an infinite amount of time. You can check this for yourself by trying to find what the series [½ + ⅓ + ¼ + ⅕ + ⅙ + …] sums to. As it turns out, the limit does not exist: this is a diverging series, and the sum tends toward infinity.

The harmonic series, as shown here, is a classic example of a series where each and every term is smaller than the previous term, but the total series still diverges: i.e., has a sum that tends toward infinity. It is not enough to contend that time jumps get shorter as distance jumps get shorter; a quantitative relationship is necessary.

Credit: Public Domain
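A quick numerical sketch makes the contrast between the two kinds of series concrete; the million-term cutoff here is an arbitrary choice for illustration:

#include <stdio.h>

/* Partial sums of the halving series 1/2 + 1/4 + 1/8 + ... (which approaches 1)
   versus the harmonic-style series 1/2 + 1/3 + 1/4 + ... (which keeps growing). */
int main(void) {
  double halving = 0.0, harmonic = 0.0;
  double term = 0.5;
  for (int n = 2; n <= 1000000; n++) {
    halving += term;        /* 1/2, 1/4, 1/8, ... */
    term /= 2.0;
    harmonic += 1.0 / n;    /* 1/2, 1/3, 1/4, ... */
  }
  printf("halving series:  %.6f\n", halving);   /* ~1.000000             */
  printf("harmonic series: %.6f\n", harmonic);  /* ~13.4, still climbing */
  return 0;
}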

It might seem counterintuitive, but even though Zeno’s paradox was initially conceived as a purely mathematical problem, pure mathematics alone cannot solve it. Mathematics is the most useful tool we have for performing quantitative analysis of any type, but without an understanding of how travel works in our physical reality, it won’t provide a satisfactory solution to the paradox. The reason is simple: the paradox isn’t simply about dividing a finite thing up into an infinite number of parts, but rather about the inherently physical concept of a rate, and specifically of the rate of traversing a distance over a duration of time.

That’s why the way Zeno’s paradox is usually told is a bit misleading: the paradox is posed in terms of distances alone, when it’s really about motion, which is about the amount of distance covered in a specific amount of time. The Greeks had a word for this concept — τάχος — which is where we get modern words like “tachometer” or even “tachyon” from, and it literally means the swiftness of something. To someone like Zeno, however, the concept of τάχος, which most closely equates to velocity, was only known in a qualitative sense. To connect the explicit relationship between distance and velocity, there must be a physical link at some level, and there indeed is: through time.

If anything moves at a constant velocity and you can figure out its velocity vector (magnitude and direction of its motion), you can easily come up with a relationship between distance and time: you will traverse a specific distance in a specific and finite amount of time, depending on what your velocity is. This can be calculated even for non-constant velocities by understanding and incorporating accelerations, as well, as determined by Newton.

Credit: Gordon Vigurs/English Wikipedia

Here in the 21st century, we aren’t bound by the qualitative thought patterns that came along with the word τάχος, so we can simply think in terms of quantities we’re already familiar with:

  • How fast does something move? That’s what its speed is.
  • What happens if you don’t just ask how fast it’s moving, but if you add in which direction it’s moving in? Then that speed suddenly becomes a velocity: a speed plus a direction.
  • And what’s the quantitative definition of velocity, as it relates to distance and time? It’s the overall change in distance divided by the overall change in time.

This makes velocity an example of a concept known as a rate: the amount that one quantity (distance) changes dependent on how another quantity (time) changes as well. You can have a constant velocity (without any acceleration) or you can have a velocity that evolves with time (with either positive or negative acceleration). You can have an instantaneous velocity (i.e., where you measure your velocity at one specific moment in time) or an average velocity (i.e., your velocity over a certain interval, whether a part or the entirety of a journey).

However, if something is in constant motion, then the relationship between distance, velocity, and time becomes very simple: the distance you traverse is simply your velocity multiplied by the time you spend in motion. (Distance = velocity * time.)

When a person moves from one location to another, they are traveling a total amount of distance in a total amount of time. Figuring out the relationship between distance and time quantitatively did not happen until the time of Galileo and Newton, at which point Zeno’s famous paradox was resolved not by mathematics or logic or philosophy, but by a physical understanding of the Universe.

Credit: Public Domain

This is how we arrive at the first correct resolution of the classical “Zeno’s paradox” as commonly stated. The reason that objects can move from one location to another (i.e., travel a finite distance) in a finite amount of time is not only that their velocities are always finite, but that those velocities do not change in time unless acted upon by an outside force: a statement equivalent to Newton’s first law of motion. If you consider a person like Atalanta, who moves at a constant speed, you’ll find that she can cover any distance you choose in a specific amount of time: a time prescribed by the equation that relates distance to velocity, time = distance/velocity.

While this isn’t necessarily the most common form that we encounter Newton’s first law in (e.g., objects at rest remain at rest and objects in motion remain in constant motion unless acted on by an outside force), it very much arises from Newton’s principles when applied to the special case of constant motion. If you consider the total distance you need to travel and then halve the distance that you’re traveling, it will take you only half the time to traverse it as it would to traverse the full distance. To travel (½ + ¼ + ⅛ + …) the total distance you’re trying to cover, it takes you (½ + ¼ + ⅛ + …) the total amount of time to do so. And this works for any distance, no matter how arbitrarily tiny, you seek to cover.

Whether it’s a massive particle or a massless quantum of energy (like light) that’s moving, there’s a straightforward relationship between distance, velocity, and time. If you know how fast your object is going, and if it’s in constant motion, distance and time are directly proportional.

Credit: John D. Norton/University of Pittsburgh

For anyone interested in the physical world, this should be enough to resolve Zeno’s paradox. It works whether space (and time) is continuous or discrete; it works at both a classical level and a quantum level; it doesn’t rely on philosophical or logical assumptions. For objects that move in this Universe, simple Newtonian physics is completely sufficient to solve Zeno’s paradox.

But, as with most things in our classical Universe, if you go down to the quantum level, an entirely new paradox can emerge, known as the quantum Zeno effect. Certain physical phenomena only happen due to the quantum properties of matter and energy, like quantum tunneling through a thin and solid barrier, or the radioactive decay of an unstable atomic nucleus. In order to go from one quantum state to another, your quantum system needs to behave like a wave: with a wavefunction that spreads out over time.

Eventually, the wavefunction will have spread out sufficiently so that there will be a non-zero probability of winding up in a lower-energy quantum state. This is how you can wind up occupying a more energetically favorable state even when there isn’t a classical path that allows you to get there: through the process of quantum tunneling.


By firing a pulse of light at a semi-transparent/semi-reflective thin medium, researchers can measure the time it must take for these photons to tunnel through the barrier to the other side. Although the step of tunneling itself may be instantaneous, the traveling particles are still limited by the speed of light.

Credit: J. Liang, L. Zhu & L.V. Wang, 2018, Light: Science & Applications

Remarkably, there’s a way to inhibit quantum tunneling, and in the most extreme scenario, to prevent it from occurring at all. All you need to do is this: just observe and/or measure the system you’re monitoring before the wavefunction can sufficiently spread out so that it overlaps with a lower-energy state it can occupy. Most physicists are keen to refer to this type of interaction as “collapsing the wavefunction,” as you’re basically causing whatever quantum system you’re measuring to act “particle-like” instead of “wave-like.” But a wavefunction collapse is just one interpretation of what’s happening, and the phenomenon of quantum tunneling is real regardless of your chosen interpretation of quantum physics.

Another — perhaps more general — way of looking at the quantum version of Zeno’s paradox is that you’re restricting the possible quantum states your system can be in through the act of observation and/or measurement. If you make this measurement too close in time to your prior measurement, there will be an infinitesimal (or even a zero) probability of tunneling into your desired state. If you keep your quantum system interacting with the environment, you can suppress the inherently quantum effects, leaving you with only the classical outcomes as possibilities: effectively forbidding quantum tunneling from occurring.


When a quantum particle approaches a barrier, it will most frequently interact with it. But there is a finite probability of not only reflecting off of the barrier, but tunneling through it. If you were to measure the position of the particle continuously, however, including upon its interaction with the barrier, this tunneling effect could be entirely suppressed via the quantum Zeno effect.

Credit: Yuvalr/Wikimedia Commons

The key takeaway is this: motion from one place to another is possible, and the reason it’s possible is because of the explicit physical relationship between distance, velocity, and time. With those relationships in hand — i.e., that the distance you traverse is your velocity multiplied by the duration of time you’re in motion — we can learn exactly how motion occurs in a quantitative sense. Yes, in order to cover the full distance from one location to another, you have to first cover half that distance, then half the remaining distance, then half of what’s left, etc., which would take an infinite number of “halving” steps before you actually reach your destination.

But the time it takes to take each “next step” also halves with respect to each prior step, so motion over a finite distance always takes a finite amount of time for any object in motion. Even today, Zeno’s paradox remains an interesting exercise for mathematicians and philosophers. Not only is the solution reliant on physics, but physicists have even extended it to quantum phenomena, where a new quantum Zeno effect — not a paradox, but a suppression of purely quantum effects — emerges.

For motion, its possibility, and how it occurs in our physical reality, mathematics alone is not enough to arrive at a satisfactory resolution. As in all scientific fields, only the Universe itself can actually be the final arbiter for how reality behaves. Thanks to physics, Zeno’s original paradox gets resolved, and now we can all finally understand exactly how.

A version of this article was first published in February of 2022. It was updated in January of 2026.

This article Zeno’s Paradox resolved by physics, not by math alone is featured on Big Think.


God, I Cannot Wait to Overcomplicate This Spreadsheet


Power on your PCs, my gentle users, because I just found a fresh Excel file to overcomplicate. Hoo boy, I can’t wait to rework every cell of “Company Staffing.xlsx.”

Most peons at this company think a spreadsheet is just a tool to create a budget. Not me. Not us. You see, there’s one of us in every organization. Though it’s nowhere in our job descriptions, we spend hours crafting Gordian knots of obscure Excel features so that even the simplest files become unrecognizable monstrosities.

Before we do anything with these measly kilobytes, we need to duplicate this file. Several times. Then we add an underscore, “NEW,” and a different numbering convention. The filename should evoke the image of an overbaked Feast of Assumption turducken.

There. We’re ready to open “Company Staffing_NEW_FINAL_003.xlsx.”

Sweet mother of Steve Ballmer, we have only three columns here: “Name,” “Hire date,” and “Salary.” Time to really balloon this “dataset.” With one well-spent afternoon, I can 5X this amateur foray into spreadsheet-making and split that puny “Name” column into “First name,” “Nickname,” “Mother’s maiden name,” “Middle name,” and “Surname.” Have to make some educated guesses for most of these values, of course. God, I obfuscate so much for this company.

These greenhorns are so lucky to have a power-user like me. No one asked, but I’m going to add a Pivot Table. Don’t know what that is? It’s just an advanced feature I learned from one of my many yellowed manuals that will make looking at this list feel like lifting Russian nesting dolls. Only under each doll lie increasingly larger horned dolls, with monospaced-font tattoos of VLOOKUP function incantations.

When I’m done, the whole thing will be the spreadsheet-equivalent of a Picasso. Except instead of having one crooked nose, you get twenty misshapen noses along with a bunch of unnecessary ways of sorting and filtering the noses.

Speaking of fine art, the newbie who gave me this canvas didn’t pick a theme. Holy guacamole, I am so excited to click on that “Layout” tab. I’m thinking fuchsia and teal zebra stripes for the row backgrounds. Ah, that’s better.

My coworkers are really going to be late when I fire this baby off in an email two minutes before our next all-hands meeting. They always need a lot of time to process my changes.

Ugh, the boss keeps asking to meet with me and HR. I wish she realized I was deep in the weeds, making this dim doc into Frankenstein’s monster of Excel. She doesn’t even realize that once I’m done, I’ll be the only one who can maintain this thing.

Geez, I almost forgot to freeze one of the columns for no reason. Let’s go with “Hire date.”

Almost done revamping the look of this number dump. But we need more columns. I yearn to see the triple alphanumeric cell name AAB:012 in all its glory.

If I play my cards right, people are going to have to scroll so far horizontally that their wrists cramp from dragging their way across the screen.

Whoa, I found another tab. “Planned Layoffs – Sheet 2.” Hey, why’s my severance pay “#VALUE!”?


Stop anthropomorphizing lines of code.


Elon Musk promised that his social media company X would be “the everything app,” but these days “everything” seems to only include slop, fascist propaganda, and abuse. Increasingly, the social media site has been awash in vulgar and non-consensual sexual images that users are creating with X’s built-in AI tool, Grok. As The Guardian’s Nick Robins-Early wrote:

Many users on X have prompted Grok to generate sexualized, nonconsensual AI-altered versions of images in recent days, in some cases removing people’s clothing without their consent. Musk on Thursday reposted an AI photo of himself in a bikini, captioned with cry-laughing emojis, in a nod to the trend.

And as 404 Media uncovered, the abuse this software is enabling is likely far worse than it appears and is in many ways merely the latest escalation of an online creep problem that’s as old as the internet.

It’s horrendous, from top to bottom, especially for women who are being aggressively targeted by X users just for existing online.

The writer Ketan Joshi picked up on a strange pattern of language and usage in the media coverage of this scandal. Joshi posted a thread on Bluesky gathering examples “of major media outlets falsely anthropomorphising the ‘Grok’ chatbot program and in doing so, actively and directly removing responsibility and accountability from individual people working at X who created a child pornography generator.” The example headlines and articles Joshi found include phrases like “Grok apologizes,” “Grok says,” or “Elon Musk’s AI chatbot blames.” The articles go further in some cases, giving the software agency by quoting it as “writing,” “saying,” and “posting.”

The problem here, as Joshi wrote, is that this framing shifts responsibility away from the people who are using and platforming this software. Implying that the chatbot and image generation program itself is accountable allows people to hide from their own culpability in the bot’s shadow.

This has been a trend in how AI is discussed for a while. The media’s language and framing are often overly deferential to the tech industry’s own marketing hype—imagine blaming a toaster for a burned slice of multigrain just because a salesman assured you about the Bread Safe Smart Sensor™ technology. This tendency to assume that these programs are as capable as we’re being told isn’t unique to AI—think of “smart bombs”—but the trend in usage doesn’t seem to be getting any better.

The word “artificial” in AI is accurate, though. These programs are not natural, they’re human-made artifices conceived, created, and maintained by people. Allowing creators, engineers, and executives to evade accountability for their decisions, just because we imagine that the toasters they made are awake, will only degrade the internet further.

I think 2026 will be the nadir of social media. Without changes, these online platforms will be squeezed into more horrible and unpleasant forms by the pressures of AI maximalists, extractive data miners, and fascistic supporters of a “clicktatorship” who care above all else about creating and curating displays of made-for-TV violence. A better internet is not impossible, though. We can name the people behind these problems, and we can do something about it.

The viral warning that “a computer can never be held accountable” from a 1979 IBM training document has never been more resonant. The problem with Grok and other programs isn’t that it’s escaped containment like Skynet, the problem is more akin to an owner who has let their aggressive dog off its leash.

People who live in a society with you and me are putting these tools to malicious uses. They are people who take time in their day to craft and share abusive images of kids and strangers, and who delight in the pain those images cause. They are people who post slop of themselves next to cry-laughing emojis, desperate to be the funny one for once. They are people who blew off meeting up with friends so they could stay up late into the night to program these tools, who got bored and zoned out in long meetings to discuss implementing this software, and who are right now ignoring texts about why they’re letting the platforms they’re responsible for flood with filth.

None of this is the toaster’s doing. We shouldn’t allow the marketers and their apologists to let those who are really responsible avoid their time in the spotlight.


Distractions That Interrupt Classroom Teaching and Learning (Tony Riehl)


Reducing classroom distractions during a lesson is essential to any definition of effective teaching, much less student learning. With cell phones ubiquitous among students, distractions multiply. What, for example, do some teachers do before or during a lesson to manage cell phone use?

Veteran math teacher Tony Riehl wrote a post on this subject. It appeared May 22, 2017. He has taught high school math courses in Montana for 35 years. I added math teaching blogger Dan Meyer’s comments on Riehl’s post.

I learned early on with cell phones that when you ask a student to hand you their phone, it very often becomes confrontational. A cell phone is a very personal item for some people.

To avoid the confrontation I created a “distraction box” and lumped cell phones in with the many other distractions that students bring to class. These items have changed over time, but include “fast food” toys, bouncy balls, Rubik’s cubes, bobble heads, magic cards, and the hot items now are the fidget cubes and fidget spinners.

A distraction could be a distraction to the individual student, the other students or even a distraction to me. On the first day of the year I explain to my students that if I make eye contact with them and point to the distraction box, they have a choice to make. If they smile and put the item in the box, they can take the item out of the box on the way out of the room. If they throw a fit and put the distraction in the box, they can have it back at the end of the day. If they refuse to put the distraction in the box, they go to the office with the distraction.

On the first day of the year we even practice smiling while we put an item in the box. The interaction is always kept very light and the students really are cooperative. It has been a few years since an interaction actually became confrontational, because I am not asking them to put the item in my hand. I even have students sometimes put their cell phone in the box on the way in the door because they know they are going to have trouble staying focused.

This distraction box concept really has changed the atmosphere of my room. Students understand what a distraction is and why we need to limit distractions….

This Is My Favorite Cell Phone Policy

By Dan Meyer • May 24, 2017

Schools around the world are struggling to integrate modern technology like cell phones into existing instructional routines. Their stances towards that technology range from total proscription – no cell phones allowed from first bell to last – to unlimited usage. Both of those policies seem misguided to me for the same reason: they don’t offer students help, coaching, or feedback in the complex skills of focus and self-regulation.

Enter Tony Riehl’s cell phone policy, which I love for many reasons, not least of which because it isn’t exclusively a cell phone policy. It’s a distractions policy.

What Tony’s “distraction box” does very well:

  • It makes the positive statement that “we’re in class to work with as few distractions as possible.” It isn’t a negative statement about any particular distraction. Great mission statement.
  • Specifically, it doesn’t single out cell phones. The reality is that cell phones are only one kind of technology students will bring to school, and digital technology is only one distractor out of many. Tony notes that “these items have changed over time, but include fast food toys, bouncy balls, Rubik’s cubes, bobble heads, magic cards, and the hot items now are the fidget cubes and fidget spinners.”
  • It acknowledges differences between students. What distracts you might not distract me. My cell phone distracts my learning so it goes in the box. Your cell phone helps you learn so it stays on your desk.
  • It builds rather than erodes the relationship between teachers and students. Cell phone policies often encourage teachers to become detectives and students to learn to evade them. None of this does any good for the working relationship between teachers and students. Meanwhile, Tony describes a policy that has “changed the atmosphere of my room,” a policy in which students and teachers are mutually respected and mutually invested.

