
How your high school affects your chances of UC Admission


The last post looked at how geographic biases affect who applies to and gets accepted by UCs. This post drills into how the admissions policy has evolved at one specific campus, UC San Diego, which has been in the news because 8.5% of enrolled freshmen needed remedial math classes1.

In 2024, Lynbrook High in San Jose was the highest-achieving non-selective high school in the state. 375 seniors (86% of the total) were proficient in both Math and English but only 37 were admitted to UC San Diego. On the other hand, Crawford High, in San Diego, had 38 students admitted even though only 23 (8.6% of the 266 seniors) were proficient in both Math and English. There are literally hundreds of students from Lynbrook who were rejected by UC San Diego despite being stronger than most of those accepted from Crawford.


In the popular imagination, we expect students from high-achieving schools to be more successful in college admissions. This is no longer true, at least at UC San Diego. The relationship between the academic strength of the students at a school, as measured by the percentage who are proficient in both English and Math, and the chance that those students get admitted to UCSD is extremely weak.

As the chart shows, UCSD seems to favor some schools at the expense of others with similar achievement levels. The dot in the top right is CAMS (the California Academy of Mathematics and Science). It’s a selective school where over 90% of seniors met or exceeded the standards in both English and Math and nearly 40% of seniors were admitted to UC San Diego. But Gretchen Whitney and Oxford Academy are also selective schools where over 90% of students were proficient in both English and Math and UCSD admitted fewer than 20% of them. Meanwhile, students from Berkeley High have much greater success than students at most other Bay Area schools despite not being better than them in any measurable way.

Preuss, Gompers, and Crawford are all in the San Diego area (Preuss is on the UCSD campus itself) and all had far more seniors admitted than would be expected based on their academic proficiency. One reason is that far more of their students actually applied than would be expected2. You can’t win a lottery unless you buy a ticket and the kids at these schools bought more tickets to the UCSD admissions lottery.

Competing With Your Classmates

Naturally, schools with lots of strong students produce lots of college applicants. If students were being evaluated primarily on the basis of their individual accomplishments or essays, an applicant’s chances of admission would not be affected by the strength of his or her classmates. Unfortunately, if you are at a school where lots of your classmates are applying to UCSD, your chances of admission are greatly hurt.

Over the three years from 2022 to 2024, UCSD admitted about 26% of all public school applicants. That hides a huge variation. Schools that produce fewer than 25 applicants per year have an average admission rate over 40%. At the 63 schools that produce more than 200 applicants per year, the admission rate was only 18%. Students from those schools are effectively competing with their classmates for a limited number of spots. UCSD just does not want more of them, however good they may be. It may or may not be a coincidence that Asian students are the largest group of applicants at 55 of the 63 schools that produce more than 200 applicants annually.

Incidentally, San Francisco public schools had a combined admission rate of just under 20%, well below the state average. Mission had the highest rate (26.5%) and Balboa the lowest (15.4%). It may or may not be a coincidence that 90% (the most of any SF school) of the applicants from Balboa were Asian whereas only 25% (the fewest of any SF school) of the applicants from Mission were Asian.

Admissions Standards for Private Schools

The limited data we have on the comparative strength of public and private school applicants suggests that private school applicants are likely to be objectively stronger3. Nevertheless, the average private school applicant had only an 18.3% chance of admission, well below the 25.8% average for public school applicants. That seven-and-a-half-point gap means public school applicants had a roughly 40% greater chance of admission (25.8 / 18.3 ≈ 1.4). Whereas nearly 50% (440 out of 906) of public schools had admission rates over 30%, only 2% of private schools did (3 out of 142)4.

Among San Francisco private schools, the highest admission rate (28%) belonged to Immaculate Conception Academy, almost certainly the least celebrated and least selective of all the private high schools in the city. Archbishop Riordan had a higher admission rate than its near neighbor, Lick-Wilmerding, or its more selective peers, St Ignatius and Sacred Heart.

The lowest admission rate of all belongs to one of the most celebrated private schools in San Jose. Bellarmine College Prep has seen only 7.9% of its applicants accepted by UCSD over the last three years. In 2022, only 6 of the 167 applicants were successful, although that rose to 13 and 19 over the next two years, for a total of 38 over the three years. Over the same period, 39 Bellarmine students were named National Merit semifinalists, an achievement that requires scoring in the top 1% of students statewide on the PSAT. For Bellarmine students, it’s harder to get into UC San Diego than to be in the top 1% in the state.

This phenomenon, of more students in the top 1% than are admitted to UC San Diego, is not confined to Bellarmine. It’s true of a number of other private schools I looked at, such as Harvard-Westlake in LA, College Prep in Oakland, and Nueva in San Mateo. The aforementioned Lynbrook High in San Jose was the only public school I could find with the same sorry distinction.

It makes no sense to think of UCSD as evaluating applicants against the rest of the applicant pool. Different standards are clearly being applied depending on the school the applicant attended; it makes more sense to think of applicants as competing with their classmates.

What Really Matters In Admission

The principle behind California’s Local Control Funding Formula (LCFF) is that extra money should be given to schools with lots of high-needs students (defined as those eligible for free or reduced price meals, English learners and those in foster care). The Unduplicated5 Pupil Percentage (UPP) measures the share of a school’s total enrollment that falls into one of those three high-needs categories.

For the sake of perspective, most public high schools in San Francisco have UPPs in the 50%-75% range. SOTA is the only one under 25% while both Lowell and Gateway are under 50%. The only schools above 75% are KIPP and International (which is filled with new arrivals who are still English Learners). Yes, even Mission, O’Connell, and Jordan have UPPs under 75%.

Schools with a UPP of 75% or higher are known as LCFF+ Schools. The California legislature has, since 2017, required that UCs report annually on the number of students from LCFF+ schools who apply to, are admitted to, and enroll at each UC campus. Though not required by law to do so, UC San Diego has decided to give students from such schools a big boost in admissions.

This chart shows the average admission rate by UPP over the period 2022-24. There is a very clear discontinuity. Most of the schools with UPPs above 75% have admission rates above 40% while most of the schools with UPPs below 75% have admission rates well below the UCSD average.

I experimented with a little regression analysis to see what factors might predict a school’s admission rate. As you might expect from the graph above, whether or not a school is an LCFF+ school6 is by far the most important factor. The school’s academic quality, as measured by the share of seniors who were proficient in both Math and English, was also significant but, here’s the kicker, the coefficient was negative. In other words, given two schools that are both LCFF+, the one with the lower proficiency levels will have the higher admission rate7.

A couple of other points:

  • The preference applies at the school level, not at the applicant level. The non-high-needs students in the LCFF+ schools will be the biggest beneficiaries because they are likely to be stronger students than their high-needs classmates. Conversely, about 40% of high-needs students are in non-LCFF+ schools and they are much less likely to gain admission because they will appear weaker than their classmates, even if they are stronger than high-needs students in LCFF+ schools.

  • The fraction of high school students that fall into one or more of the high-needs categories statewide was in the 57%-59% range every year between 2016-17 and 2020-21. It has since risen steadily and reached 65% in 2024-258. As recently as 2021, 35% of high schoolers were in LCFF+ schools. In 2025, it was 45%.

What Has Changed?

It wasn’t always this way. As recently as 2016, a school’s UPP did not affect its average admission rate9.

The number of applications that UC San Diego receives annually from California high school students rose by a huge 57% between 2016 and 2024. All the other UCs experienced jumps of their own. Applications had been rising before the pandemic, but the elimination of the SAT gave them another boost. Early last year, we looked at how the strength of the applicants had changed over time and found that, while applicants are taking more and more honors classes, the biggest increase in applications came from weak students, i.e. those who had taken fewer than five honors classes.

That does not mean UCSD is getting disproportionately more applications from LCFF+ high schools. The mix of applications by high school has been surprisingly stable: the share coming from LCFF+ high schools has consistently been in the 17%-19% range, and the share coming from high schools with UPPs under 25% has consistently been in the 23%-25% range. The UCs received money from the legislature to grow the number of applications from LCFF+ high schools; that their share of applications hasn’t increased demonstrates that the money was not well spent.

However, the different admission standards being applied today have had a significant effect on the composition of the admitted class. Students from LCFF+ high schools used to comprise around 20% of the admitted class. They now comprise over 30%. Meanwhile, the share of admits coming from schools with UPPs below 25% fell by one third, from 24% to 16%.

Winners and Losers

Such a dramatic change in the admissions policy has obviously produced winners and losers. Given the increase in the total number of admits, an increase in the number of admits from LCFF+ schools could have been accomplished without reducing the number of admits from any high school. Nevertheless, hundreds of schools did see a decrease in the number of students admitted to UCSD.

Over the period 2016-18, Lowell High in San Francisco had an average of 115 students admitted to UC San Diego, more than any other high school10. In 2022-24, it had an average of 94 admits per year, a decline of 21 (18%). It was far from the worst affected. Torrey Pines High in San Diego went from an average of 100 admits to an average of just 42, a decline of 58. Monta Vista High in Cupertino went from 104 admits per year down to 48, a decline of 56. Four other high schools also saw their average number of admits decline by 50 or more.

Some schools with lots of high needs students also saw their number of admits decline. Highlighted on the chart are San Gabriel High and Gabrielino High, both of which saw their admits halve from around 50 to around 25 per year. It may or may not be a coincidence that at least 85% of UCSD applicants from both schools were Asian. Evidence that it may be a coincidence is that Bolsa Grande High, another high needs school where 85%+ of the applicants are Asian, saw a big increase in its number of admits.

On the flip side, four schools saw their number of admits increase by at least 50. Granada Hills Charter is now the largest source of admits to UC San Diego, averaging 151 in 2022-24, up from 94 in 2016-18. There were actually a massive 196 admits from Granada Hills in 2024 alone: the average is 151 only because it includes the 74 admits in 2022. Similarly, Eleanor Roosevelt High’s average of 105 is a combination of the 39 admits it had in 2022 and the 147 it had in 2024. Berkeley High saw the biggest increase in absolute terms, up from 50 admits per year to 125. Meanwhile, Birmingham Community Charter went from 18.3 to 71.7, an increase of 291%.

These four schools are all enormous. Berkeley High is the smallest and it has around 800 seniors. Granada Hills has over 1100 seniors. While they are all good schools with lots of strong students, it is unclear why they are so favored by UCSD admissions. Only Birmingham Community Charter is an LCFF+ school. In 2024, sixty-eight high schools produced more graduates who were proficient in both English and Math than Berkeley High. Eleanor Roosevelt’s big increase may have been helped by a conveniently large increase in its UPP. It was between 35% and 40% every year up to 2021 but jumped to 59%, 67%, and 67% in the next three years.

Where do all the students needing remedial help come from?

The big increase in the number of students admitted from LCFF+ high schools coincides exactly with the big increase in students needing remedial help. It is natural, but wrong, to conclude that the former was sufficient to cause the latter.

There are plenty of smart students at LCFF+ high schools. There are more students at those schools who are proficient in both Math and English than apply to UCSD. UCSD accepts about 40% of LCFF+ applicants so, even though every proficient student doesn’t apply, the number of proficient applicants from LCFF+ high schools probably exceeds the number of students admitted from those schools. It should thus be possible to keep the same number of admits but have zero who need remedial classes because all the admits are proficient in both Math and English. That’s not what happens in practice, however, because the UCSD admissions office has no way to know who those proficient students are. Rampant grade inflation means that high school transcripts and GPAs are unable to distinguish strong students from weak. Meanwhile, the UCs have blinded themselves by ignoring SAT scores and paying scant attention to AP exam scores. If you’re picking blindly from all the applicants from Lynbrook High, you’ll probably pick a very strong student because Lynbrook is filled with very strong students. If you’re picking blindly from all the applicants from Crawford High, you may get a competent student but you’re more likely to get someone needing remedial help.

Conclusion

The opaqueness of the whole process is what I find most objectionable. UC San Diego has dramatically changed how it evaluates applicants, yet there is no public record of it announcing that change or describing what its new evaluation rubric is. Students and their families are supposed to trust in the integrity of the holistic review process even though it produces results that appear unjustified using publicly available data.

The UCs receive nearly $5 billion in state funding and get applications from over 100,000 high school seniors every year. Other government-run programs have clear eligibility rules. We deserve a proper accounting of how each UC makes its admissions decisions.

1

There are also practical reasons to pick UCSD as the focus. I want to analyze the changing admission rates by high school. UC only publishes admissions data for high schools that have at least three students admitted to a campus. Many more schools have three or more students admitted to UCSD than to the more selective UCLA or UC Berkeley. On the other hand, UC San Diego is selective enough that it (unlike Riverside and Merced) gets applications from many of the strongest students from each high school. Its admissions results therefore reflect the choices of the admissions office rather than the lottery of who applies.

2

The prediction was based on a linear model with two independent variables, both of which were statistically significant: the percentage of seniors who were proficient in both Math and English; and the school’s Unduplicated Pupil Percentage (see later). I ranked all the high schools by how much the actual number of applications exceeded the predicted number. Of the 1,154 high schools in the dataset, Preuss, Gompers, and Crawford ranked 1, 2, and 30 respectively.

3

In 2019, the last year for which we have this data, private school students passed (i.e. scored 3 or higher) 72% of the AP exams they took while public school students passed 59% of their AP exams.

4

Curiously, two of the three with admission rates over 30% were Armenian schools and they were the only two Armenian schools in the data. You’d get long odds on that occurring by random chance.

5

“Unduplicated” means students are counted only once even if they fall into more than one category, for example, by being an English learner eligible for free meals.

6

I’m no statistics expert but I’m pretty sure it would be a mistake to use the raw UPP in a linear regression model because the discontinuity proves the relationship is non-linear.

7

The geographic location of the high school was also statistically significant at the 95% level and in the expected direction (i.e. positive for schools on the South Coast; negative for schools in the Bay Area) but it improved the R-squared by a tiny fraction (from 0.635 to 0.638). I had a theory that maybe schools that had few applicants would be favored over schools that produced lots of applicants but this turned out not to be significant.

8

Why it has increased so rapidly is an interesting question. I have trouble believing that it is a natural increase. The economy has been doing okay. There has not been an increase in the number of English learners. And yet the share of high needs students in Napa County has apparently risen from 44% in 2019-20 to 65% in 2024-25. I have heard that the introduction of universal school meals might have mucked up the data quality but, if that is true, I don’t know what the mechanism is. If anyone can explain, I would love to be enlightened.

9

An equal admission rate does not imply that applicants from all schools were treated the same back then. The average applicant from a low UPP school or a private school was probably stronger than the average applicant from a high UPP school so an equal admission rate probably reflects a boost for applicants from high UPP schools. But the boost, if it existed, was much smaller than the one in place today.

10

Other schools have higher average scores but, in many years, Lowell has the most students who are proficient in both Math and English on the SBAC and the most students who pass two or more AP exams.




I replaced Windows with Linux and everything’s going great

Screenshot of a Windows-XP-looking theme for KDE Plasma.
Is this… bliss? | Screenshot: Nathan Edwards / The Verge

Greetings from the year of Linux on my desktop.

In November, I got fed up and said screw it, I'm installing Linux. Since that article was published, I have dealt with one minor catastrophe after another. None of that has anything to do with Linux, mind you. It just meant I didn't install it on my desktop until Sunday evening.

My goal here is to see how far I can get using Linux as my main OS without spending a ton of time futzing with it - or even much time researching beforehand. I am not looking for more high-maintenance hobbies at this stage. I want to see if Linux is a wingable alternative to Microsoft's increasingly annoying OS.

Ho …

Read the full story at The Verge.


What a for-credit poker class at MIT reveals about decision-making under uncertainty


In 2012, MIT launched a new course that would turn out to be extremely popular. The official title was 15.S50, but everybody came to know it as the ‘MIT poker class’.

The course was the brainchild of PhD student Will Ma, who was studying Operations Research. He had played a lot of poker – and won a lot of money – while an undergraduate in Canada, which had made a lot of people curious when he arrived at MIT, including the head of department. 15.S50 was a legitimate MIT class; if students passed, they could get degree credit.

I’ve always liked poker, both as a game and as a testing ground for wider ideas relating to uncertainty. Although poker has rules and limits, some key information is always concealed. The same problem crops up in many aspects of life. Negotiations, debates, bargaining; they are all incomplete information games. ‘Poker is a perfect microcosm of many situations we encounter in the real world,’ as Jonathan Schaeffer, a pioneer of game-playing AI, once told me.

I interviewed Ma about the MIT poker class while researching The Perfect Bet. Perhaps understandably for a university course, the sessions focused on theory rather than gambling with real money. Given the relatively limited time available, Ma tried to focus on the steepest part of the learning curve.

Here are some of the key takeaways when it comes to decision-making under uncertainty:

  1. Sometimes the optimal move feels reckless. To succeed in poker, you must be prepared to occasionally take a big calculated risk, or walk away from a situation you’ve sunk resources into. The best actions are often more extreme than a beginner is comfortable with.

  2. Confidence is about recovering from mistakes. In one class, lecturer Jennifer Shahade pointed out that ‘confidence is an underrated ingredient to success in chess and poker’. In particular, confident players expect to make errors and move on rather than let one mistake affect their game.

  3. Most people lose by doing too much. When starting out in poker, it can be easy to get bored with folding and instead play too many hands, when you should be watching and learning about your opponents without taking unnecessary risks.

  4. Your decisions change how others play against you. Shahade noted that in chess, ‘most good players forget who they are playing and focus on the position’. But in poker, ‘monitoring your own image and that of other players is a huge part of the game’.

  5. Judge decisions by how they were made, not how they turned out. As Ma put it, ‘I think one of the things poker teaches you very well is that you can often make a good decision but not get a good result, or make a bad decision and get a good result.’



Cover image: Michał Parzuchowski




See it with your lying ears


For the past couple of weeks, I couldn’t shake off an intrusive thought: raster graphics and audio files are awfully similar — they’re sequences of analog measurements — so what would happen if we apply the same transformations to both?…

Let’s start with downsampling: what if we divide the data stream into buckets of n samples each, and then map the entire bucket to a single, averaged value?

/* Average each win_size-sample window and write the average back over it. */
for (int pos = 0; pos + win_size <= len; pos += win_size) {

  float sum = 0;
  for (int i = 0; i < win_size; i++) sum += buf[pos + i];
  for (int i = 0; i < win_size; i++) buf[pos + i] = sum / win_size;

}

For images, the result is aesthetically pleasing pixel art. But if we do the same audio… well, put your headphones on, you’re in for a treat:

The model for the images is our dog, Skye. The song fragment is a cover of “It Must Have Been Love” performed by Effie Passero.

If you’re familiar with audio formats, you might’ve expected this to sound different: a muffled but neutral rendition associated with low sample rates. Yet, the result of the “audio pixelation” filter is different: it adds unpleasant, metallic-sounding overtones. The culprit is the stairstep pattern in the resulting waveform:

Not great, not terrible.

Our eyes don’t mind the pattern on the computer screen, but the cochlea is a complex mechanical structure that doesn’t measure sound pressure levels per se; instead, it has clusters of different nerve cells sensitive to different sine-wave frequencies. Abrupt jumps in the waveform are perceived as wideband noise that wasn’t present in the original audio stream.

The problem is easy to solve: we can run the jagged waveform through a rolling-average filter, the equivalent of blurring the pixelated image to remove the artifacts:
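For reference, here’s a rough sketch of what that rolling-average pass might look like, written in the same style as the snippets above; the window length smooth_win and the separate out[] buffer are my own choices, not taken from the original post.

/* Moving average over the last smooth_win samples; writing into a
   separate out[] buffer keeps already-smoothed values out of the sum. */
for (int pos = 0; pos < len; pos++) {

  float sum = 0;
  int   cnt = 0;

  for (int i = pos - smooth_win + 1; i <= pos; i++)
    if (i >= 0) { sum += buf[i]; cnt++; }

  out[pos] = sum / cnt;

}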

But this brings up another question: is the effect similar if we keep the original 44.1 kHz sample rate but reduce the bit depth of each sample in the file?

/* Assumes signed int16_t buffer, produces n + 1 levels for even n. */

for (int i = 0; i < len; i++) {

  int div = 32767 / (levels / 2);
  buf[i] = round(((float)buf[i]) / div) * div;

}

The answer is yes and no: because the frequency of the injected errors will be on average much higher, we get hiss instead of squeals:

Also note that the loss of fidelity is far more rapid for audio than for quantized images!

As for the hiss itself, it’s inherent to any attempt to play back quantized audio; it’s why digital-to-analog converters in your computer and audio gear typically need to incorporate some form of lowpass filtering. Your sound card has that, but we injected errors greater than what the circuitry was designed to mask.

But enough with image filters that ruin audio: we can also try some audio filters that ruin images! Let’s start by adding a slightly delayed and attenuated copy of the data stream to itself:

for (int i = shift; i < len; i++)
  buf[i] = (5 * buf[i] + 4 * buf[i - shift]) / 9;

Check it out:

For photos, small offsets result in an unappealing blur, while large offsets produce a weird “double exposure” look. For audio, the approach gives birth to a large and important family of filters. Small delays give the impression of a live performance in a small room; large delays sound like an echo in a large hall. Phase-shifted signals create effects such as “flanger” or “phaser”, a pitch-shifted echo sounds like a chorus, and so on.

So far, we’ve been working in the time domain, but we can also analyze data in the frequency domain; any finite signal can be deconstructed into a sum of sine waves with different amplitudes, phases, and frequencies. The two most common conversion methods are the discrete Fourier transform and the discrete cosine transform, but there are more wacky options to choose from if you’re so inclined.

For images, the frequency-domain view is rarely used for editing because almost all changes tend to produce visual artifacts; the technique is used for compression, feature detection, and noise removal, but not much more; it can be used for sharpening or blurring images, but there are easier ways of doing it without FFT.

For audio, the story is different. For example, the approach makes it fairly easy to build vocoders that modulate the output from other instruments to resemble human speech, or to develop systems such as Auto-Tune, which make out-of-tune singing sound passable.

In the earlier article, I shared a simple implementation of the fast Fourier transform (FFT) in C:

#include <complex.h>  /* complex, cexp(), I */
#include <math.h>     /* M_PI */
#include <stdint.h>   /* uint32_t */
#include <string.h>   /* memcpy() */

/* Recursive radix-2 FFT; len must be a power of two. */
void __fft_int(complex* buf, complex* tmp,
               const uint32_t len, const uint32_t step) {

  if (step >= len) return;
  __fft_int(tmp, buf, len, step * 2);
  __fft_int(tmp + step, buf + step, len, step * 2);

  for (uint32_t pos = 0; pos < len; pos += 2 * step) {
    complex t = cexp(-I * M_PI * pos / len) * tmp[pos + step];
    buf[pos / 2] = tmp[pos] + t;
    buf[(pos + len) / 2] = tmp[pos] - t;
  }

}

void in_place_fft(complex* buf, const uint32_t len) {
  complex tmp[len];
  memcpy(tmp, buf, sizeof(tmp));
  __fft_int(buf, tmp, len, 1);
}
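For what it’s worth, here’s one minimal way the routine might be exercised: transform a single power-of-two window of samples and read out per-bin magnitudes with cabs(). The window size, the float input buffer, and the printing are placeholder choices of mine, not anything from the earlier article.

#include <stdio.h>

#define WIN 1024  /* analysis window; must be a power of two */

/* Prints the magnitude of every FFT bin for one window of samples
   (assumed to be scaled to the -1..1 range). */
void print_spectrum(const float* samples) {

  double complex buf[WIN];
  for (int i = 0; i < WIN; i++) buf[i] = samples[i];

  in_place_fft(buf, WIN);

  /* Bins 0..WIN/2 cover frequencies from 0 up to half the sample rate. */
  for (int k = 0; k <= WIN / 2; k++)
    printf("bin %4d: magnitude %8.4f\n", k, cabs(buf[k]));

}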

Unfortunately, the transform gives us decent output only if the input buffer contains nearly-steady signals; the more change there is in the analysis window, the more smeared and unintelligible the frequency-domain image becomes. This means we can’t just take the entire song, run it through the aforementioned C function, and expect useful results.

Instead, we need to chop up the track into small slices, typically somewhere around 20-100 ms. This is long enough for each slice to contain a reasonable number of samples, but short enough to more or less represent a momentary “steady state” of the underlying waveform.

An example of FFT windowing.

If we run the FFT function on each of these windows separately, each output will tell us about the distribution of frequencies in that time slice; we can also string these outputs together into a spectrogram, plotting how frequencies (vertical axis) change over time (horizontal axis):

Audio waveform (top) and its FFT spectrogram view.

Alas, the method isn’t conducive to audio editing: if we make separate frequency-domain changes to each window and then convert the data back to the time domain, there’s no guarantee that the tail end of the reconstituted waveform for window n will still line up perfectly with the front of the waveform for window n + 1. We’re likely to end up with clicks and other audible artifacts where the FFT windows meet.

A clever solution to the problem is to use the Hann function for windowing. In essence, we multiply the waveform in every time slice by the value of y = sin²(t), where t is scaled so that each window begins at zero and ends at t = π. This yields a sinusoidal shape that has a value of zero near the edges of the buffer and peaks at 1 in the middle:

The Hann function for FFT windows.

It’s hard to see how this would help: the consequence of the operation is that the input waveform is attenuated by a repeating sinusoidal pattern, and the same attenuation will carry over to the reconstituted waveform after FFT.

The trick is to also calculate another sequence of “halfway” FFT windows of the same size that overlap the existing ones (second row below):

Adding a second set of FFT windows.

This leaves us with one output waveform that’s attenuated in accordance with the repeating sin² pattern that starts at the beginning of the clip, and another waveform that’s attenuated by an identical sin² pattern shifted by one-half of the cycle. The second pattern can also be written as cos².

With this in mind, we can write the reconstituted waveforms as:

out₁(t) = sin²(t) · s(t) and out₂(t) = cos²(t) · s(t),

where s(t) is the original signal in that slice. If we sum these waveforms, we get:

out₁(t) + out₂(t) = (sin²(t) + cos²(t)) · s(t)

This is where we wheel out the Pythagorean identity, an easily-derived rule that tells us that the following must hold for any x:

sin²(x) + cos²(x) = 1

In effect, the multiplier in the equation above for the summed waveform is always 1; the Hann-induced attenuation cancels out.

At the same time, because the signal at the edges of each FFT window is attenuated to zero, we get rid of the waveform-merging discontinuities. Instead, the transitions between windows are gradual and mask any editing artifacts.
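As a rough illustration of the idea (not the author’s code), here’s what Hann-windowed analysis with 50%-overlapping slices might look like; buf, out, len, win_size, and process_window() are all placeholders for whatever frequency-domain edit you have in mind.

/* Hann-windowed, 50%-overlap analysis and overlap-add resynthesis.
   out[] must start zeroed; process_window() stands in for an
   FFT -> edit bins -> inverse FFT round trip. */
for (int start = 0; start + win_size <= len; start += win_size / 2) {

  float win[win_size];

  /* Hann window: sin^2(t), with t running from 0 to pi across the slice. */
  for (int i = 0; i < win_size; i++) {
    float w = sinf(M_PI * i / win_size);
    win[i] = buf[start + i] * w * w;
  }

  process_window(win, win_size);

  /* Overlap-add: the sin^2 and cos^2 window sequences sum to unity gain. */
  for (int i = 0; i < win_size; i++) out[start + i] += win[i];

}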

Where was I going with this? Ah, right! With this trick up our sleeve, we can goof around in the frequency domain to — for example — selectively shift the pitch of the vocals in our clip:

Source code for the effect is available here. It’s short and easy to experiment with.

I also spent some time approximating the transform for the dog image. In the first instance, some low-frequency components are shifted to higher FFT bins, causing spurious additional edges to crop up and making Skye look jittery. In the second instance, the bins are moved in the other direction, producing a distinctive type of blur.

PS. Before I get hate mail from DSP folks, I should note that high-quality pitch shifting is usually done in a more complex way. For example, many systems actively track the dominant frequency of the vocal track and add correction for voiceless consonants such as “s”. If you want to go down a massive rabbit hole, this text is a pretty accessible summary.

As for the 20 minutes spent reading this article, you’re not getting that back.





How Did TVs Get So Cheap?


You’ve probably seen this famous graph that breaks out various categories of inflation, showing labor-intensive services getting more expensive during the 21st century and manufactured goods getting less expensive.

One of the standout items is TVs, which have fallen in price more than any other major category on the chart. TVs have gotten so cheap that they’re vastly cheaper than 25 years ago even before adjusting for inflation. In 2001, Best Buy was selling a 50 inch big screen TV on Black Friday for $1100. Today a TV that size will set you back less than $200.

Via Amazon.

The plot below shows the price of TVs across Best Buy’s Black Friday ads for the last 25 years. The units are “dollars per area-pixel”: price divided by the product of screen area and pixel count, with the pixel count normalized so that standard definition = 1. This accounts for the fact that bigger, higher-resolution TVs are more expensive. You can see that, in line with the inflation chart, the price per area-pixel has fallen by more than 90%.
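As a concrete (and hypothetical) illustration of how that metric works, here is a small sketch in C. The 640×480 standard-definition baseline, the aspect ratios, and the example resolutions are my assumptions for illustration; the prices are the Black Friday figures mentioned above.

#include <math.h>
#include <stdio.h>

/* Sketch of the "dollars per area-pixel" metric: price divided by
   (screen area x pixel count), with the pixel count normalized so
   that standard definition (assumed 640x480) equals 1. */

static double screen_area_sq_in(double diag_in, double asp_w, double asp_h) {
  double hyp = sqrt(asp_w * asp_w + asp_h * asp_h);
  return (diag_in * asp_w / hyp) * (diag_in * asp_h / hyp);
}

static double dollars_per_area_pixel(double price, double diag_in,
                                     double asp_w, double asp_h,
                                     double px_w, double px_h) {
  double pixel_factor = (px_w * px_h) / (640.0 * 480.0);
  return price / (screen_area_sq_in(diag_in, asp_w, asp_h) * pixel_factor);
}

int main(void) {
  /* 2001: $1100 for a 50-inch standard-definition set (4:3 assumed). */
  printf("2001: %.3f dollars per area-pixel\n",
         dollars_per_area_pixel(1100, 50, 4, 3, 640, 480));

  /* Today: under $200 for a 50-inch 4K set (16:9 assumed). */
  printf("2024: %.4f dollars per area-pixel\n",
         dollars_per_area_pixel(200, 50, 16, 9, 3840, 2160));
  return 0;
}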

This has prompted folks to wonder, how exactly did a complex manufactured good like the TV get so incredibly cheap?

It was somewhat more difficult than I expected to suss out how TV manufacturing has gotten more efficient over time, possibly because the industry is highly secretive. Nonetheless, I was able to piece together what some of the major drivers of TV cost reduction over the last several decades have been. In short, every major efficiency improving mechanism that I identify in my book is on display when it comes to TV manufacturing.

How an LCD TV works

Since 2000, the story of TVs falling in price is largely the story of liquid crystal display (LCD) TVs going from a niche, expensive technology to a mass-produced and inexpensive one. As late as 2004, LCDs were just 5% of the TV market; by 2018, they were more than 95% of it.

Liquid crystals are molecules that, as their name suggests, form regular, repetitive arrangements (like crystals) even as they remain a liquid. They exhibit two other important characteristics that together can be used to construct a display. First, the molecules can be made to change their orientation when an electric field is applied to them. Second, if polarized light (light oscillating within a single plane) passes through a liquid crystal, its plane of polarization will rotate, with the amount of rotation depending on the orientation of the liquid crystal.

Liquid crystal rotating the plane of polarization of light, via Chemistry LibreTexts.

LCD screens use these phenomena to build a display. Each pixel in an LCD TV contains three cells, which are each filled with liquid crystal and have either a red, green, or blue color filter. Light from behind the screen (provided by a backlight) first passes through a polarizing filter, blocking all light except light within a particular plane. This light then passes through the liquid crystal, altering the light’s plane of polarization, and then through the color filter, which only allows red, green, or blue light to pass. It then passes through another polarizing filter at a perpendicular orientation to the first. This last filter will let different amounts of light through, depending on how much its plane of polarization has been rotated. The result is an array of pixels with varying degrees of red, blue, and green light, which collectively make up the display.

Structure of an LCD screen, via Nano Banana.

On modern LCD TVs, the liquid crystals are combined together with a bunch of semiconductor technology. The backlight is provided by light emitting diodes (LEDs), and the electric field to rotate the liquid crystal within each cell is controlled by a thin-film transistor (TFT) built up directly on the glass surface.

Some LCD screens, known as QLED, use quantum dots in the backlight to provide better picture quality, but these otherwise work very similarly to traditional LCD screens. There are also other types of display technology used for TVs, such as organic LEDs (OLED), that don’t use liquid crystal at all, but today these are still a small (but rising) fraction of total TV sales.

It took decades for LCDs to become the primary technology used for TV screens. LCDs first found use in the 1970s in calculators, then other small electronic devices, then watches. By the 1980s they were being used for small portable TV screens, and then for laptop and computer screens. By the mid-1990s LCDs were displacing cathode ray tube (CRT) computer monitors, and by the early 2000s were being used for larger TVs.

Steadily falling LCD TV Cost

When LCD TVs first appeared, they were an expensive, luxury product. In this 2003 Black Friday ad, Best Buy is selling a 20 inch LCD TV (with “dazzling 640x480 resolution”) for $800. The same ad has a 27 inch CRT TV on sale for $150. (I remember wanting to buy an LCD TV when I went to college in 2003, but settling for a much cheaper CRT).

How did the cost of LCD TVs come down?

LCD TVs start life as a large sheet of extremely clear glass, known as “mother glass”, manufactured by companies like Corning. Layers of semiconductor material are deposited onto this glass and selectively etched away using photolithography, producing the thin film transistors that will be used to control the individual pixels. Once the transistors have been made, the liquid crystal is deposited into individual cells, and the color filter (built up on a separate sheet of glass) is attached. The mother glass is then cut into individual panels, and the rest of the components — polarizing filters, circuit boards, backlights — are added.

LCD manufacturing process, via RJY Display.

A key aspect of this process is that many manufacturing steps are performed on the large sheets of mother glass, before it’s been cut into individual display panels. And over time, these mother glass sheets have gotten larger and larger. The first “Generation 1” sheets of mother glass were around 12 inches by 16 inches. Today, Generation 10.5 mother glass sheets are 116 by 133 inches, nearly 100 times as large.

Scaling up the size of mother glass sheets has been a major challenge. The larger the sheet of glass, and the larger the size of the display being cut from it, the more important it becomes to eliminate defects and impurities. As a result, manufacturers have had to find ways to keep very large surfaces pristine — LCDs today are manufactured in cleanroom conditions. And larger sheets of glass are more difficult to move. Corning built a mother glass plant right next to a Sharp LCD plant to avoid transportation bottlenecks and allow for increasingly large sheets of mother glass.

However, there are substantial benefits to using larger sheets of glass. Due to geometric scaling effects, it’s more efficient to manufacture LCDs from larger sheets of mother glass, as the cost of the manufacturing equipment rises more slowly than the area of the glass panel. Going from Gen 4 to Gen 5 mother glass sheets reduced the cost per diagonal inch of LCD displays by 50%. From Gen 4 to Gen 8, the equipment costs per unit of LCD panel area fell by 80%. Mother glass scaling effects have, as I understand it, been the largest driver of LCD cost declines.

Via Corning.

LCDs have thus followed a similar path to semiconductor manufacturing, where an important driver of cost reduction has been manufacturers using larger and larger silicon wafers over time. In fact, sheets of mother glass have grown in size much faster than silicon wafers for semiconductor manufacturing:

Via Corning.

LCDs are thus an interesting example of costs falling due to the use of larger and larger batch sizes. Several decades of lean manufacturing and business schools assigning “The Goal” have convinced many folks that you should always aim to reduce batch size, and that the ideal manufacturing process is “one piece flow” where you’re processing a single unit at a time. But as we see in several processes — semiconductor manufacturing, LCD production, container shipping — increasing your batch size can, depending on the nature of your process, result in substantial cost savings.

At the same time, we do see a tendency towards one piece flow at the level of mother glass panels. Early LCD fabs would bundle mother glass sheets together into cassettes, and then move those cassettes through subsequent steps of the manufacturing process. Modern LCD fabs use something much closer to a continuous process, where individual sheets of mother glass move through the process one at a time.

Outside of larger and larger sheets of mother glass, there have been numerous other technology and process improvements that have allowed LCD costs to fall:

  • “Cluster” plasma-enhanced chemical vapor deposition (PECVD) machines were developed in the 1990s for depositing thin film transistor materials. These machines were much faster, and required much less maintenance, than previous machines.

  • Manufacturers have found ways to reduce the number of process steps required to create thin-film transistors. Early operations required eight separate masking steps to build up the transistors. This was eventually reduced to four.

  • Thanks to innovations like moving manufacturing operations into cleanrooms and replacing manual labor with robots, yields have improved. Early LCD manufacturing often operated at 50% yield, whereas modern operations achieve 90%+ yields.

  • Cutting efficiency – the fraction of a sheet of mother glass that actually goes into a display — has increased, thanks to strategies like Multi-Model Glass, which allows manufacturers to cut displays of different sizes from the same sheet of mother glass.

  • The technology for filling panels with liquid crystal has improved. Until the early 2000s, displays were filled with liquid crystal through capillary action: small gaps were left in the sealant used to create the individual crystal cells, which the liquid crystal would gradually be drawn into. It could take hours, or even days, to fill a panel with liquid crystal. The development of the “one drop fill” method — a process in which each cell is filled before the sealant is cured, and UV light is then used to cure the sealant — reduced the time required to fill a panel from days to minutes.

  • Glass substrates were gradually made more durable, which reduced defects and allowed for more aggressive, faster etching.

Strategies for LCD manufacturing improvement, circa 2006. Via FPD 2006 Yearbook.

More generally, because LCD manufacturing is very similar to semiconductor manufacturing, the industry has been able to benefit from advances in semiconductor production. (As one industry expert noted in 2005, “[t]he display manufacturing process is a lot like the semiconductor manufacturing process, it’s just simpler and bigger.”) LCD manufacturing has relied on equipment originally developed for semiconductor production (such as steppers for photolithography), and has tapped semiconductor industry expertise for things like minimizing contamination. Some semiconductor manufacturers (like Sharp) later entered the LCD manufacturing market.

LCD manufacturing has also greatly benefitted from economies of scale. A large, modern LCD fab will cost several billion dollars and produce over a million displays a day.1 It’s thanks to the enormous market for LCD screens that these huge, efficient fabs can be justified, and the investment in new, improved process technology can be recouped.

Annual LCD production, via Corning.

Falling LCD costs have also been driven by relentless competition. A 2014 presentation from Corning states that LCD “looks like a 25 year suicide pact for display manufacturers.” Manufacturers have been required to continuously make enormous investments in larger fabs and newer technology, even as profit margins are constantly threatened (and occasionally turn negative). This seems to have partly been driven by countries considering flat panel display manufacturing a strategic priority — the Corning presentation notes that manufacturing investments have been driven by nationalism, and there were various efforts to prop up US LCD manufacturing in the 90s and 2000s for strategic reasons.

Conclusion

Those who have read my book will not find much surprising in the story of TV cost declines. Virtually all the major mechanisms that can drive efficiency improvements — improving technology and overlapping S-curves, economies of scale (including geometric scaling effects), eliminating process steps, reducing variability and improving yield, advancing towards continuous process manufacturing — are on display here.

1

Though many of these will be phone-sized displays.




Zeno’s Paradox resolved by physics, not by math alone


The fastest human in the world, according to the Ancient Greek legend, was the heroine Atalanta. Although she was a famous huntress who joined Jason and the Argonauts in the search for the golden fleece, she was most renowned for the one avenue in which she surpassed all other humans: her speed. While many boasted of how swift or fleet-footed they were, Atalanta outdid them all. No one possessed the capabilities to defeat her in a fair footrace. According to legend, she refused to be wed unless a potential suitor could outrace her, and remained unwed for a very long time. Arguably, if not for the intervention of the Goddess Aphrodite, she would have avoided marriage for the entirety of her life.

Aside from her running exploits, Atalanta was also the inspiration for the first of many similar paradoxes put forth by the ancient philosopher Zeno of Elea about how motion, logically, should be impossible. The argument goes something like this:

  • To go from her starting point to her destination, Atalanta must first travel half of the total distance.
  • To travel the remaining distance, she must first travel half of what’s left over.
  • No matter how small a distance is still left, she must travel half of it, and then half of what’s still remaining, and so on, ad infinitum.
  • With an infinite number of steps required to get there, clearly she can never complete the journey.
  • And hence, Zeno states, motion is impossible: Zeno’s paradox.

While there are many variants to this paradox, this is the closest version to the original that survives. Although it may be obvious what the solution is — that motion is indeed possible — it’s physics, not merely mathematics alone, that allows us to see how this paradox gets resolved.

A sculpture of Atalanta, the fastest person in the world, running in a race. If not for the trickery of Aphrodite and the allure of the three golden apples, which she stopped to pick up each time her opponent dropped one, nobody would have ever defeated Atalanta in a fair footrace.

Credit: Pierre Lepautre/Jebulon of Wikimedia Commons

The oldest “solution” to the paradox was put forth based on a purely mathematical perspective. The claim admits that, sure, there might be an infinite number of jumps that you’d need to take, but each new jump gets smaller and smaller than the previous one. Therefore, as long as you could demonstrate that the total sum of every jump you need to take adds up to a finite value, it doesn’t matter how many chunks you divide it into.

For example, if the total journey is defined to be 1 unit (whatever that unit is), then you could get there by adding half after half after half, etc. The series ½ + ¼ + ⅛ + … does indeed converge to 1, so that you eventually cover the entire needed distance if you add an infinite number of terms. You can prove this, cleverly, by subtracting the entire series from double the entire series as follows:

  • (series) = ½ + ¼ + ⅛ + …
  • 2 * (series) = 1 + ½ + ¼ + ⅛ + …
  • therefore, [2 * (series) – (series)] = 1 + (½ + ¼ + ⅛ + …) – (½ + ¼ + ⅛ + …) = 1.
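Written more compactly: if S = ½ + ¼ + ⅛ + …, then 2S = 1 + ½ + ¼ + ⅛ + … = 1 + S, and subtracting S from both sides leaves S = 1.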

That seems like a simple, straightforward, and compelling solution, doesn’t it?

By continuously halving a quantity, you can show that the sum of each successive half leads to a convergent series: one entire “thing” can be obtained by summing up one half plus one fourth plus one eighth, etc.

Credit: Public Domain

Unfortunately, this “solution” only works if you make certain assumptions about other aspects of the problem that are never explicitly stated. This mathematical line of reasoning is only good enough to robustly show that the total distance Atalanta must travel converges to a finite value, even if it takes her an infinite number of “steps” or “halves” to get there. It doesn’t tell you anything about how long it takes her to reach her destination, or to take that potentially infinite number of steps (or halves) to get there. Unless you make additional assumptions about time, you can’t solve Zeno’s paradox by appealing to the finite distance aspect of the problem.

How is it possible that time could come into play and ruin what appears to be a mathematically elegant and compelling “solution” to Zeno’s paradox?

Because there’s no guarantee that each of the infinite number of jumps you need to take — even to cover a finite distance — occurs in a finite amount of time. If each jump took the same amount of time, for example, regardless of the distance traveled, it would take an infinite amount of time to cover whatever tiny fraction-of-the-journey remains. Under this line of thinking, it might still be impossible for Atalanta to reach her destination.


One of the many representations (and formulations) of Zeno of Elea’s paradox relating to the impossibility of motion. It was only through a physical understanding of distance, time, and their relationship that this paradox was able to be robustly resolved.
Credit: Martin Grandjean/Wikimedia Commons

Many thinkers, both ancient and contemporary, tried to resolve this paradox by invoking the idea of time. One attempt was made a few centuries after Zeno by the legendary mathematician Archimedes, who argued the following.

  • It must take less time to complete a smaller distance jump than it does to complete a larger distance jump.
  • And therefore, if you travel a finite distance, it must take you only a finite amount of time.
  • If that’s true, then Atalanta must finally, eventually reach her destination, and thus, her journey will be complete.

Only, this line of thinking is not necessarily mathematically airtight either. It’s eminently possible that the time it takes to finish each step will still go down: half the original time for the first step, a third of the original time for the next step, a quarter of the original time for the subsequent step, then a fifth of the original time, and so on. However, if that’s the way your “step duration” decreases, then the total journey will actually wind up taking an infinite amount of time. You can check this for yourself by trying to find what the series [½ + ⅓ + ¼ + ⅕ + ⅙ + …] sums to. As it turns out, the limit does not exist: this is a diverging series, and the sum tends toward infinity.
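To see why that sum blows up, group the terms: ⅓ + ¼ is already more than ½; the next four terms, ⅕ + ⅙ + ⅐ + ⅛, are each at least ⅛ and so together exceed ½; the next eight terms exceed ½ again; and so on forever. The total therefore outgrows ½ + ½ + ½ + …, which is why “each step takes less time than the last” is not, on its own, enough to guarantee a finite journey time.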

The harmonic series, as shown here, is a classic example of a series where each and every term is smaller than the previous term, but the total series still diverges: i.e., has a sum that tends toward infinity. It is not enough to contend that time jumps get shorter as distance jumps get shorter; a quantitative relationship is necessary.

Credit: Public Domain

It might seem counterintuitive, but even though Zeno’s paradox was initially conceived as a purely mathematical problem, pure mathematics alone cannot solve it. Mathematics is the most useful tool we have for performing quantitative analysis of any type, but without an understanding of how travel works in our physical reality, it won’t provide a satisfactory solution to the paradox. The reason is simple: the paradox isn’t simply about dividing a finite thing up into an infinite number of parts, but rather about the inherently physical concept of a rate, and specifically of the rate of traversing a distance over a duration of time.

That’s why it’s a bit misleading to hear Zeno’s paradox: the paradox is usually posed in terms of distances alone, when it’s really about motion, which is about the amount of distance covered in a specific amount of time. The Greeks had a word for this concept — τάχος — which is where we get modern words like “tachometer” or even “tachyon” from, and it literally means the swiftness of something. To someone like Zeno, however, the concept of τάχος, which most closely equates to velocity, was only known in a qualitative sense. To connect the explicit relationship between distance and velocity, there must be a physical link at some level, and there indeed is: through time.

If anything moves at a constant velocity and you can figure out its velocity vector (magnitude and direction of its motion), you can easily come up with a relationship between distance and time: you will traverse a specific distance in a specific and finite amount of time, depending on what your velocity is. This can be calculated even for non-constant velocities by understanding and incorporating accelerations, as well, as determined by Newton.

Credit: Gordon Vigurs/English Wikipedia

Because we’re not bound by the qualitative thought patterns that would’ve come along with the mention of the word τάχος here in the 21st century, we can simply think about a variety of terms that we are familiar with.

  • How fast does something move? That’s what its speed is.
  • What happens if you don’t just ask how fast it’s moving, but if you add in which direction it’s moving in? Then that speed suddenly becomes a velocity: a speed plus a direction.
  • And what’s the quantitative definition of velocity, as it relates to distance and time? It’s the overall change in distance divided by the overall change in time.

This makes velocity an example of a concept known as a rate: the amount that one quantity (distance) changes dependent on how another quantity (time) changes as well. You can have a constant velocity (without any acceleration) or you can have a velocity that evolves with time (with either positive or negative acceleration). You can have an instantaneous velocity (i.e., where you measure your velocity at one specific moment in time) or an average velocity (i.e., your velocity over a certain interval, whether a part or the entirety of a journey).

However, if something is in constant motion, then the relationship between distance, velocity, and time becomes very simple: the distance you traverse is simply your velocity multiplied by the time you spend in motion. (Distance = velocity * time.)

When a person moves from one location to another, they are traveling a total amount of distance in a total amount of time. Figuring out the relationship between distance and time quantitatively did not happen until the time of Galileo and Newton, at which point Zeno’s famous paradox was resolved not by mathematics or logic or philosophy, but by a physical understanding of the Universe.

Credit: Public Domain

This is how we arrive at the first correct resolution of the classical “Zeno’s paradox” as commonly stated. The reason that objects can move from one location to another (i.e., travel a finite distance) in a finite amount of time is not only because their velocities are always finite, but also because they do not change in time unless acted upon by an outside force: a statement equivalent to Newton’s first law of motion. If you consider a person like Atalanta, who moves at a constant speed, you’ll find that she can cover any distance you choose in a specific amount of time: a time prescribed by the equation that relates distance to velocity, time = distance/velocity.

While this isn’t necessarily the most common form that we encounter Newton’s first law in (e.g., objects at rest remain at rest and objects in motion remain in constant motion unless acted on by an outside force), it very much arises from Newton’s principles when applied to the special case of constant motion. If you consider the total distance you need to travel and then halve the distance that you’re traveling, it will take you only half the time to traverse it as it would to traverse the full distance. To travel (½ + ¼ + ⅛ + …) the total distance you’re trying to cover, it takes you (½ + ¼ + ⅛ + …) the total amount of time to do so. And this works for any distance, no matter how arbitrarily tiny, you seek to cover.
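Put numerically: if Atalanta runs at a constant speed v toward a destination a distance d away, the first “halving” step takes (d/2)/v, the next (d/4)/v, the next (d/8)/v, and so on. The total time is (d/v) × (½ + ¼ + ⅛ + …) = d/v, a perfectly finite number, and exactly the answer you get from time = distance/velocity applied to the whole journey.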

Whether it’s a massive particle or a massless quantum of energy (like light) that’s moving, there’s a straightforward relationship between distance, velocity, and time. If you know how fast your object is going, and if it’s in constant motion, distance and time are directly proportional.

Credit: John D. Norton/University of Pittsburgh

For anyone interested in the physical world, this should be enough to resolve Zeno’s paradox. It works whether space (and time) is continuous or discrete; it works at both a classical level and a quantum level; it doesn’t rely on philosophical or logical assumptions. For objects that move in this Universe, simple Newtonian physics is completely sufficient to solve Zeno’s paradox.

But, as with most things in our classical Universe, if you go down to the quantum level, an entirely new paradox can emerge, known as the quantum Zeno effect. Certain physical phenomena only happen due to the quantum properties of matter and energy, like quantum tunneling through a thin and solid barrier, or the radioactive decay of an unstable atomic nucleus. In order to go from one quantum state to another, your quantum system needs to behave like a wave: with a wavefunction that spreads out over time.

Eventually, the wavefunction will have spread out sufficiently so that there will be a non-zero probability of winding up in a lower-energy quantum state. This is how you can wind up occupying a more energetically favorable state even when there isn’t a classical path that allows you to get there: through the process of quantum tunneling.


By firing a pulse of light at a semi-transparent/semi-reflective thin medium, researchers can measure the time it must take for these photons to tunnel through the barrier to the other side. Although the step of tunneling itself may be instantaneous, the traveling particles are still limited by the speed of light.

Credit: J. Liang, L. Zhu & L.V. Wang, 2018, Light: Science & Applications

Remarkably, there’s a way to inhibit quantum tunneling, and in the most extreme scenario, to prevent it from occurring at all. All you need to do is this: just observe and/or measure the system you’re monitoring before the wavefunction can sufficiently spread out so that it overlaps with a lower-energy state it can occupy. Most physicists are keen to refer to this type of interaction as “collapsing the wavefunction,” as you’re basically causing whatever quantum system you’re measuring to act “particle-like” instead of “wave-like.” But a wavefunction collapse is just one interpretation of what’s happening, and the phenomenon of quantum tunneling is real regardless of your chosen interpretation of quantum physics.

Another — perhaps more general — way of looking at the quantum version of Zeno’s paradox is that you’re restricting the possible quantum states your system can be in through the act of observation and/or measurement. If you make this measurement too close in time to your prior measurement, there will be an infinitesimal (or even a zero) probability of tunneling into your desired state. If you keep your quantum system interacting with the environment, you can suppress the inherently quantum effects, leaving you with only the classical outcomes as possibilities: effectively forbidding quantum tunneling from occurring.


When a quantum particle approaches a barrier, it will most frequently interact with it. But there is a finite probability of not only reflecting off of the barrier, but tunneling through it. If you were to measure the position of the particle continuously, however, including upon its interaction with the barrier, this tunneling effect could be entirely suppressed via the quantum Zeno effect.

Credit: Yuvalr/Wikimedia Commons

The key takeaway is this: motion from one place to another is possible, and the reason it’s possible is because of the explicit physical relationship between distance, velocity, and time. With those relationships in hand — i.e., that the distance you traverse is your velocity multiplied by the duration of time you’re in motion — we can learn exactly how motion occurs in a quantitative sense. Yes, in order to cover the full distance from one location to another, you have to first cover half that distance, then half the remaining distance, then half of what’s left, etc., which would take an infinite number of “halving” steps before you actually reach your destination.

But the time it takes to take each “next step” also halves with respect to each prior step, so motion over a finite distance always takes a finite amount of time for any object in motion. Even today, Zeno’s paradox remains an interesting exercise for mathematicians and philosophers. Not only is the solution reliant on physics, but physicists have even extended it to quantum phenomena, where a new quantum Zeno effect — not a paradox, but a suppression of purely quantum effects — emerges.

For motion, its possibility, and how it occurs in our physical reality, mathematics alone is not enough to arrive at a satisfactory resolution. As in all scientific fields, only the Universe itself can actually be the final arbiter for how reality behaves. Thanks to physics, Zeno’s original paradox gets resolved, and now we can all finally understand exactly how.

A version of this article was first published in February of 2022. It was updated in January of 2026.

