
Will AI Transform Teaching and Learning?


Recently, I was invited to be part of a five-member panel at Google to discuss the impact that AI will have on teaching and learning in schools. My fellow panelists were drawn from the technology sector. As a historian of schooling and veteran high school teacher, I was expected to offer a brief perspective on previous technological innovations that had entered classrooms. Here is what I said to the 200 participants:

Over the past century, every technology introduced to improve teaching and learning has been hyped as “revolutionary” and “transformational.”

Consider this list:

*Radios in classrooms

*16mm movies

*Overhead Projectors

*Instructional television

*Videocassettes

*1:1 laptops

*Interactive Whiteboards

That inflated vocabulary, promising that each new classroom technology would trigger sweeping changes in teaching and learning, continues in 2024.

In speaking of AI recently, the Dean of Stanford’s Graduate School of Education said:

“[This Technology] is a game-changer for education – it offers the prospect of universal access to high-quality learning experiences, and it creates fundamentally new ways of teaching.”

Yet there is little evidence that classroom use of these previous technologies forced classroom teachers to rethink, much less reshape, their instruction. Nor have I found convincing evidence that these technologies altered fundamentally how teachers teach, increased student engagement, or raised test scores.

So I have concluded that those pushing AI use in classrooms fail to understand the complexity of teaching.*

Why do I say that?

Promoters of AI attended public and private schools for nearly two decades and sat at desks a few feet away from their teachers. Such familiarity encouraged AI advocates to think that they knew thoroughly what teaching was like and how it was done. That familiarity trapped them into misunderstanding the sheer complexity of teaching, especially the cordial relationships that teachers must build with their students.

Anyone who has taught at least three to five years appreciates the extensive knowledge, skills, and emotional connections needed to get kindergartners, fifth graders, or high school seniors to learn. By “emotional connections,” I mean that building relationships with individual students and with the class as a whole is paramount in getting students to learn. Boosters of AI, for example, seldom mention that a teacher-student relationship is unlike a student-machine connection.

Teaching, then, is not a mechanical act of connecting dots. Teaching is a complex act that requires knowledge of subject matter, managerial skills, and emotional labor. Often, it is improvisational. Most important, however, is that teaching requires gaining students’ trust. It is both an art and a science that takes years to master.

This brief history of hyped-up technological innovations previously adopted by public schools, and the lack of causal links between these technologies and changes in how teachers teach or students learn, may feel like I am raining on the parade of AI promoters. So be it.

While I believe AI will not force practitioners to rethink how they teach, they will, as so many teachers have done in the past, adapt it to fit the contours of their classrooms. If so, AI may become just another item on the list of previously hyped technological innovations that evoked initial gasps of delight and slowly became part of many teachers’ repertoires.

Or maybe AI in classrooms will become just a footnote in a future doctoral student’s dissertation. Too early to say.

——————

* Beyond their incomplete understanding of the complexity of teaching, perhaps those who create these tools are also driven by the simple fact that the U.S. public schools market is large (i.e., nearly 50 million students and three million teachers) and lucrative. Further, schools are just as vulnerable to technological fads as women’s fashions, deodorants, and automobile styling.




Science is a strong-link problem

Photo cred: my dad

There are two kinds of problems in the world: strong-link problems and weak-link problems.1

Weak-link problems are problems where the overall quality depends on how good the worst stuff is. You fix weak-link problems by making the weakest links stronger, or by eliminating them entirely.

Food safety, for example, is a weak-link problem. You don’t want to eat anything that will kill you. That’s why it makes sense for the Food and Drug Administration to inspect processing plants, to set standards, and to ban dangerous foods. The upside is that, for example, any frozen asparagus you buy can only have “10% by count of spears or pieces infested with 6 or more attached asparagus beetle eggs and/or sacs.” The downside is that you don’t get to eat the supposedly delicious casu marzu, a Sardinian cheese with live maggots inside it.

It would be a big mistake for the FDA to instead focus on making the safest foods safer, or to throw the gates wide open so that we have a marketplace filled with a mix of extremely dangerous and extremely safe foods. In a weak-link problem like this, the right move is to minimize the number of asparagus beetle egg sacs.

Weak-link problems are everywhere. A car engine is a weak-link problem: it doesn’t matter how great your spark plugs are if your transmission is busted. Nuclear proliferation is a weak-link problem: it would be great if, say, France locked up their nukes even tighter, but the real danger is some rogue nation blowing up the world. Putting on too-tight pants is a weak-link problem: they’re gonna split at the seams.

It’s easy to assume that all problems are like this, but they’re not. Some problems are strong-link problems: overall quality depends on how good the best stuff is, and the bad stuff barely matters. Like music, for instance. You listen to the stuff you like the most and ignore the rest. When your favorite band releases a new album, you go “yippee!” When a band you’ve never heard of and wouldn’t like anyway releases a new album, you go…nothing at all, you don’t even know it’s happened. At worst, bad music makes it a little harder for you to find good music, or it annoys you by being played on the radio in the grocery store while you’re trying to buy your beetle-free asparagus.

Because music is a strong-link problem, it would be a big mistake to have an FDA for music. Imagine if you could only upload a song to Spotify after you got a degree in musicology, or memorized all the sharps in the key of A-sharp minor, or demonstrated competence with the oboe. Imagine if government inspectors showed up at music studios to ensure that no one was playing out of tune. You’d wipe out most of the great stuff and replace it with a bunch of music that checks all the boxes but doesn’t stir your soul, and gosh darn it, souls must be stirred.

Strong-link problems are everywhere; they’re just harder to spot. Winning the Olympics is a strong-link problem: all that matters is how good your country’s best athletes are. Friendships are a strong-link problem: you wouldn’t trade your ride-or-dies for better acquaintances. Venture capital is a strong-link problem: it’s fine to invest in a bunch of startups that go bust as long as one of them goes to a billion.

Figuring out whether a problem is strong-link or weak-link is important because the way you solve them is totally different:

When you’re looking to find a doctor for a routine procedure, you’re in a weak-link problem. It would be great to find the best doctor on the planet, of course, but an average doctor is fine—you just want to avoid someone who’s going to prescribe you snake oil or botch your wart removal. For you, it’s great to live in a world where doctors have to get medical degrees and maintain their licenses, and where drugs are thoroughly checked for side effects.

But if you’re diagnosed with a terminal disease, you’re suddenly in a strong-link problem. An average doctor won’t cut it for you anymore, because average means you die. You need a miracle, and you’re furious at anyone who would stop that from happening: the government for banning drugs that might help you, doctors who refuse to do risky treatments, and a medical establishment that’s more worried about preventing quacks than allowing the best healers to do as they please.

REST IN PEACE LIL SPERM BOYS

Science is a strong-link problem.

In the long run, the best stuff is basically all that matters, and the bad stuff doesn’t matter at all. The history of science is littered with the skulls of dead theories. No more phlogiston nor phlegm, no more luminiferous ether, no more geocentrism, no more measuring someone’s character by the bumps on their head, no more barnacles magically turning into geese, no more invisible rays shooting out of people’s eyes, no more plum pudding, and, perhaps saddest of all, no more little dudes curled up inside sperm cells:

Goodbye, lil dudes, we hardly knew you

Our current scientific beliefs are not a random mix of the dumbest and smartest ideas from all of human history, and that’s because the smarter ideas stuck around while the dumber ones kind of went nowhere, on average—the hallmark of a strong-link problem. That doesn’t mean better ideas win immediately. Worse ideas can soak up resources and waste our time, and frauds can mislead us temporarily. It can take longer than a human lifetime to figure out which ideas are better, and sometimes progress only happens when old scientists die. But when a theory does a better job of explaining the world, it tends to stick around.

(Science being a strong-link problem doesn’t mean that science is currently strong. I think we’re still living in the Dark Ages, just less dark than before.)


SCIENTIFIC KRYPTONITE

Here’s the crazy thing: most people treat science like it’s a weak-link problem.

Peer reviewing publications and grant proposals, for example, is a massive weak-link intervention. We spend ~15,000 collective years of effort every year trying to prevent bad research from being published. We force scientists to spend huge chunks of time filling out grant applications—most of which will be unsuccessful—because we want to make sure we aren’t wasting our money. 

These policies, like all forms of gatekeeping, are potentially terrific solutions for weak-link problems because they can stamp out the worst research. But they’re terrible solutions for strong-link problems because they can stamp out the best research, too. Reviewers are less likely to greenlight papers and grants if they’re novel, risky, or interdisciplinary. When you’re trying to solve a strong-link problem, this is like swallowing a big lump of kryptonite.

(Peer review also does a pretty bad job of stamping out bad research, oops.)

Giant replication projects—like this one, this one, this one, this one, and this one—also only make sense for weak-link problems. There’s no point in picking some studies that are convenient to replicate, doing ‘em over, and reporting “only 36% of them replicate!” In a strong-link situation, most studies don’t matter. To borrow the words of a wise colleague: “What do I care if it happened a second time? I didn’t care when it happened the first time!”

This is kind of like walking through a Barnes & Noble, grabbing whichever novels catch your eye, and reviewing them. “Only 36% of novels are any good!” you report. That’s fine! Novels are a strong-link problem: you read the best ones, and the worst ones merely take up shelf space. Most novels are written by Danielle Steel anyway.

(See also: Psychology might be a big stinkin’ load of hogwash and that’s just fine.)

CHEATERS SOMETIMES WIN AND THAT’S OKAY

I think there are two reasons why scientists act like science is a weak-link problem.

The first reason is fear. Competition for academic jobs, grants, and space in prestigious journals is more cutthroat than ever. When a single member of a grant panel, hiring committee, or editorial board can tank your career, you better stick to low-risk ideas. That’s fine when we’re trying to keep beetles out of asparagus, but it’s not fine when we’re trying to discover fundamental truths about the world.

(See also: Grant funding is broken. Here’s how to fix it.)

The second reason is status. I’ve talked to a lot of folks since I published The rise and fall of peer review and got a lot of comments, and I’ve realized that when scientists tell me, “We need to prevent bad research from being published!” they often mean, “We need to prevent people from gaining academic status that they don’t deserve!” That is, to them, the problem with bad research isn’t really that it distorts the scientific record. The problem with bad research is that it’s cheating.

I get that. It’s maddening to watch someone get ahead using shady tactics, and it might seem like the solution is to tighten the rules so we catch more of the cheaters. But that’s weak-link thinking. The real solution is to care less about the hierarchy. If you spend your life yelling at bad scientists, you’ll make yourself hoarse. If you spend your life trying to do great science, you might forever change the world for the better, which seems like a better use of time.

THE MISSING STRONG LINKS

Here’s our reward for a generation of weak-link thinking.

The US government spends ~10x more on science today than it did in 1956, adjusted for inflation. We’ve got loads more scientists, and they publish way more papers. And yet science is less disruptive than ever, scientific productivity has been falling for decades, and scientists rate the discoveries of decades ago as worthier than the discoveries of today. (Reminder, if you want to blame this on ideas getting harder to find, I will fight you.)

We should have seen this coming, because the folks doing the strongest-link research have been warning us about it for a long time. One of my favorite genres is “Nobel Prize winner explains how it would be impossible to do their Nobel Prize-winning work today.” For instance, here’s Peter Higgs (Nobel Prize in Physics, 2013):

Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.

Sydney Brenner (Nobel Prize in Physiology or Medicine, 2002) on Frederick Sanger (Nobel Prize in Chemistry, 1958 & 1980):

A Fred Sanger would not survive today's world of science. With continuous reporting and appraisals, some committee would note that he published little of import between insulin in 1952 and his first paper on RNA sequencing in 1967 with another long gap until DNA sequencing in 1977. He would be labeled as unproductive, and his modest personal support would be denied. We no longer have a culture that allows individuals to embark on long-term—and what would be considered today extremely risky—projects.

Carol Greider (Nobel Prize in Physiology or Medicine, 2009):

“I’m not sure in the current climate we have for research funding that I would have received funding to be able to do the work that led to the Nobel Prize,” Greider said at a National Institutes of Health (NIH) event last month, adding that her early work on enzymes and cell biology was well outside the mainstream.

John Sulston (Nobel Prize in Physiology or Medicine, 2002):

I wandered along to the chemistry labs, more or less on the rebound, and asked about becoming a research student. It was the 60s, a time of university expansion: the doors were open and a 2:1 [roughly equivalent to a B] was good enough to get me in. I couldn’t have done it now.

Jeffrey C. Hall (Nobel Prize in Physiology or Medicine, 2017):

I admit that I resent running out of research money. [...] In my day you could get a faculty job with zero post-doc papers, as in the case of yours truly; but now the CV of a successful applicant looks like that of a newly minted full Professor from olden times. [...] US institutions (possibly also those in other countries) behave as though they and their PIs are entitled to research funding, which will magically materialize from elsewhere: ‘Get a grant, serf! If you can't do it quickly, or have trouble for some years — or if your funding doesn't get renewed, despite continuing productivity — forget it!’ But what if there are so many applicants (as there are nowadays) that even a meritorious proposal gets the supplicant nowhere or causes a research group to grind prematurely to a halt? [...] Thus, as I say ‘so long,’ one component of my last-gasp disquiet stems from pompously worrying about biologists who are starting out or are in mid-career.

It goes on and on like this. When the people doing the best work are saying “hey there’s no way you could do work like this anymore,” maybe we should listen to them.

WHAT TO DO WHEN YOU STINK

I’ve got a hunch that science isn’t the only strong-link problem we’ve mistakenly diagnosed as a weak-link problem. It’s easy to get your knickers in a pinch about weak links—look at these bad things!! They’re so bad!! Can you believe how bad they are?? 

It’s even easier to never think about the strong links that were prevented from existing. The terrible study that gets published sounds like nails on a chalkboard, but the terrific study that never got funded sounds like nothing at all. Purge all the terrible at the cost of the terrific, and all you’re left with is the mediocre.

Of course, it’s also easy to make the opposite mistake, to think you’re facing a strong-link problem when in fact you’ve got a weak-link problem on your hands. It doesn’t really matter how rich the richest are when the poorest are starving. Issuing parking tickets is pointless when people are getting mugged on the sidewalk. Upgrading your wardrobe is a waste when you stink like a big fart.

Whether we realize it or not, we’re always making calls like this. Whenever we demand certificates, credentials, inspections, professionalism, standards, and regulations, we are saying: “this is a weak-link problem; we must prevent the bad!” 

Whenever we demand laissez-faire, the cutting of red tape, the letting of a thousand flowers bloom, we are saying: “this is a strong-link problem; we must promote the good!”

When we get this right, we fill the world with good things and rid the world of bad things. When we don’t, we end up stunting science for a generation. Or we end up eating a lot of asparagus beetles.

Experimental History only exists because of the support of readers like you. If you like reading it, join ‘em in supporting it!

1. I originally heard these terms on this podcast discussing this book.

1 public comment
sarcozona
I agree with a lot in this article but it totally misses the role science actually plays in our society and how letting garbage (like the Wakefield vaccine-autism bullshit) get published has long term, demonstrable, real world harm.

Disrupting Networks: Decentralization and the Fediverse


Users of social media have become increasingly sensitive to the fallibilities of centralized networks, such as X/Twitter, Facebook, and Instagram, that are managed by a single company. Frustrations can arise as new recommendation algorithms and advertising strategies are rolled out, company priorities shift, or leadership changes. It is a phenomenon illustrated by the well-publicized exodus of X/Twitter users following Elon Musk’s acquisition and revamp of the network.

As users seek out alternative platforms that are not driven by the mission of a single company, the online social ecosystem is becoming multifaceted and unpredictable. Millions of individuals and organizations are now navigating less-familiar networks, such as Mastodon and Bluesky. According to a real-time counter developed by Theo Sanderson, a researcher at The London School of Hygiene & Tropical Medicine (LSHTM) in the U.K., Bluesky alone had over 23 million active users at the end of November, compared to nine million just two months earlier.

Whether there will be a mass migration from centralized networks is unclear. Such an upheaval entails migrating, or perhaps losing, years of accumulated data and followers. However, developers and advocates of decentralization are ramping up efforts to improve and expand networks; they also are seeking solutions to some of the technical challenges that popularity brings.

An open-source approach to functionality and control

The fundamental building blocks of decentralized networks are open-source communication protocols such as Diaspora, Matrix, Nostr, and ActivityPub. The latter was standardized by the World Wide Web Consortium (W3C) in 2018 and is currently the most widely used protocol. Unlike centralized networks where all interactions are processed via servers hosted by a single company, these protocols enable anyone to set up a server, or ‘instance’, where users can create, share, and retrieve content. They also support interoperability between networks with shared protocols.
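
To make the “instance” idea concrete, here is a minimal sketch of the kind of JSON payload an ActivityPub server delivers to another server’s inbox. The domains, user names, and message below are placeholders, and a real delivery also requires HTTP Signatures, which are omitted here:

```python
import json
import urllib.request

# A minimal ActivityPub "Create" activity wrapping a "Note".
# The instance domains and actor names are illustrative placeholders.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example-instance.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "object": {
        "type": "Note",
        "attributedTo": "https://example-instance.social/users/alice",
        "content": "Hello from a federated instance!",
    },
}

# Federation works by POSTing the activity to a follower's inbox on
# another instance. Real servers sign this request (HTTP Signatures);
# an unsigned request like this one would normally be rejected.
req = urllib.request.Request(
    "https://another-instance.example/users/bob/inbox",
    data=json.dumps(activity).encode("utf-8"),
    headers={"Content-Type": "application/activity+json"},
    method="POST",
)
# urllib.request.urlopen(req)  # left commented out: the hosts above are fictional
```

Because any compliant server understands payloads like this, a PeerTube video or a Pixelfed photo can appear in a Mastodon timeline without any bilateral integration work.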

“Mastodon and other decentralized applications emerge in stark contrast to the traditionally centralized ones,” explained Ignacio De Castro Arribas, an expert in online social networks at Queen Mary University of London in the U.K. “These centralized applications are typically monolithic and vertically integrated with the application providing most functionalities and these being controlled by a relatively vertical governance model.”

Interoperability has fueled the growth of a massive ecosystem of independent servers, known by the umbrella term Fediverse, where users from different networks can interact. Fediverse observatories, such as Fediverse Party and FediDB, gather and share Fediverse data. In November 2024, over 25,000 servers were listed, including Friendica for microblogging, PeerTube and Funkwhale for video/audio hosting, Lemmy for news aggregation and discussion, and PixelFed for image hosting.

While many decentralized servers deploy the ActivityPub protocol, Bluesky, a network which has garnered intense media attention in recent months, uses the Authenticated Transfer Protocol, or AT Protocol. Bluesky’s functionality and user interface are reminiscent of early Twitter, perhaps contributing to its current popularity. The network originated as an internal project at Twitter in 2019 under then-CEO Jack Dorsey.

Bluesky’s use of the AT Protocol has prompted online discussion within tech communities as to whether it is part of the Fediverse, and unlike some federated networks, Bluesky has a CEO, currently Jay Graber. However, the network’s backbone centers on decentralization. In the 2023 paper that introduced Bluesky’s architecture and the AT Protocol, Graber and her co-authors set out the protocol’s aims:

“To enable decentralization by having multiple interoperable providers for every part of the system; to make it easy for users to switch providers; to give users agency over the content they see; and to provide a simple user experience that does not burden users with complexity arising from the system’s decentralized nature.”

Earlier this year, De Castro Arribas and co-authors undertook a comprehensive study of Bluesky. The researchers found a diverse community, in terms of languages used, that has embraced new features offered by the network, “in particular those related to content moderation and curation; these seem to be the ones with the lowest barriers to entrance for developers, and many users have created labels and feeds,” De Castro Arribas explained.

The ability to build new functionality on Bluesky’s freely available base code is now being deployed to support new users as they move over from other networks, said De Castro Arribas: “It is a vibrant ecosystem of people developing tools that help with the migration and identification of people to follow, like the starter packs.”

Christina Dunbar-Hester, an expert in the democratic control of technologies at the University of Southern California Annenberg School of Communication and Journalism, also highlights the importance of developer and user accessibility in decentralized networks. Said Dunbar-Hester, “The need for something that’s noncommercial, decentralized, and public interest oriented (and I would break those up somewhat separately) has never been stronger.”

No single entity can take control of a network that is managed by individuals, and there is no standardization across decentralized ecosystems. Yet this also leads to technical inconsistencies and debates around terminology and functionality in the Fediverse. That is healthy, said Dunbar-Hester: “It is an important conversation to be having. Regardless of how it ultimately plays out, whether it rises or falls, I still think it’s a really important experiment.”

The pitfalls of popularity

Many advocates of decentralization herald its potential to disrupt traditional social networks and create environments that are more accessible, flexible, and accountable. However, some researchers point to the emerging challenges that decentralized networks face as user numbers grow.

Emma Tosch, Luis Garcia, and Chris Martens, researchers at Northeastern University in Boston, and independent researcher Cynthia Li surveyed over 100 Mastodon administrators and carried out a text analysis of 351 privacy policies on the network. The team found inconsistencies in approaches to privacy that suggest “existing individualistic frameworks for thinking about privacy policies do not adequately address this emerging community.” Greater support in the form of “privacy-enhancing technology” is required to improve both users’ understanding of privacy and administrators’ creation of policies, they concluded.

De Castro Arribas, meanwhile, highlights issues with moderation. While anyone can create a server/instance on a Fediverse application, governance of each instance is itself centralized, so “there is a top-down approach with the instance moderator having the power to moderate the content in the instance,” he said. This creates challenges in terms of consistency, and a heavy workload for (often hobbyist) administrators.

“To make things more complex, instance administrators only have full moderation control over the content generated in their own instance, however users in such an instance subscribe to content in other ones,” De Castro Arribas explained.

An individualistic, decentralized approach can also restrict access to training data for algorithms designed to implement tasks such as toxic content identification and content recommendation. Said De Castro Arribas, “There are potential ways to alleviate this with federated learning where multiple instances pool their data to train a model,” a solution presented in a paper he co-authored.
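
The paper’s specific approach is not reproduced here, but the general shape of federated learning is easy to sketch: each instance trains on its own private data and shares only model parameters, which a coordinator combines by weighted averaging (the classic federated-averaging recipe). The tiny logistic-regression model and the synthetic data below are purely illustrative:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One instance trains a tiny logistic-regression model on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid
        grad = X.T @ (preds - y) / len(y)      # gradient of the log-loss
        w -= lr * grad
    return w

def federated_average(updates, sizes):
    """Coordinator averages parameters, weighted by each instance's data size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Three hypothetical instances, each holding private (here: synthetic) data.
rng = np.random.default_rng(0)
instances = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]

global_w = np.zeros(4)
for _ in range(10):
    updates = [local_update(global_w, X, y) for X, y in instances]
    global_w = federated_average(updates, [len(y) for _, y in instances])
```

Only the parameter vectors leave each instance; the raw posts and labels stay where they were generated, which is the property that makes the approach attractive for a privacy-conscious, decentralized ecosystem.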

This growing and unpredictable ecosystem of decentralized networks is characterized by a lack of standardization, varying functionality and terminology, and unfamiliar user interfaces. It is not a technological advance that can be neatly defined or easily controlled, and for many developers and end users that is precisely its appeal.

Karen Emslie is a location-independent freelance journalist and essayist.


What's the deal with magnetic fields?


In a recent article, I brought up an important gotcha about electricity:

“In electronic circuits, the flow of electrons is confined to conductors, but the transfer of energy doesn’t involve these particles bouncing off each other; instead, the process is mediated through electromagnetic fields. These fields originate from charge carriers, but extend freely into the surrounding space.”

This property stumps many novices, especially those who cling to the didactic tool known as the hydraulic analogy. Without considering interactions at a distance, it’s hard to grasp the behavior of transistors, capacitors, inductors, and other electronic components.

But how do these fields behave? The electric field is the easy one: simplifying a bit, it’s the static force of attraction and repulsion between charge-bearing particles. It’s what binds electrons to the nucleus of an atom, and what causes packing peanuts to stick to cats.

Electric fields at work. By Sean McGrath via Wikipedia, CC-BY.

Magnetic fields are more mystifying. Most textbooks assert that they’re a manifestation of the same underlying phenomenon, but then proceed to treat them as wholly separate and governed by a bunch of seemingly ad-hoc rules. The authors reach into a hat and pull out a “B field” or an “H field” that acts on moving particles… but only when the textbook says it should.

A detour through space

To develop better intuition, we ought to start with the speed of light. The significance of the concept is sometimes misunderstood, but in the most basic sense, it appears to be a fundamental constraint on causality: you can’t exert influence at a distance any faster than a photon — a massless elementary particle — could travel from here to there.

This creates an interesting problem for Newtonian physics. Let’s imagine that a guy named Finn is sitting in a spaceship that’s flying away from you at 90% the speed of light. For safety, the hull of his spaceship is equipped with a pair of flashing light beacons: one in front and one in the back, roughly 100 meters apart.

Goodbye, Finn.

The rear beacon flashes on a timer. As for the other one, Finn didn’t want to add 100 meters of wiring throughout the ship, so he just rigged a photodetector to pick up the flashing from the other beacon and trigger on that.

Conventionally, your frame of reference is as good as Finn’s; in fact, he might argue that he’s stationary and you’re the one getting away. There should be nothing unusual about physics onboard his ship. If Finn is watching the beacons, the rear one should turn on at some time t, and then at t + 330 nanoseconds, the photons traveling at the speed of light should make it across the hull and toggle the front one.

But in your frame of reference, this ain’t right! By the time the photons from the rear beacon on Finn’s ship travel 100 meters, the front of the ship has gotten away and is now 90 meters ahead. Either Finn’s photons are a lot faster than yours, or more time needs to pass before the front beacon turns on.
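
It’s easy to put rough numbers on the conflict. Here is a quick sketch that deliberately ignores relativistic corrections, which is precisely the Newtonian trouble:

```python
c = 299_792_458        # speed of light, m/s
L = 100.0              # beacon separation in Finn's frame, m
v = 0.9 * c            # the ship's speed in your frame

# In Finn's frame: light simply crosses the hull.
t_onboard = L / c                  # ~3.3e-7 s, i.e. roughly 330 ns

# Naive view from your frame (no relativity): the photon closes the gap
# at only c - v, because the front beacon keeps running away.
t_chase = L / (c - v)              # ~3.3e-6 s, about ten times longer

print(f"onboard: {t_onboard * 1e9:.0f} ns, naive chase: {t_chase * 1e9:.0f} ns")
```

Ten times longer in one frame than in the other, for the same physical event, is not something Newtonian physics can live with.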

An early attempt to address this issue was the concept of luminiferous aether: a cosmic medium through which photons supposedly propagate at a constant speed. This theory implied the existence of a special, “aether-anchored” frame of reference. If that happened to be your frame, then Finn, tearing through the aether at breakneck speeds, would see the physics on his spaceship getting out of whack.

Alas, no proof for the existence of luminiferous aether could ever be produced; instead, the answer turned out to be special relativity. The theory posits that all inertial frames of reference are equivalent; the trick is that when the frames are in relative motion, they experience space and time in different ways. In particular, in our frame of reference, instantaneous measurements of Finn’s ship would indicate it’s shorter than expected, and the time onboard would be seemingly passing more slowly than on Earth.
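
Both effects fall out of the Lorentz factor, γ = 1/√(1 − v²/c²). A small sketch with Finn’s numbers (0.9c, and the 100-meter hull from the beacon example):

```python
import math

c = 299_792_458
v = 0.9 * c

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor, ~2.29 at 0.9c

ship_length_rest = 100.0                      # meters, measured in Finn's frame
ship_length_seen = ship_length_rest / gamma   # ~43.6 m as measured from Earth

one_second_on_ship = 1.0 * gamma              # ~2.29 s pass on Earth meanwhile

print(f"gamma = {gamma:.2f}")
print(f"ship appears {ship_length_seen:.1f} m long")
print(f"1 s of Finn's time takes {one_second_on_ship:.2f} s of yours")
```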

But let’s stick to magnets

To offer a basic explanation of magnetism, we really only need to lean on relativistic length contraction: the apparent reduction of the size and distance between moving objects along the direction of their travel.

Let’s consider a copper wire at rest. In the metal, there’s a certain number of mobile electrons in the conduction band, skating across a lattice with a matching number of immobile, positively charged copper ions. Internal electric fields require the electrons to stay in the conductor, but they’re not kept on a particularly tight leash:

A very simple model of a copper conductor.

In this setup, because the positive and negative charges in the conductor are in balance, there is no net electric field acting on the stray charges nearby — neither in the reference frame of the stationary charge, nor of the moving one.

For the next step, it’s helpful to imagine a toy circuit: a one-meter loop of wire with one hundred mobile electrons inside. It’s a simple necessity that in the stationary (“lab”) frame of reference, the average distance between these electrons stays constant at 1 cm, whether they’re staying put or circulating around the loop. The only way to change their spacing would be to add electrons or take some away; otherwise, it’s always 100 elementary particles spread throughout a meter of wire.

But electrons in motion are supposed to exhibit length contraction! That is to say, once they start moving in relation to the observer, their spacing in the direction of travel becomes tighter than the “true” distance seen in their own frame of reference. The only way to explain this discrepancy is that when electrons are set in motion in our circuit, their proper, non-contracted spacing must increase.

With this in mind, let’s revisit the earlier conductor model, this time with some current flowing through. As we established, in the lab frame of reference, the length-contracted density of electrons and copper ions must stay the same as before — so there is no net electric field acting on a nearby stationary charge:

Copper conductor with some current flowing.

But what about a random charge outside the conductor that’s traveling in the same direction as the electrons? Well, from its perspective, the conductor consists of a bunch of stationary electrons spaced pretty far apart, and then a markedly higher density of positive ions!

In other words, the charge, in its frame of reference, would experience a net electric field that’s pulling it toward the conductor — but only if it was moving in the first place:

The perspective of a nearby moving charge.

And this is, in essence, the origin of magnetic fields. It’s not always useful to analyze them this way — but at the very least, it’s good to have a slightly more intuitive explanation of where they come from.
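
For a rough numeric sanity check of this argument, here is a small sketch contrasting the two frames. A drift velocity of 0.1c is wildly exaggerated (real drift velocities are closer to millimeters per second, so the per-charge effect is minuscule and is made up for by the sheer number of carriers), but it keeps the numbers visible:

```python
import math

c = 299_792_458
v = 0.1 * c                 # exaggerated drift velocity, purely for visibility
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

d = 0.01                    # lab-frame spacing of both ions and electrons, m
# Lab frame: ion spacing d, electron spacing d  ->  the wire is neutral.

# Frame of a test charge moving along with the electrons:
electron_spacing = gamma * d    # electrons are now at rest, so we see their
                                # proper (un-contracted) spacing, which is larger
ion_spacing = d / gamma         # ions now move backward, so they contract

# Linear charge density scales as 1/spacing (in units of the elementary charge):
net = (1 / ion_spacing) - (1 / electron_spacing)   # > 0: net positive line charge

print(f"electron spacing: {electron_spacing * 100:.4f} cm")
print(f"ion spacing:      {ion_spacing * 100:.4f} cm")
print(f"net charges per meter of wire (in units of e): {net:.2f}")
```

Whether the resulting field pulls the test charge toward the wire or pushes it away depends on the charge’s sign and direction of travel; the point is that the field only shows up once the charge is moving.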



Freight


A problem from Cambridge mathematician J.E. Littlewood’s Miscellany (1953):

Is it possible to pack a cube with a finite number of smaller cubes, no two of which are the same size?


Taste is the most important factor in nutrition.


fitnessadvicethatfitsinyourlife:

Taste is the most important factor in nutrition.

Because you get the most nutrients from the foods you’ll actually eat.

So add cheese, oil, spices, vinegar, sauces, etc. Try them roasted or sauteed or pureed, etc.

The actual secret to eating lots of fruits and veggies and other nutrient dense foods is:

Make them taste good. That’s literally it.
