
Fantasy Racism


It likely isn’t news to you that Dungeons & Dragons became the world’s most popular tabletop role-playing game while also trafficking in outmoded and problematic concepts around race and gender. It also likely isn’t news to you that Dungeons & Dragons managed to expand and evolve, welcoming a wider audience and becoming more popular than it had ever been. But maybe it is news to you that some people who own the platform formerly known as Twitter seem to have feelings about this. (It was news to me! Don’t forget, though, that news doesn’t have to be surprising to be news.) Anyway, Atlantic writer and unabashed D&D devotee Adam Serwer digs into how fantasy tropes became what they are, and why a certain breed of gamer seems to cling to them so fiercely.

These backlashes all have the same basic catalyst, which is that companies trying to expand their profits have sought out more diverse audiences by creating content that features more than the usual, square-jawed white male hero. When the damsels who were supposed to be in distress and the members of the races that were supposed to be disposable began to be the protagonists, some fans experienced that as a kind of loss. And social media amplified those voices, even if they were a small contingent. Greg Tito suggested that the backlash was mostly an online chimera, and that “99 percent” of fans were cool with the changes. The 1 percent who weren’t just happened to include, well, the “one percent.”


How the “meter” came to be exactly one meter long


Measurement standards are needed for knowing “how much” exists.

A teacher hands out rulers to second-grade children (8 years old) in a primary school in Vaasa, Finland, on their second day of school. The ability to measure, quantitatively, “how much” of something you have lies at the foundation of all quantitative endeavors.
Credit: Olivier Morin / Getty Images

Early distance standards, like “cubits” or “feet,” were based on body parts.


This Ancient Egyptian artifact shows a fragment of a cubit measuring rod. Note the markings at the bottom of the rod showing various fractions of a cubit: forerunners of divisions like inches, centimeters, and millimeters.
Credit: Gift of Dr. and Mrs. Thomas H. Foulds, 1925/Metropolitan Museum of Art

A single “pace” was often used: around one yard/meter.


A pace, defined either as a single stride (as shown) or as a “return to the same foot” double stride, served as the original basis of the mile, with 1,000 double paces or 2,000 single paces defining that mile. Before the meter was defined by a pendulum’s length, this inconsistent standard was frequently used for similar distances.
Credit: Humphrey Muleba/public domain

The idea of a “standard meter” came from pendulum observations.


A pendulum, so long as its weight is concentrated in the bob and air resistance, temperature changes, and large-angle effects can be neglected, will always have the same period when subject to the same gravitational acceleration. The fact that the same pendulum swung at different rates in different locations across Europe and the Americas was a hint toward Newton’s gravitation, and toward the latitude-dependent variation of surface gravity.
Credit: Krishnavidala/Wikimedia Commons

A swinging pendulum’s period is determined by two factors: length and gravity.


The front view (left) and side/schematic view (right) of the first pendulum clock ever built, in 1656/7, which was designed by Christiaan Huygens and built by Salomon Coster. The drawings come from Huygens’ 1658 treatise, Horologium. Many subsequent refinements, even prior to Newton’s gravity, were made to this original design; Huygens’ second pendulum clock, built in 1673, was designed to have each half-swing last for precisely one second.
Credit: Christiaan Huygens, 1658

A seconds pendulum, where each half-swing lasts one second, requires a pendulum one meter long.


In general, only two factors determine the period of a pendulum: its length, where longer pendulums take longer to complete one oscillation, and the acceleration due to gravity, where stronger gravity results in faster pendulum swings. This is why a pendulum clock is not universal, but must be calibrated to the specific gravitational acceleration at its location.
Credit: Daniel A. Russell/Penn State University
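That dependence on just length and gravity is captured by the small-angle formula T = 2π√(L/g). A quick sketch in Python (assuming standard gravity, g ≈ 9.80665 m/s²) shows why a “seconds pendulum” comes out to roughly one meter:

```python
import math

def pendulum_period(length_m: float, g: float = 9.80665) -> float:
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

def seconds_pendulum_length(g: float = 9.80665) -> float:
    """Length whose half-swing lasts one second, i.e. full period T = 2 s."""
    T = 2.0
    return g * (T / (2 * math.pi)) ** 2

print(f"{pendulum_period(1.0):.4f} s")       # ~2.0064 s for a 1 m pendulum
print(f"{seconds_pendulum_length():.4f} m")  # ~0.9936 m: just shy of a meter
```

The half-swing of a one-meter pendulum lasts almost exactly one second, which is what made the pendulum such a tempting basis for a length standard.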

Because gravity varies by ~0.2% across Earth, any pendulum-based “length” isn’t universal.


The gravitational field on Earth varies not only with latitude, but also with altitude and in other ways, particularly due to crustal thickness and the fact that the Earth’s crust effectively floats atop the mantle. As a result, the gravitational acceleration varies by a few tenths of a percent across Earth’s surface.
Credit: C. Reigber, Journal of Geodynamics, 2005
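The consequence of that variation for a pendulum-based standard is easy to quantify. A sketch assuming approximate surface gravities of 9.780 m/s² at the equator and 9.832 m/s² at the poles:

```python
import math

# Seconds-pendulum length: L = g / pi**2, from T = 2*pi*sqrt(L/g) with T = 2 s.
# Surface gravities below are assumed round values for illustration.
g_equator, g_poles = 9.780, 9.832  # m/s^2

L_equator = g_equator / math.pi**2
L_poles = g_poles / math.pi**2

print(f"{L_equator:.4f} m at the equator")              # ~0.9909 m
print(f"{L_poles:.4f} m at the poles")                  # ~0.9962 m
print(f"{(L_poles - L_equator) * 1000:.1f} mm spread")  # ~5.3 mm
```

A few millimeters may sound small, but it is enormous compared to the precision a universal length standard needs to deliver.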

In 1790, the meter was defined as 1/10,000,000th the distance from the North Pole to the equator.


This map shows the entire globe of the Earth on a Mollweide projection, which preserves the areas of land masses and oceans and keeps the map fully connected, with no gaps. The trade-off is that latitude and longitude lines are no longer perpendicular at high latitudes and far from the central meridian. A meter was once defined as 1/10,000,000th of the distance from the North Pole to the equator, along the meridian passing through Paris, France.

Credit: Strebe/Wikimedia Commons
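Modern geodesy lets us check how close the 1790s definition came. A sketch assuming the commonly quoted modern figure of roughly 10,001,966 m for the pole-to-equator meridian arc:

```python
# Pole-to-equator meridian arc (assumed value: a commonly quoted
# modern figure of roughly 10,001,966 m).
meridian_quadrant_m = 10_001_966

# The 1790s definition: one meter = 1/10,000,000th of this arc.
historical_meter = meridian_quadrant_m / 10_000_000
print(f"{historical_meter:.7f} m")  # ~1.0001966 m
```

In other words, the surveyed meter came out roughly 0.2 mm short of the true quadrant-based value, consistent with the correction described in connection with the platinum bars.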

That distance was then cast into a platinum bar.


These two images show two bars that defined the meter: the 100% platinum bar brought to the United States in 1799 (top), and the second meter bar that followed 1875’s Treaty of the Meter, received by President Benjamin Harrison in 1890 (bottom). That second bar (No. 27) became the reference standard for all length measurements until 1960.
Credit: NIST Research Library and Museum

After correcting an early error of 0.2 millimeters, these bars became distance standards for decades.


Even though advances in quantum physics enabled superior definitions of the meter starting in 1927, the platinum-iridium bar, with an X-shaped cross-section and with its meter determined by markings along the bar rather than by the bar’s cut ends, remained the global standard until 1960.
Credit: NIST/National Institute of Standards and Technology

Platinum-iridium alloys, with X-shapes to better resist distortions, replaced those originals.

The idea behind a Michelson interferometer is that a source of light can be split into two by passing it through a device like a beam splitter, which sends half of the original light down each of two perpendicular paths. At the end of the path, a mirror bounces the light back toward the way it came from, and then those two beams are recombined, producing an interference pattern (or a null pattern, if the interference is 100% destructive) on the screen. If the speed of light is different down each path, or if the path length is different, then the interference pattern will change in response.
Credit: Polytec GmbH/Wikimedia Commons

In the 1920s, interferometry — based on the wavelength of light from atomic transitions — began to supersede the “bar” standard.


This illustration shows how destructive interference (left) and constructive interference (right) determine what sort of interference pattern arises on the screen (at bottom, both panels) in an interferometer setup. By tuning the interferometer’s distance to a specific number of wavelengths of light, a quantity such as a “meter” can be either measured or even defined.
Credit: S. Kelley/NIST

The right number of wavelengths of light defined the 20th century’s meter.


NIST’s William Meggers, shown here in March 1951, demonstrates a measurement of the wavelength of mercury-198, which he proposed could be used to define the meter. By defining the meter as a precise number of wavelengths emitted by a well-known atomic transition, a superior precision could be achieved compared to any “standardized” physical object, such as the classic meter bar.
Credit: NIST

First cadmium, then mercury, and next krypton atoms defined the meter.


In 1960, after years of experiments with cadmium, mercury, and krypton, the 11th Conférence Générale des Poids et Mesures redefined the meter as the “length equal to 1,650,763.73 wavelengths in vacuum of the radiation corresponding to the transition between the levels 2p10 and 5d5 of the krypton-86 atom,” a definition which stood until 1983.
Credit: WebElements, Mark Winter / The University of Sheffield
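The 1960 definition can be inverted to recover the wavelength of that krypton-86 line. A minimal check in Python:

```python
# Invert the 1960 definition: 1 meter = 1,650,763.73 wavelengths of a
# specific krypton-86 emission line, so one wavelength is the reciprocal.
n_wavelengths = 1_650_763.73

wavelength_m = 1 / n_wavelengths
print(f"{wavelength_m * 1e9:.5f} nm")  # ~605.78021 nm: orange-red light
```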

Finally, in 1983, a new standard was adopted: the distance light travels in 1/299,792,458th of a second.


Although light is an electromagnetic wave with in-phase oscillating electric and magnetic fields perpendicular to the direction of light’s propagation, the speed of light is wavelength-independent: 299,792,458 m/s in a vacuum. If you can measure the distance that light of any wavelength travels in 1/299,792,458th of a second, you can precisely measure and know how long “1 meter” is from anywhere in the Universe.

Credit: And1mu/Wikimedia Commons
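Because both the second and the speed of light are fixed quantities, the 1983 definition is exact by construction. A small sketch using exact rational arithmetic:

```python
from fractions import Fraction

c = 299_792_458  # speed of light in vacuum, in m/s (exact, by definition)

# Distance light travels in 1/299,792,458th of a second: exactly one meter.
one_meter = Fraction(c) * Fraction(1, c)
print(one_meter)  # 1

# Equivalently, light takes about 3.34 nanoseconds to cross one meter.
print(f"{1e9 / c:.4f} ns")  # ~3.3356 ns
```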

Because the speed of light in a vacuum is always constant, this definition is universal.


The longer a photon’s wavelength is, the lower in energy it is. But all photons, regardless of wavelength/energy, move at the same speed: the speed of light. This is, surprisingly, irrespective of the motion of the observer relative to light; the speed of all forms of light is measured to always be the same for all observers.
Credit: NASA/Sonoma State University/Aurore Simonnet

Mostly Mute Monday tells a scientific story in images, visuals, and no more than 200 words.

This article How the “meter” came to be exactly one meter long is featured on Big Think.


Sweden Went All in on Screens in Childhood. Now It’s Pulling the Plug.


Introduction from Zach Rausch:

One of the most surprising patterns in the global mental health data has been the sharp decline in adolescent mental health in the region often said to be the happiest in the world: Scandinavia. These countries have strong social safety nets and a population that embraces free play and childhood independence. However, many Scandinavian parents, educators, and politicians transferred that mentality to the virtual world, eagerly adopting new digital technologies for children, and allowing kids unfettered access to roam free online.

Just like in the Anglosphere, screens, smartphones and social media reshaped adolescence in the Nordic countries starting around 2012, and a decline in teen mental health quickly followed, as we showed in our post on mental health trends in Finland, Iceland, Denmark, Norway, and Sweden.

What’s notable about Sweden isn’t the effect that the phone-based childhood had on its kids — it’s how they’re responding and changing course.

Today’s post is written by Linda Åkeson McGurk, a Swedish-American journalist, speaker, and bestselling author of There’s No Such Thing as Bad Weather and The Open-Air Life, two books that have inspired readers all over the world to embrace nature as a way of life. Linda has spent over a decade advocating for children’s right to outdoor play, and helping families trade screen time for green time, including through her Substack, The Open-Air Life.

In this essay, she gives us an inside look at how Sweden first embraced, and then reset, its relationship with smartphones, social media, and other emerging digital technologies. Sweden is a small and nimble nation with a history of experimentation with social and educational policies — so they are able to make sharp changes in short periods. That makes it one of the most important countries to watch. If their new policies work, including rolling out bell-to-bell phone-free schools, reintroducing textbooks, and pulling back on screens in schools, the world will have a powerful example of how to roll back the phone-based childhood and restore the play-based childhood. Go Sweden!

— Zach



Sweden Went All in on Screens in Childhood. Now It’s Pulling the Plug.

By Linda McGurk

Source: Shutterstock

Sweden isn’t just the land of fika, flat-pack furniture, and the Nobel Prize — it’s also one of the most tech-forward countries on Earth. Spotify, Minecraft, Candy Crush, and the famous YouTuber PewDiePie, who has more than 110 million followers, all hail from here. Broadband is universal, Wi-Fi is lightning-fast, and no Swede wants to be accused of being a late adopter.

So, when I moved back to my native Sweden in 2018 after spending 15 years in the U.S., I knew I was returning to a digital wonderland. Even so, I was stunned to see how thoroughly digital devices had infiltrated every corner of daily life — especially childhood. At the time, a quarter of Swedish babies under 12 months (!) were using the internet. Two-thirds of 9-year-olds had cellphones. And 97% of all 12- to 15-year-olds were on Snapchat.

Even though the official age limit for Snapchat is 13, the girls in my daughter’s fifth-grade class were no exception, and soon my daughter was the only one left out of the class Snapchat group. When I tried to raise the issue with other parents, I was met with blank stares. Soon, I felt like I was the only parent in Sweden trying to limit my children’s screen time and access to social media. Anyone who questioned the nation’s blind faith in digital childhood was treated like a moral panic peddler and cultural reactionary, not unlike those who once claimed that jazz music was the work of the devil.

Yet in recent years, the country has awoken to the risks of rapid digitalization and a childhood saturated in screentime. This top-to-bottom reversal can provide a crucial blueprint for other tech-smitten nations struggling to balance the digital age with children’s well-being.


The Nature-Based Childhood Versus the Digital Blindspot

In her book The Danish Secret to Happy Kids, author Helen Russell argues that because Nordic countries’ social codex traditionally gave children lots of freedom to play outdoors (a positive), they misguidedly granted children the same wide-ranging freedom online. Until very recently, Sweden’s Public Health Institute had no screen-time guidelines. Not even the Swedish Agency for the Media — whose primary task is to protect minors from harmful media use and increase media literacy in the general population — had any advice for parents. Russell calls this lack of awareness of the effects of excessive smartphone and social media use a “digital blindspot.”

To my surprise, digital devices had even made their way into the Swedish universal preschool system, which is internationally renowned for its focus on child-led outdoor play year-round. In 2019, the government went a step further by mandating the use of digital tools in the national preschool curriculum. The fear that Sweden’s 1- to 5-year-olds might fall behind in the race toward an AI-powered future was palpable. And with that, screens were challenging one of the most quintessentially Scandinavian things: the nature-based childhood.

Meanwhile, the public school system seemed woefully unprepared to deal with the consequences of the fast and furious digital revolution. By the time my oldest daughter entered eighth grade in 2021, phones went everywhere with kids, even into classrooms. Just like in the U.S., the result was chaos. Almost daily, my daughter told me about students texting, playing games, scrolling on social media, snapping selfies, and taking calls during lessons. Teachers had been demoted to smartphone police, trying to teach algebra to kids hooked on Roblox. Distraction was the new normal – and it showed in school results.

In the 2022 international PISA assessment, Swedish 15-year-olds recorded their lowest scores in math and reading in a decade, with more than a quarter of students falling into the low-performing category in math. When the Swedish National Agency for Education analyzed the results, it concluded that the students with the highest digital media use for things other than learning, both at school and at home, performed the worst. An OECD report went even further, noting that nearly 4 in 10 Swedish students are distracted by digital devices in math class. Teachers saw the same trend from the front lines: nearly 9 in 10 said that smartphones were harming students’ learning, stamina, and attention spans.

When I asked my daughter’s teacher why phones weren’t collected at the start of the day, he sighed: “We try to limit phones in the classroom with the younger kids, but the eighth and ninth graders are kind of set in their ways. It is what it is.”

I couldn’t believe it. How could a country that had always taken pride in its progressive policies on children’s well-being lack awareness and concern about how near-constant digital stimulation might affect learning and adolescent brain development?


The Turning Point

But after years of digital free fall, the tide in Sweden began to turn. The first signs of a reversal came in late 2022, when then-Minister of Schools Lotta Edholm called the digitalization of Swedish schools “an experiment” that wasn’t scientifically based and that harmed children’s learning. Then, the Public Health Institute made a course correction by issuing Sweden’s first-ever screen time guidelines in 2024. Their recommendations were both clear and pragmatic:

  • Ages 0–2: Ideally, no screen time at all, aside from video calls with family.

  • Ages 2–5: No more than one hour per day, with content tailored to the child’s age and developmental stage.

  • Ages 6–12: One to two hours per day, with parents encouraged to stay involved—know what your child is watching or playing, make sure age limits are respected, and talk about what happens online.

  • Ages 13–18: Two to three hours per day, while paying attention to how screen use affects well-being. Parents are urged to take an active interest in their teens’ digital lives and to help them find a healthy balance between online and offline activities.

The guidelines also warn about algorithm-driven apps that are addictive by design and can lead to problematic use — especially among younger children.

No More Phones in Schools

At the same time, Sweden tightened its school laws. Phones are now banned from classrooms unless specifically needed, and starting in 2026, a nationwide school phone ban will take effect for the full school day. The government isn’t stopping there. They’re also backing away from screens in schools in general, instead increasing funding for physical textbooks and school libraries.

“We’re reintroducing books, pencils, and paper as the default tools for learning in the classroom,” said then-Minister of Education, Johan Pehrson during an online summit on ed-tech. While Pehrson said he initially received pushback and was accused of being “old-fashioned,” the consensus has shifted, and the government now has broad public support for their agenda to reduce screen time in schools.

That’s not all. Citing concerns about sleep deprivation, worsening mental health, and plummeting school results among teenagers — and encouraged by the example set by Australia — the Swedish government is now considering introducing an age limit on social media.

Parents Push Back

Change didn’t come from the top alone. Across Sweden, parents are starting to rebel against Big Tech’s grip on childhood.

In the small town of Viken, two mothers launched a pact to delay smartphones until age 14. Within weeks, 250 people — about 5% of the town’s population — had signed on. In a different part of the country, parents created a similar pact to protect children against screen addiction and harmful social media content. Both initiatives were inspired by the newly founded organization Ki-DS, which aims to change the norms around smartphones based on the four principles of The Anxious Generation: no smartphones before 14, no social media before 16, phone-free schools, and more independence and free play in the real world. So far, parents in over 200 Swedish towns have signed the pledge.

Prominent influencers, such as tech entrepreneur and investor Sara Wimmercranz, have helped change the conversation around screens as well. Wimmercranz made headlines when she went public about limiting screen time for her four children to Saturdays and keeping the family’s summer vacations screen free. Now, going smartphone free or completely screen free during the summer holiday is trending.

The evidence of a turnaround is more than anecdotal. When the Swedish Agency for the Media published their latest report on children’s media use in September 2025, it showed:

  • Among 9-12-year-olds, the average daily use of digital devices has decreased by 40 minutes per day since 2022. The use is also decreasing among children in other age groups.

  • The share of 9-year-olds that don’t have a cell phone has almost doubled since 2022, a trend that is also apparent among 0–8-year-olds.

  • Social media use is decreasing among children under the age of 13.

  • Parents’ concern about children’s digital media use is increasing, especially among parents of young children.

Another telltale sign of the times is that one of Sweden’s largest electronics chains reported that the sales of “dumb phones” tripled from 2022 to 2024.


What Sweden’s Example Can Teach Other Nations

The other day, I received a brochure in the mailbox titled How do you talk about screens at home? It was from the Public Health Institute of Sweden and was packed with tips and information about screen use and what families can do to encourage healthy habits. It was exactly the kind of support I’d wished for seven years earlier, when my children were still young and we had just made the move across the Atlantic.

Sweden’s story is one of a society that looked at itself in the mirror and changed course. The lessons from the Swedish experience are simple but profound:

  • Policy matters. National bans on phones in schools ensure an environment conducive to learning and connection, allow students to engage more fully, and take pressure off of teachers who previously had to police device usage.

  • Official guidance empowers parents. Clear, research-based recommendations make it easier to set, justify, and enforce limits.

  • Community action works. Parents joining forces — village by village — can shift cultural norms faster than we think.

Once a country that prided itself on being the fastest to digitalize, Sweden is now proving that progress sometimes means knowing when you’ve taken a wrong turn, so you can double back and undo the mistake. As my daughter turns 18, it’s too soon to know exactly how Sweden’s radical digital experiment has shaped her generation, but the course reversal makes me hopeful about the future for Sweden’s children and — if other countries follow Sweden’s lead — for those around the world.



memories of .us


How much do you remember from elementary school? I remember vinyl tile floors, the playground, the teacher sentencing me to standing in the hallway. I had a teacher who was a chess fanatic; he painted a huge chess board in the paved schoolyard and got someone to fabricate big wooden chess pieces. It was enough of an event to get us on the evening news. I remember Run for the Arts, where I tried to talk people into donating money on the theory that I could run, which I could not. I'm about six months into trying to change that and I'm good for a mediocre 5k now, but I don't think that's going to shift the balance on K-12 art funding.

I also remember a domain name: bridger.pps.k12.or.us

I have quipped before that computer science is a field mostly concerned with assigning numbers to things, which is true, but it only takes us so far. Computer scientists also like to organize those numbers into structures, and one of their favorites has always been the tree. The development of wide-area computer networking surfaced a whole set of problems around naming or addressing computer systems that belong to organizations. A wide-area network consists of a set of institutions that manage their own affairs. Each of those institutions may be made up of departments that manage their own affairs. A tree seemed a natural fit. Even the "low level" IP addresses, in the days of "classful" addressing, were a straightforward hierarchy: each dot separated a different level of the tree, a different step in an organizational hierarchy.

The first large computer networks, including those that would become the Internet, initially relied on manually building lists of machines by name. By the time the Domain Name System was developed, this had already become cumbersome. The rapid growth of the internet was hard to keep up with, and besides, why did any one central entity---Jon Postel or whoever---even care about the names of all of the computers at Georgia Tech? Like IP addressing, DNS was designed as a hierarchy with delegated control. A registrant obtains a name in the hierarchy, say gatech.edu, and everything "under" that name is within the control, and responsibility, of the registrant. This arrangement is convenient for both the DNS administrator, which was a single organization even after the days of Postel, and for registrants.

We still use the same approach today... mostly. The meanings of levels of the hierarchy have ossified. Technically speaking, the top of the DNS tree, the DNS root, is a null label referenced by a trailing dot. It's analogous to the '/' at the beginning of POSIX file paths. "gatech.edu" really should be written as "gatech.edu." to make it absolute rather than relative, but since resolution of relative names almost always recurses to the top of the tree, the trailing dot is "optional" enough that it is now almost always omitted. The analogy to POSIX file paths raises an interesting point: domain names are backwards. The 'root' is at the end, rather than at the beginning, or in other words, they run from least significant to most significant, rather than most significant to least significant. That's just... one of those things, you know? In the early days one wasn't obviously better than the other, people wrote hierarchies out both ways, and as the dust settled the left-to-right convention mostly prevailed but right-to-left hung around in some protocols. If you've ever dealt with endianness, this is just one of those things about computers that you have to accept: we cannot agree on which way around to write things.
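The backwards ordering is easy to visualize by rewriting a domain name root-first, the way a POSIX path is written. A small illustrative Python sketch (the function name is made up for this example):

```python
def to_root_first(domain: str) -> str:
    """Rewrite a domain name most-significant-label-first, like a file path.

    The trailing dot is the empty root label, analogous to the leading '/'
    of a POSIX path; names without it are treated as absolute anyway here.
    """
    labels = domain.rstrip(".").split(".")
    return "/" + "/".join(reversed(labels))

print(to_root_first("gatech.edu."))            # /edu/gatech
print(to_root_first("bridger.pps.k12.or.us"))  # /us/or/k12/pps/bridger
```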

Anyway, the analogy to file paths also illustrates the way that DNS has ossified. The highest "real" or non-root component of a domain name is called the top-level domain or TLD, while the component below it is called a second-level domain. In the US, it was long the case that top-level domains were fixed while second-level domains were available for registration. There have always been exceptions in other countries and our modern proliferation of TLDs has changed this somewhat, but it's still pretty much true. When you look at "gatech.edu" you know that "edu" is just a fixed name in the hierarchy, used to organize domain names by organization type, while "gatech" is a name that belongs to a registrant.

Under the second-level name, things get a little vague. We are all familiar with the third-level name "www," which emerged as a convention for web servers and became a practical requirement. Web servers having the name "www" under an organization's domain was such a norm for so many years that hosting a webpage directly at a second-level name came to be called a "naked domain" and had some caveats and complications.

Other than www, though, there are few to no standards for the use of third-level and below names. Larger organizations are more likely to use third-level names for departments, infrastructure operators often have complex hierarchies of names for their equipment, and enterprises the world 'round name their load-balanced webservers "www2," "www3" and up. If you think about it, this situation seems like kind of a failure of the original concept of DNS... we do use the hierarchy, but for the most part it is not intended for human consumption. Users are only expected to remember two names, one of which is a TLD that comes from a relatively constrained set.

The issue is more interesting when we consider geography. For a very long time, TLDs have been split into two categories: global TLDs, or gTLDs, and country-code TLDs, or ccTLDs. ccTLDs reflect the ISO country codes of each country, and are intended for use by those countries, while gTLDs are arbitrary and reflect the fact that DNS was designed in the US. The ".gov" gTLD, for example, is for use by the US government, while the UK is stuck with ".gov.uk". This does seem unfair but it's now very much cemented into the system: for the large part, US entities use gTLDs, while entities in other countries use names under their respective ccTLDs. The ".us" ccTLD exists just as much as all the others, but is obscure enough that my choice to put my personal website under .us (not an ideological decision but simply a result of where a nice form of my name was available) sometimes gets my email address rejected.

Also, a common typo for ".us" is ".su" and that's geopolitically amusing. .su is of course the ccTLD for the Soviet Union, which no longer exists, but the ccTLD lives on in a limited way because it became Structurally Important and difficult to remove, as names and addresses tend to do.

We can easily imagine a world where this historical injustice had been fixed: as the internet became more global, all of our US institutions could have moved under the .us ccTLD. In fact, why not go further? Geographers have long organized political boundaries into a hierarchy. The US is made up of states, each of which has been assigned a two-letter code by the federal government. We have ".us", why not "nm.us"?

The answer, of course, is that we do.

In the modern DNS, all TLDs have been delegated to an organization that administers them. The .us TLD is rightfully administered by the National Telecommunications and Information Administration, on the same basis by which all ccTLDs are delegated to their respective national governments. Being the US government, NTIA has naturally privatized the function through a contract to telecom-industrial-complex giant Neustar. Being a US company, Neustar restructured and sold its DNS-related business to GoDaddy. Being a US company, GoDaddy rose to prominence on the back of infamously tasteless television commercials, and its subsidiary Registry Services LLC now operates our nation's corner of the DNS.

But that's the present---around here, we avoid discussing the present so as to hold crushing depression at bay. Let's turn our minds to June 1993, and the publication of RFC 1480 "The US Domain." To wit:

Even though the original intention was that any educational institution anywhere in the world could be registered under the EDU domain, in practice, it has turned out with few exceptions, only those in the United States have registered under EDU, similarly with COM (for commercial). In other countries, everything is registered under the 2-letter country code, often with some subdivision. For example, in Korea (KR) the second level names are AC for academic community, CO for commercial, GO for government, and RE for research. However, each country may go its own way about organizing its domain, and many have.

Oh, so let's sort it out!

There are no current plans of putting all of the organizational domains EDU, GOV, COM, etc., under US. These name tokens are not used in the US Domain to avoid confusion.

Oh. Oh well.

Currently, only four year colleges and universities are being registered in the EDU domain. All other schools are being registered in the US Domain.

Huh?

RFC 1480 is a very interesting read. It makes passing references to so many facets of DNS history that could easily be their own articles. It also defines a strict, geography-based hierarchy for the .us domain that is a completely different universe from the one in which we now live. For example, we learned above that, in 1993, only four-year institutions were being placed under .edu. What about the community colleges? Well, RFC 1480 has an answer. Central New Mexico Community College would, of course, fall under cnm.cc.nm.us. Well, actually, in 1993 it was called the Technical-Vocational Institute, so it would have been tvi.tec.nm.us. That's right, the RFC describes both "cc" for community colleges and "tec" for technical institutes.

Even more surprising, it describes placing entities under a "locality" such as a city. The examples of localities given are "berkeley.ca.us" and "portland.wa.us", the latter of which betrays an ironic geographical confusion. It then specifies "ci" for city and "co" for county, meaning that the city government of our notional Portland, Washington would be ci.portland.wa.us. Agencies could go under the city government component (the RFC gives the example "Fire-Dept.CI.Los-Angeles.CA.US") while private businesses could be placed directly under the city (e.g. "IBM.Armonk.NY.US"). The examples here reinforce that the idea itself is different from how we use DNS today: The DNS of RFC 1480 is far more hierarchical and far more focused on full names, without abbreviations.
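To make the shape of these names concrete, here's a small sketch that composes RFC 1480-style locality names from their parts. The helper function and its parameter names are my own invention for illustration; only the resulting names come from the RFC's examples:

```python
def rfc1480_name(entity, state, locality=None, qualifier=None):
    """Compose an RFC 1480-style name, least significant label first.

    qualifier is an optional "ci" (city) or "co" (county) component
    placed between the entity and its locality, per the RFC's examples.
    """
    parts = [entity]
    if qualifier:
        parts.append(qualifier)
    if locality:
        parts.append(locality)
    parts += [state, "us"]
    return ".".join(parts).lower()

# Reproducing the RFC's own examples:
print(rfc1480_name("Fire-Dept", "CA", "Los-Angeles", "ci"))  # fire-dept.ci.los-angeles.ca.us
print(rfc1480_name("IBM", "NY", "Armonk"))                   # ibm.armonk.ny.us
```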

Of course, the concept is not limited to local government. RFC 1480 describes "fed.us" as a suffix for the federal government (the example "dod.fed.us" illustrates that this has not at all happened), and even "General Independent Entities" and "Distributed National Institutes" for those trickier cases.

We can draw a few lessons from how this proposal compares to our modern day. Back in the 1990s, .gov was limited to the federal government. The thinking was that all government agencies would move into .us, where the hierarchical structure made it easier to delegate management of state and locality subtrees. What actually happened was the opposite: the .us thing never really caught on, and a more straightforward and automated management process made .gov available to state and local governments. The tree has effectively been flattened.

That's not to say that none of these hierarchical names ever caught on. GoDaddy continues to maintain what they call the "usTLD Locality-Based Structure". At the decision of the relevant level of the hierarchy (e.g. a state), locality-based subdomains of .us can either be delegated to the state or municipality to operate, or operated by GoDaddy itself as the "Delegated Manager." The latter arrangement is far more common, and it's going to stay that way: RFC 1480 names are not dead, but they are on life support. GoDaddy's contract allows them to stop onboarding any additional delegated managers, and they have.

Few of these locality-based names found wide use, and there are even fewer today. Multnomah County Library once used "multnomah.lib.or.us," which I believe was actually the very first "library" domain name registered. It now silently redirects to "multcolib.org", which we could consider a graceful name only in that the spelling of "Multnomah" is probably not intuitive to those not from the region. As far as I can tell, the University of Oregon and OGI (part of OHSU) were keeping very close tabs on the goings-on of academic DNS, as Oregon entities are conspicuously over-represented in the very early days of RFC 1480 names---behind only California, although Georgia Tech and Trent Heim of the former Colorado company XOR both give their respective states a run for their money.

"co.bergen.nj.us" works, but just gets you a redirect notice page to bergencountynj.gov. It's interesting that this name is actually longer than the RFC 1480 name, but I think most people would agree that bergencountynj.gov is easier to remember. Some of that just comes down to habit (we all know ".gov"), but some of it is more fundamental. I don't think that people often understand the hierarchical structure of DNS, at least not intuitively, and that makes "deeply hierarchical" (as GoDaddy calls them) names confusing.

Certainly the RFC 1480 names for school districts produced complaints. They were also by far the most widely adopted. You can pick and choose examples of libraries (.lib.&lt;state&gt;.us) and municipal governments that have used RFC 1480 names, but school districts are another world: most school districts that existed at the time have a legacy of using RFC 1480 naming. As one of its many interesting asides, RFC 1480 explains why: the practice of putting school districts under .k12.&lt;state&gt;.us actually predates RFC 1480. Indeed, the RFC seems to have been written in part to formalize the existing practice. The idea of the k12.&lt;state&gt;.us hierarchy originated within IANA in consultation with InterNIC (newly created at the time) and the Federal Networking Council, a now-defunct advisory committee of federal agencies that made a number of important early decisions about internet architecture.

RFC 1480 is actually a revision on the slightly older RFC 1386, which instead of saying that schools were already using the k12 domains, says that "there ought to be a consistent scheme for naming them." It then says that the k12 branch has been "introduced" for that purpose. RFC 1386 is mostly silent on topics other than schools, so I think it was written mostly to document the decision made about schools with other details about the use of locality-based domains left sketchy until the more thorough RFC 1480.

The decision to place "k12" under the state rather than under a municipality or county might seem odd, but the RFC gives a reason. It's not unusual for school districts, even those named after a municipality, to cover a larger area than the municipality itself. Albuquerque Public Schools operates schools in the East Mountains; Portland Public Schools operates schools across multiple counties and beyond city limits. Actually the RFC gives exactly that second one as an example:

For example, the Portland school district in Oregon, is in three or four counties. Each of those counties also has non-Portland districts.

I include that quote mostly because I think it's funny that the authors now know what state Portland is in. When you hear "DNS" you think Jon Postel, at least if you're me, but RFC 1480 was written by Postel along with a less familiar name, Ann Westine Cooper. Cooper was a coworker of Postel at USC, and RFC 1480 very matter-of-factly names the duo of Postel and Cooper as the administrator of the .US TLD. That's interesting considering that almost five years later Postel would become involved in a notable conflict with the federal government over control of DNS---one of the events that precipitated today's eccentric model of public-private DNS governance.

There are other corners of the RFC 1480 scheme that were not contemplated in 1993, and have managed to outlive many of the names that were. Consider, for example, our indigenous nations: these are an exception to the normal political hierarchy of the US. The Navajo Nation, for example, occupies a status that is often described as parallel to a state's, but isn't really. Native nations are sovereign, but are also subject to federal law by statute, and subject to state law by various combinations of statute, jurisprudence, and bilateral agreement. I didn't really give any detail there and I probably still got something wrong, such is the complicated legal history and present of Native America. So where would a native sovereign government put their website? They don't fall under the traditional realm of .gov, federal government, nor do they fall under a state-based hierarchy. Well, naturally, the Navajo Nation is found at navajo-nsn.gov.

We can follow the "navajo" part but the "nsn" is odd, unless they spelled "nation" wrong and then abbreviated it, which I've always thought is what it looks like on first glance. No, this domain name is very much an artifact of history. When the problem of sovereign nations came to Postel and Cooper, the solution they adopted was a new affinity group, like "fed" and "k12" and "lib": "nsn", standing for Native Sovereign Nation. Despite being a latecomer, nsn.us probably has the most enduring use of any part of the RFC 1480 concept. Dozens of pueblos, tribes, bands, and confederations still use it. squamishtribe.nsn.us, muckleshoot.nsn.us, ctsi.nsn.us, sandiapueblo.nsn.us.

Yet others have moved away... in a curiously "partial" fashion. navajo-nsn.gov as we have seen, but an even more interesting puzzler is tataviam-nsn.us. It's only one character away from a "standardized" NSN affinity group locality domain, but it's so far away. As best I can tell, most of these governments initially adopted "nsn.us" names, which cemented the use of "nsn" in a similar way to "state" or "city" as they appear in many .gov domains to this day. Policies on .gov registration may be a factor as well: the policies around acceptable .gov names seem to have gone through a long period of informality and then changed a number of times. Without having researched it too deeply, I have seen bits and pieces that make me think that at various points NTIA has preferred that .gov domains for non-federal agencies have some kind of qualifier to indicate their "level" in the political hierarchy. In any case, it's a very interesting situation because "native sovereign nation" is not otherwise a common term in US government. It's not like lawyers or lawmakers broadly refer to tribal governments as NSNs; the term is pretty much unique to the domain names.

So what ever happened to locality-based names? RFC 1480 names have fallen out of favor to such an extent as to be considered legacy by many of their users. Most Americans are probably not aware of this name hierarchy at all, despite it ostensibly being the unified approach for this country. In short, it failed to take off, and those sectors that had widely adopted it (such as schools) have since moved away. But why?

As usual, there seem to be a few reasons. The first is user-friendliness. This is, of course, a matter of opinion---but anecdotally, many people seem to find deeply hierarchical domain names confusing. This may be a self-fulfilling prophecy, since the perception that multi-part DNS names are user-hostile means that no one uses them which means that no users are familiar with them. Maybe, in a different world, we could have broken out of that loop. I'm not convinced, though. In RFC 1480, Postel and Cooper argue that a deeper hierarchy is valuable because it allows for more entities to have their "obviously correct" names. That does make sense to me, splitting the tree up into more branches means that there is less name contention within each branch. But, well, I think it might be the kind of logic that is intuitive only to those who work in computing. For the general public, I think long multi-part names quickly become difficult to remember and difficult to type. When you consider the dollar amounts that private companies have put into dictionary word domain names, it's no surprise that government agencies tend to prefer one-level names with full words and simple abbreviations.
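Postel and Cooper's contention argument can be shown with a toy model (the town list here is invented for the example): in a flat namespace, several towns wanting the same name collapse into one winner, while a state-based hierarchy gives each its "obviously correct" name:

```python
# Four (invented) towns all want the name "springfield".
towns = [("springfield", st) for st in ("il", "ma", "mo", "or")]

flat = {name for name, _ in towns}                        # collisions collapse
hierarchical = {f"{name}.{st}.us" for name, st in towns}  # one name each

print(len(flat), "flat name vs", len(hierarchical), "hierarchical names")
```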

I also think that the technology outpaced the need that RFC 1480 was intended to address. The RFC makes it very clear that Postel and Cooper were concerned about the growing size of the internet, and expected the sheer number of organizations going online to make maintenance of the DNS impractical. They correctly predicted the explosion of hosts, but not the corresponding expansion of the DNS bureaucracy. Between the two versions of the .us RFC, DNS operations were contracted to Network Solutions. This began a winding path that led to delegation of DNS zones to various private organizations, most of which fully automated registration and delegation and then federated it via a common provisioning protocol. The size of, say, the .com zone really did expand beyond what DNS's designers had originally anticipated... but it pretty much worked out okay. The mechanics of DNS's maturation probably had a specifically negative effect on adoption of .us, since it was often under a different operator from the "major" domain names and not all "registrars" initially had access.

Besides, the federal government never seems to have been all that on board with the concept. RFC 1480 could be viewed as a casualty of the DNS wars, a largely unexplored path on the branch of DNS futures that involved IANA becoming completely independent of the federal government. That didn't happen. Instead, in 2003 .gov registration was formally opened to municipal, state, and tribal governments. It became federal policy to encourage use of .gov for trust reasons (DNSSEC has only furthered this), and .us began to fall by the wayside.

That's not to say that RFC 1480 names have ever gone away. You can still find many of them in use. state.nm.us doesn't have an A record, but governor.state.nm.us and a bunch of other examples under it do. The internet is littered with these locality-based names, many of them hiding out in smaller agencies and legacy systems. Names are hard to get right, and one of the reasons is that they're very hard to get rid of.

When things are bigger, names have to be longer. There is an argument that with only 8-character names, and in each position allow a-z, 0-9, and -, you get 37**8 = 3,512,479,453,921 or 3.5 trillion possible names. It is a great argument, but how many of us want names like "xs4gp-7q". It is like license plate numbers, sure some people get the name they want on a vanity plate, but a lot more people who want something specific on a vanity plate can't get it because someone else got it first. Structure and longer names also let more people get their "obviously right" name.
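The RFC's arithmetic is easy to verify:

```python
# 8-character names over a 37-symbol alphabet: a-z, 0-9, and '-'
alphabet = 26 + 10 + 1
total = alphabet ** 8
print(f"{total:,} possible 8-character names")  # 3,512,479,453,921
```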

You look at Reddit these days and see all these usernames that are two random words and four random numbers, and you see that Postel and Cooper were right. Flat namespaces create a problem: names must be either complex or long, and people dislike both. What I think they got wrong, at a usability level, is that deep hierarchies still create names that are complex and long. It's a kind of complexity that computer scientists are more comfortable with, but that's little reassurance when you're staring down the barrel of "bridger.pps.k12.or.us".


The Mac calculator’s original design came from letting Steve Jobs play with menus for ten minutes

1 Share

In February 1982, Apple employee #8 Chris Espinosa faced a problem that would feel familiar to anyone who has ever had a micromanaging boss: Steve Jobs wouldn’t stop critiquing his calculator design for the Mac. After days of revision cycles, the 21-year-old programmer found an elegant solution: He built what he called the “Steve Jobs Roll Your Own Calculator Construction Set” and let Jobs design it himself.

This delightful true story comes from Andy Hertzfeld’s Folklore.org, a legendary tech history site that chronicles the development of the original Macintosh, which was released in January 1984. I ran across the story again recently and thought it was worth sharing as a fun anecdote in an age where influential software designs often come by committee.

Design by menu

Chris Espinosa started working for Apple at age 14 in 1976 as the company's youngest employee. By 1981, while Espinosa was studying at UC Berkeley, Jobs convinced him to drop out and work on the Mac team full time.


DOCTYPE Magazine

1 Share
Matt Round made a 1980s-style type-in print magazine, full of novel standalone web apps you type in by hand #