
The Do Now: Active Ingredients


I wrote a post a year ago about my “Do Now” routine — the first thing students do when they enter my class. It’s one of the most popular posts I’ve written, which I’ve always found funny. My routine is pretty boring and straightforward. Nothing flashy about it. Today I’m sharing an update. My routine is mostly the same, so my focus for this post is what I call the “active ingredients” in the Do Now — the features of the routine that I find most important. There are a few fun little tweaks from last year, though!

The Least Useful Instructional Time in a Lesson

Before getting into the routine itself, one important piece of framing. In my opinion, the Do Now is the least useful instructional time in a lesson. I have students coming in at different times. I need to take attendance. Two students don’t have pencils. Another student has an urgent issue they need to tell me about. I get a message from the front office asking me to send a student up to see them.

In every other part of my lesson, I can frame the activity for students. I can explain why we are doing something and what I want them to learn. I can actively monitor while students work. During the Do Now, those things are a bit harder. This might be different for you! If you work at a school where all students come in at the same time and you can reliably give 100% of your attention to learning from the moment they arrive to your class, great. That sounds awesome. That’s not the case where I work, and what I’m describing is true for most schools.

I don’t mean to say that the Do Now isn’t important. An effective Do Now routine is great for class culture. It sets the tone for the rest of class, and communicates to students that, in this class, we get right to work because we have a lot to learn.

I think the most common mistake teachers make with a Do Now is trying to do too much. It’s tempting to create a warmup activity that previews what students are about to learn, getting right to the heart of the lesson. I think that’s often a mistake. If you wait until every student is present, and attendance is taken, and all the little wrinkles are ironed out, you’ve wasted time and lost momentum. If you try to get students working right away, you can’t frame the activity smoothly and there’s a risk of confusing students. If you can’t give your full attention to learning, all those ambitions might be wasted.

The Routine

Here’s the routine. It’s dead simple.

Students walk into my room and pick up a half-sheet handout that looks like this:1

Then they sit down at their desks. I have a set of five problems projected at the front of the room. Here was a Do Now from last week:

Students answer the five questions on their handout. That’s it.

If you’d like to try this, here is a template for the handout, and here is a template for projecting the problems.

Active Ingredients

Here are what I’ll call the “active ingredients” in this routine — the elements that I think play the largest role in making the routine successful.

  • I spend very little time prepping the activity. It takes me one minute to write the questions each day, and I copy and paper-cut the half-sheets in bulk once every two weeks or so.

  • The routine is short. The Do Now is the least useful instructional time in a lesson. I aim for three minutes flat, from the start of class until I go over the questions. I quickly have students check their answers. Then we move on to the rest of the lesson. Short and sweet.

  • I occasionally grade the Do Now to send a message that solving these problems is valuable practice and I expect students to remember what they learn. I grade them more often at the beginning of the year when I prioritize helping students build good habits, and I grade them less and less as the year goes on.

  • All questions are retrieval practice on topics students have learned previously and are relatively straightforward. There will be time for more complex questions later when I have the chance to provide more scaffolding and support.

  • I avoid putting more than one question on the Do Now that I’m uncertain students will get right. If half the class is getting two or three or four questions wrong, I screwed up. First, that’s not helping to build confidence at the start of class. Second, that’s too many questions to address effectively.

  • I actively monitor (as much as possible while also taking attendance and dealing with whatever else I need to deal with). I circulate around the room once when the three minutes are almost up.

  • When I circulate around the room, I pick the question I think students are most likely to struggle with and look at every student’s paper to see their answers. I find I can reliably do this for one question. For more than one question, it takes too long and the data is too hard for me to keep track of.2

  • Based on that one question, I decide if we will follow up. If a significant number of students get it wrong, we talk a bit about the question and do some practice on mini whiteboards. I’ll then make sure to include that question for a few more days to see if students are getting it right or if I need to do a larger reteach.

  • I have a spreadsheet to track which skills to include. Before I used this spreadsheet, I would often forget to include old skills or repeat the same skills day after day. The spreadsheet automatically schedules retrieval practice for each skill, beginning with several days of retrieval in a row before spacing out practice at larger and larger intervals. Credit to Lee Wheeler for this idea.3

That’s it! That’s all there is to it. I want to emphasize again, this seems really simple and unambitious. That’s the point. I’m not trying to do too much. I’m giving students a bit of quick retrieval practice, a confidence boost at the start of class, and gathering data for me to act on. Three minutes flat. Maybe we spend two more minutes on a common mistake. Then we move on to the rest of the lesson.

1

I wrote in my post last year about how I also have daily number puzzles on my Do Nows. You can read in that post about how the puzzles work. I haven’t included them here because they’re a bit tedious to explain and I don’t think they’re an active ingredient in the Do Now. They’re fun, and I like them, but I only use them in my regular 7th grade math classes and not the extra-support math classes I also teach. The Do Now works just as well in those classes.

2

An idea from Craig Barton I have experimented with but don’t use consistently: once students complete the Do Now, have all students write their answers to the first problem on mini whiteboards and hold them up for me to see. Then the second problem, etc. This is a great way to get full-class data on every question. It does take additional time. If I’m doing a good job, there aren’t many mistakes, so this data collection feels superfluous. Even if there are mistakes, I’m reluctant to go over multiple questions in detail; going over too many questions tends to muddle things and take time away from the rest of the lesson.

3

A few notes about the spacing spreadsheet. First, big thanks to Lee Wheeler for inspiring the idea. Lee’s is in Excel and I am terrible with Excel, so I wanted to make my own in Google Sheets. I use the Fibonacci sequence to space practice (1, 2, 3, 5, 8, 13, 21, etc., each number the sum of the last two). I do this in part because I’m a nerd and I love the Fibonacci sequence. Many spacing apps out there space practice by a much larger interval. I prefer the Fibonacci sequence because it starts with some repetition (days 1, 2, and 3) and doesn’t leave too much time between early retrievals. I think this is important for real classrooms. We have weekends and breaks interrupting retrieval practice, and a shorter interval helps to compensate. I’m also scheduling retrieval for an entire class and not one student, so erring on the side of more retrieval helps to accommodate absences, the range of student achievement, etc.
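The spreadsheet itself isn’t reproduced here, so this is only a minimal sketch of how that Fibonacci spacing might be generated, written in Python rather than Google Sheets. The function names, the number of reviews, and the example skill are all hypothetical, not taken from the actual sheet.

```python
from datetime import date, timedelta

def fibonacci(count):
    """First `count` terms of the sequence 1, 2, 3, 5, 8, 13, 21, ...
    (each term is the sum of the previous two)."""
    terms, a, b = [], 1, 2
    for _ in range(count):
        terms.append(a)
        a, b = b, a + b
    return terms

def retrieval_schedule(skill, first_taught, reviews=8):
    """Dates on which a skill should reappear on the Do Now.

    Reviews land 1, 2, 3, 5, 8, ... days after the skill is first taught,
    so practice starts with several days in a row and then spaces out
    at larger and larger intervals.
    """
    return [(skill, first_taught + timedelta(days=offset))
            for offset in fibonacci(reviews)]

# Example: a skill taught on March 3 reappears on March 4, 5, 6, 8, 11, 16, 24, ...
for skill, day in retrieval_schedule("one-step equations", date(2025, 3, 3)):
    print(day.isoformat(), skill)
```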


AI use can fry your brain, HBR study finds


A new study warns of the dangers of “brain fry” — a form of mental exhaustion linked to intensive AI use. The condition is described as mental fatigue that can occur when people use AI tools to an extent that exceeds their cognitive capacity. Symptoms can include mental fog, difficulty concentrating, slower decision-making, and sometimes headaches.

The concept is discussed in the Harvard Business Review (HBR), according to Axios. The study was conducted by researchers at Boston Consulting Group and the University of California, Riverside.

In a survey of 1,488 full-time employees in the US, 14% of participants who use AI at work said they had experienced this type of mental exhaustion. The phenomenon appears to be particularly common among early AI users and people who work with multiple AI tools simultaneously.

The researchers warn that the problem can have consequences in the workplace, including more mistakes, decision fatigue, and an increased desire to leave a job. At the same time, they emphasize that “brain fry” is not the same thing as burnout. The researchers noted that AI can be positive when the technology is used to automate routine tasks, though employers should be cautious about requiring excessive use of AI at work.




The eye of the mathematician


Black and white photo of a chalkboard filled with complex mathematical equations and diagrams.

Is mathematical beauty real? Or is it just a subjective, human ‘wow’ that is becoming redundant in an AI age?

- by Rita Ahmadi

Read on Aeon


The Evidence Crisis in Math Reform


Rahim Nathwani is a technologist based in San Francisco, making businesses better with tech and AI. He believes almost everyone can and should learn more math. His 9-year-old son is on track to complete Algebra 1 before 5th grade.


There is a pattern in American math education reform. An organization claims that a new teaching approach produces dramatic gains. Schools and districts adopt it. Years later, when someone checks the evidence, the gains turn out to be overstated, the methodology turns out to be flawed, and the students who were promised a better education are worse off than before.

The organizations driving these reforms say they want equity. But when their evidence doesn’t hold up, the children who suffer most are the ones these reforms were supposed to help: kids from low-income families, whose parents can’t hire tutors to fill the gaps.

This pattern has played out at least three times with research associated with YouCubed, one of the most influential math education organizations in the country.

The Chart That Doesn’t Match the Data

YouCubed, a Stanford-based initiative led by Professor Jo Boaler, recently published a PDF claiming dramatic math score improvements in Healdsburg Unified School District in California. The chart showed a compelling story: scores starting very low, climbing steadily, ending impressively high. It’s the kind of result that gets shared in school board meetings and cited in policy decisions.

There was one problem: The numbers don’t match the public record.

California publishes its student test data openly. Anyone can look it up. For the most recent year, YouCubed’s chart and the state data roughly agree: about 75% of 5th graders at Healdsburg meet grade-level standards. But for the baseline year (the “before” picture that makes the improvement look dramatic), YouCubed’s chart shows approximately 24%. The state’s official data for the same district, year, and grade band shows approximately 44%.

Here’s the chart from the YouCubed PDF:

And here’s a recreation of the chart based on official government numbers:

That’s not a rounding difference. That’s a gap of 20 percentage points, and it runs in exactly the direction that makes the improvement story look better.

When you replace YouCubed’s baseline with the actual public data, the narrative changes completely. Scores were roughly flat for several years, then showed a recent uptick. That might be a legitimate result. But it is not the dramatic turnaround that YouCubed presented to educators and policymakers, and which Jo Boaler shared on X.

Maybe There’s an Innocent Explanation?

In education research circles, it’s common to see researchers exclude certain subgroups from their data. The most frequent justification is something like: “We excluded chronically absent students because they didn’t receive the full intervention.”

That’s a methodologically reasonable thing to do, as long as you say you’re doing it and explain why.

The YouCubed document makes no mention of any data exclusion. But let’s ask: could data exclusion explain the discrepancy?

Here’s the problem. If you exclude chronically absent students from a proficiency calculation, you would generally expect the remaining percentage to go up, not down. Students who miss a lot of school typically perform worse on tests. Take them out of the calculation, and the percentage of students who meet standards would likely increase, not decrease.

But YouCubed’s 2016–2017 figure (24%) is far below the state’s figure (44%). That means their exclusion would have had to somehow remove the higher-performing students from the calculation. That is the opposite of what “excluding chronically absent students” would do.

We cannot think of a legitimate methodological reason why a district’s proficiency rate would be nearly cut in half by a data exclusion. If there is one, YouCubed’s report should explain it. It doesn’t.

A Quick Explainer: Why Cohort Comparisons Matter

Even setting aside the data discrepancy, there is a deeper methodological problem with the chart — one that matters even if every number in it were correct.

The chart compares “5th graders in 2017” to “5th graders in 2023.” These are entirely different groups of children. The kids who were in 5th grade in 2017 are in their early 20s now. They are not the same kids who were tested in 2023.

Imagine you run a tutoring program. In Year 1, you tutor a class of students who happen to come from lower-income families and score an average of 60 on a test. In Year 5, a different group of students — from more affluent families, in a district that has added new staff — scores an average of 80. Can you claim your tutoring program caused the improvement? Of course not.

The right way to evaluate whether an educational intervention works is to track the same group of students over time and see how they progress. This is called a “cohort study.” It’s standard in medical research (think clinical trials), and it’s the gold standard in education research too.
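The article doesn’t show how its cohort table was assembled, but the idea is simple enough to sketch. Below is a rough Python/pandas illustration under the assumption that you start from one proficiency figure per (year, grade) pair for a district; the column names and numbers are placeholders, not the real Healdsburg data.

```python
import pandas as pd

# Hypothetical input: one row per (school year, grade) for a single district,
# with the percentage of students meeting state standards in math.
# The real files published by California use different column names.
scores = pd.DataFrame({
    "year":    [2017, 2017, 2017, 2018, 2018, 2018, 2019, 2019, 2019],
    "grade":   [3,    4,    5,    3,    4,    5,    3,    4,    5],
    "pct_met": [50.0, 47.0, 44.0, 52.0, 48.0, 41.0, 49.0, 46.0, 43.0],
})

# A cohort is identified by the year its students entered 3rd grade:
# the 2018 4th graders and the 2019 5th graders are the same children.
scores["cohort"] = scores["year"] - (scores["grade"] - 3)

# One row per cohort, one column per grade. Reading across a row follows
# the same group of students as they move through the grades, which is
# the comparison a cohort study calls for.
cohort_table = scores.pivot(index="cohort", columns="grade", values="pct_met")
print(cohort_table)
```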

We went ahead and pulled the cohort data from California’s public records. We tracked each group of students as they moved through the grades. What did that analysis show? Here it is:

It did not show consistent improvement across cohorts. The picture was much more mixed than the YouCubed chart suggests.

In fact, if we look at the most recent cohorts (the kids who entered 3rd grade in 2018 or later), almost all of them show a downward trend: the chance that any given kid meets state standards in math goes down as they spend more years in the Healdsburg school district.

This matters because the entire point of the YouCubed document is presumably to persuade: to show that the YouCubed approach works, and that other districts should adopt it. If the underlying analysis is methodologically flawed (and if the baseline data appears to be inaccurate) then that persuasion is built on sand.

The Study That Couldn’t Be Verified

This would be easier to set aside if it were an isolated mistake. It is not.

YouCubed’s influence traces back to Professor Boaler’s most celebrated research: the “Railside” study, published in 2008, which claimed that a reform-math approach at a California high school produced dramatic gains, especially for minority students. That study became enormously influential. It shaped curricula, frameworks, and professional development programs across the country.

In 2012, two Stanford mathematics professors and several colleagues published an extended critique. They raised serious concerns: the school had been misrepresented, the data couldn’t be independently verified, and the conclusions overstated what the evidence showed. Rather than release the underlying data for independent review, Professor Boaler characterized the critique as harassment.

The data was never made fully available. The study remains widely cited.

The Framework Built on Selective Evidence

In 2023, California adopted a new Mathematics Framework that drew heavily on Boaler’s ideas and YouCubed’s recommendations. The framework discouraged ability grouping, de-emphasized procedural fluency, and delayed access to advanced math courses — all in the name of equity.

More than 1,000 scientists and mathematicians signed an open letter objecting. Their concerns: the framework cited research selectively, dismissed evidence from high-performing countries, and would likely harm the students it claimed to help — particularly those aiming for STEM careers.

A Stanford mathematician who reviewed the framework’s citations in detail documented numerous cases where cited studies either didn’t support the claims being made or actively contradicted them. Misrepresented citations, like misrepresented data, are hard to attribute to innocent error when they consistently run in the same direction.

The Pattern

Three cases. In each one, research associated with the same organization claimed dramatic results. In each one, outside review found serious problems with the evidence. And in each one, the discrepancies ran in the same direction: making the reform approach look more effective than the data actually showed.

A single error is understandable. Research is hard, and mistakes happen. But when the mistakes consistently flatter the same conclusion, the most straightforward explanation is not bad luck.

This matters because education policy is downstream of evidence. When a district eliminates ability grouping, it cites research. When a state delays algebra, it cites research. When a school board adopts a new curriculum, it cites research. If that research is unreliable, every decision built on it is compromised.

Who Pays the Price

When a math reform fails, the cost is not distributed equally.

Families with resources respond to a weak math program the same way they respond to every institutional failure: they route around it. They hire a tutor. They enroll in a supplemental program. They move to a different district. And this last dynamic is perhaps the most destructive: declining public school enrollment means system-wide strain, fewer resources for schools and teachers, and eventual school closures. This phenomenon is showing up across the country, and it seems to be driven by wealthier, white, and Asian families, especially those in deep-blue districts experiencing acute education dysfunction. Seattle has seen thousands of families leave public schools, and private schools in San Francisco are brimming with new applicants.

Families earning $200,000 a year have options and are evidently ready and willing to use them.

A family earning $40,000 does not. When school math instruction is the only math instruction a child receives, the quality of that instruction is the ceiling on that child’s opportunity. A low-income child whose school adopted a curriculum based on overstated evidence doesn’t get a do-over.

The cruelest irony is that these reforms are justified in the language of equity. The children they claim to champion are the ones with the fewest alternatives when the reforms don’t work.

What Should Be Different

None of this means that math education shouldn’t evolve, or that traditional approaches are beyond criticism. But it does mean that the organizations driving reform need to meet a basic evidentiary standard: the same standard we’d expect of a pharmaceutical company claiming its drug works.

That standard is straightforward. When you publish results, the underlying data should be available for independent review. When your chart shows a number, it should match the public record. When you cite a study, the study should actually support the claim you’re making. And when someone finds an error, the response should be a correction, not an accusation.

These are not hostile demands. They are the minimum that parents, teachers, and policymakers deserve before reshaping how children learn mathematics.

One bad chart would be forgivable. But what we’ve described here is a pattern: influential organizations are shaping how millions of children learn math based on research that hasn’t survived basic scrutiny, and the kids who pay the price are the same ones the reforms were supposed to help.

Your child’s math education deserves evidence that holds up when someone checks.




Noids


The Evolving Foundations of Math

An exploration of how mathematicians are still renovating and rebuilding the core pillars of their field today.

The post The Evolving Foundations of Math first appeared on Quanta Magazine


