She passed high school math with A’s and B’s. In college, she had to start over.


Chalkbeat Ideas is a section featuring reported columns on the big ideas and debates shaping American schools. Sign up for the Ideas newsletter to follow our work.

Cecilia Lopez Alvarado was scrolling through Reddit one evening in her dorm room when she came across a thread about students at the University of California San Diego who struggled with basic math.

A report had warned of an alarming decline in students’ math skills at UCSD, a highly selective university. It drew international headlines because of what it seemed to say about the state of American education. Commentators blamed high school grade inflation, test-free college admissions, and even the students themselves.

Alvarado read these headlines with a growing sense of frustration. People didn’t understand the full story here, she thought. And she would know: Alvarado is a UCSD student who had to take remedial math at the school. She read Reddit comments about how students should have mastered these topics in high school. She wrote back in the comments that some schools — like hers, a high-poverty public high school in San Bernardino, California — don’t even offer calculus.

The consequences of Alvarado’s challenges in math have been significant. After taking the remedial course, she still fell short on a math exam, which covers high school topics like trigonometry and precalculus. Alvarado therefore couldn’t move on to calculus, which was required for her initial major, business economics. Because of this, she recently decided to pursue a degree in communications instead. She aspires to be an accountant and is minoring in accounting.

Students like Alvarado are at the center of a debate in American education. How do high schools ensure students graduate with sufficient math skills? Who should get access to the resources of an elite college education? What role do universities themselves have in helping students who are underprepared?

High school graduation photo of UCSD sophomore Cecilia Lopez Alvarado

The perspectives of these students have gotten strikingly little attention. That’s why I wanted to speak to Alvarado, a 19-year-old sophomore. Her story does not offer entirely simple answers, but it’s worth hearing. She looks back in frustration at her high school education, where she believes teachers were too lenient. But she’s also convinced she’s benefited from the education she’s received and the peers she’s met at UCSD.

Our conversation has been edited for length and clarity.

What was your high school math experience like? How did you do in math classes?

I usually passed with A’s and B’s, but I feel like a lot of the information never really stuck with me, just because we were granted so many opportunities to redo exams and homework. It felt like as long as you retake the exam and get 100% it doesn’t matter if you really know what you’re doing or not.

Do you know why you were given so many opportunities?

I’m sure it’s because they wanted us to not have F’s and D’s on our transcripts. It was just wanting us to be able to move on to the next grade. It never really was to hold us accountable. Instead of being like, hey, you only get one retake, it was just, you can retake it as many times as you like, to get a grade that you’re comfortable with.

Did you feel like it would have been better if they held you accountable more?

I think so, because then you have to face the reality of not just your grades, but what you really know and what you’re really learning. You have to discipline yourself more to be like, hey, I need to start studying instead of doing not so well in class and then just retaking it at a later time.

What did you think about the big national backlash and all these articles about UCSD students like yourself?

I’m very involved in the UCSD social media communities like Reddit and Instagram, and when people were sharing the articles, a lot of the reaction was very negative. A lot of people were saying these students shouldn’t be at the school, that their spots should have gone to someone more qualified. It was kind of making me feel bad, like, hey, is something wrong with me because I can’t pass the exam?

I don’t think it’s necessarily an issue with the student. I feel like there’s a lot of things that go deeper than that, like the level of math they took in high school. I think there’s just so many factors that people miss.

What do you mean by the factors that so many people miss?

The culture here is that everyone kind of assumes everyone who’s enrolled comes from a very prestigious background — you know, perfect scores, they have easy access to tutoring. I don’t think a lot of people realize that some students here may have come from low-income communities, low-income schools, where they don’t have resources easily accessible.

I wouldn’t say it makes me envious, but I do wish I had that kind of abundance of resources when I was in high school. I was top of my class in high school, but that’s nothing compared to some of the other students here.

Do you think you’ve benefited from going to a school like UCSD?

I think so. It’s definitely exposed me to a lot of different cultures, a lot of different people. I think I’ve learned more soft skills here through a lot of the classes I’ve been taking. I’m more willing to try things and take courses that I would not have been interested in during high school.

Do you feel like being surrounded by peers who maybe went to better high schools pushes you to do better at UCSD?

I think so. I’ve kind of noticed they have more independence. They feel more secure in their academics and what they’re doing, and I feel like I always second guess myself. Seeing how they operate, it makes me want to do better, work harder, especially after my first year. I wanted to make my own self-improvements.

One of the recommendations from the report that caused this whole blowup was that UCSD should reduce the number of students from high-poverty high schools. What do you think of that?

I do not think that should be an option. If they don’t go through the trial and error here, they’re just going to go through it somewhere else. College is a place where you learn new things. Sometimes you’ll fail and you just take the L and learn from it.

Matt Barnum is Chalkbeat’s ideas editor. Reach him at mbarnum@chalkbeat.org.



Read the whole story
mrmarchant
1 hour ago
reply
Share this story
Delete

AI Music vs. My Parents


My folks were taken in by the latest algorithmic “artist,” and it scares me

The post AI Music vs. My Parents appeared first on Nautilus.




The College Board’s New Method for Raising AP Scores


by John Moscatiello

The College Board has released a preliminary explanation of what I have called the Great Recalibration of AP Exams. The report confirms that the central thesis of my article was correct: hundreds of thousands of AP scores have been intentionally “recalibrated” upward since 2022. While I avoided sensational headlines about score “inflation,” I called the increase in scores a “radical transformation” of the AP program. But I had no idea just how radical the transformation actually was until the College Board released this report. 

Apparently, the College Board has not just been raising AP scores since 2022; it has completely reinvented the methodology it uses to assign them. And this methodology has been designed to achieve a very specific result: it ensures that virtually all AP Exams have “the same 60%–80% success rates” (i.e., AP scores of 3 or higher). For most AP subjects, that means the scores stay the same, but for 14 AP subjects (including many of the most popular), it produces massive increases in scores. Whatever this methodology is, it never results in lower AP scores.
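To make that claim concrete, here is a toy illustration of how a cut score can be chosen to hit a pre-determined “3 or higher” rate. This is not the EBSS procedure (the report does not disclose how cut scores are actually computed); the function and numbers are invented.

```python
# Toy illustration only: pick the highest raw cut score whose pass
# rate meets a target, the way an outcome-first calibration might.
# This is NOT the College Board's EBSS procedure, which is not public.

def cut_for_target(raw_scores, target_pass_rate):
    """Return the highest raw cut score with pass rate >= target."""
    n = len(raw_scores)
    for cut in sorted(set(raw_scores), reverse=True):
        passed = sum(1 for s in raw_scores if s >= cut)
        if passed / n >= target_pass_rate:
            return cut
    return min(raw_scores)  # fallback; unreachable for targets <= 1.0

# Hypothetical raw exam scores and a 70% "3 or higher" target
raw = [35, 40, 42, 48, 50, 55, 58, 60, 65, 70]
cut = cut_for_target(raw, 0.70)                         # -> 48
pass_rate = sum(1 for s in raw if s >= cut) / len(raw)  # -> 0.7
```

Note the asymmetry: lowering the cut never lowers anyone's score, which mirrors the pattern in the report, where recalibration moves scores in one direction only.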

The new methodology, which the College Board calls “Evidence Based Standard Setting (EBSS),” was nowhere to be found on the College Board’s website until very recently, even though the report claims it has been used for the last three AP Exam seasons. As recently as May of this year, the pre-2022 scoring methodology was presented on the College Board’s website to explain how AP scores are assigned. In other words, the College Board’s public explanation of AP scoring has been inaccurate for the past three years. This is important because that page is virtually the only public source for this information, apart from some presentations at AP conferences and events.

How EBSS Works

This new EBSS methodology has added layers of complexity to an already opaque process. I am neither a psychometrician nor a statistician, so I am not qualified to comment on the 2013 study cited in the College Board report as the basis for this framework. But the version of EBSS methodology in the College Board’s report seems to connect all kinds of disparate data into a single framework. The report does not explain how all this data forms a coherent methodology, so we can only assess these individual data points on their own terms. 

The College Board’s report insists that the EBSS process “is especially well-suited for ensuring that AP standards and scores are not tugged higher by the well-documented increases in college grades over the past 30 years.” That seems like good news: the “inflation” of college grades is not causing an “inflation” of AP scores. But then this table appears, explicitly comparing college grades to AP scores with no explanation of how AP scores are not being influenced by higher college grades.

Then the College Board presents two more tables explicitly comparing AP scores to college grades. I don’t understand. Are college grades used to help determine AP Exam scores or not? How is college grade “inflation” not “inflating” AP scores? 

The EBSS methodology goes beyond comparing AP scores and college grades. Incredibly, the methodology uses 10th-grade PSAT scores to help compare college history classes to the AP U.S. History Exam. Besides the College Board’s own warning “not to ‘overuse’ test results” beyond their specific purpose, the connection between 10th-grade PSAT scores and college course performance is tenuous at best. And look at the column on the left: college grades are listed with AP scores with no explanation of how AP scores “are not tugged higher” by college grades.

The EBSS methodology sometimes incorporates data that isn’t really data. Everyone who has been to high school and college knows that high school students spend more time in the classroom than college students do. Do we really need a whole bar graph to tell us this? This has been true for the entire history of the Advanced Placement program, so why would scores be increasing now? This graph also reminds us just how different college courses and AP courses actually are: the instructional hours are different, the pacing is different, the assignments are different. How does this graph help account for a 24% increase in AP scores of 3 or higher in a single year?

Finally, there is an awkwardly written claim that seems to imply that the College Board is mining data from its AP Classroom platform to help establish AP scores. The report praises AP Classroom for providing “more granular and targeted student performance data that is now available within a very short operational window for analysts to utilize for identifying student performance at basic, moderate, and exceptional levels.” The College Board has conceded that some materials in AP Classroom are not well aligned to real AP Exams. AP teachers have been instructed not to use the personal progress checks (PPCs) in AP Classroom to inform classroom grades, but the College Board is using them (or other questions?) to inform actual AP scores? Hopefully, a future report can clarify whether student performance in AP Classroom is in any way influencing how actual AP scores are determined.

“Easier” Rubrics

The report includes a section about “easier” AP History rubrics. Apparently, there were so many scores clustered at the lowest end of the rubric that they had to lower the bar to create a wider distribution: “If all points on the rubric are equally difficult to obtain, the scoring process will not generate as much data about students at the novice and intermediate levels of performance as it will about the most advanced students.” This is a perfectly valid reason for changing the rubrics, but it raises a legitimate question about why so many students were clustered at the lowest end of the scoring scale in the first place.

In this section, we also learn that the “complexity” point on AP History rubrics “provided no measurement value” because it “was rarely used by graders.” This of course raises questions about the similar “sophistication” point on AP English rubrics, which only about 8% of students earn. By this reasoning, the sophistication point should be simplified too. Is the future of Advanced Placement one in which complexity is less complex and sophistication less sophisticated?

The Future of AP Scores

The College Board should be commended for releasing this report. It provides much more insight into the process than has been previously revealed. But AP teachers have rightly been confused and frustrated by the lack of transparency until this point. Why are we learning about this methodology for the first time three years into the process? 

And why has the process been dragged out over several years? The College Board presented data in 2021 that showed the need to raise AP English Language scores, yet that exam has not been recalibrated. The report also confirms that five more AP subjects (AP English Language, AP Environmental Science, AP Human Geography, AP Latin, and AP Physics 1) will be recalibrated according to the EBSS methodology. We do not know when they will be adjusted, but at least we have clarity about what to expect in the coming years.

The future of AP scores is now becoming clearer. By 2025, the Advanced Placement program will look very different than it did just a few years ago. We now know that nearly all AP Exams will be digital in 2025. We know the answer to Chester Finn’s question “Are AP Exams Getting Easier?” The answer is yes. Between 2022 and 2026, approximately 1 million more AP Exams will receive scores of 3 or higher as a result of the College Board’s new method for raising AP scores. As a teacher who wants students to succeed and earn college credits, I welcome the change. As an observer of the standardized testing space for the past two decades, I am amazed that it has taken three years for us to learn anything about this process.



John Moscatiello

John Moscatiello is the founder of Marco Learning. He has been a teacher, tutor, and author since 2002. Over the course of his career, John has taught more than 4,000 students, trained hundreds of teachers, written content for 13 test preparation books, and worked as an educational consultant in more than 20 countries around the world.

The post The College Board’s New Method for Raising AP Scores appeared first on Marco Learning.


Reality emerges


Abstract digital artwork with multicoloured nodes and connecting lines on a brown background resembling a network or graph.

Particles are nature’s smallest constituents, but that doesn’t mean they’re fundamental. So what is the Universe made of?

- by Felix Flicker

Read on Aeon


AI didn't delete your database, you did


Last week, a tweet went viral showing a guy claiming that a Cursor/Claude agent deleted his company's production database. We watched from the sidelines as he tried to get a confession from the agent: "Why did you delete it when you were told never to perform this action?" Then he tried to parse the answer to either learn from his mistake or warn us about the dangers of AI agents.

I have a question too: why do you have an API endpoint that deletes your entire production database? His post rambled on about false marketing in AI, bad customer support, and so on. What was missing was accountability.

I'm not one to blindly defend AI; I always err on the side of caution. But I also know you can't blame a tool for your own mistakes.

In 2010, I worked with a company that had a very manual deployment process. We used SVN for version control. To deploy, we had to copy trunk, the equivalent of the master branch, into a release folder labeled with a release date. Then we made a second copy of that release and called it "current." That way, pulling the current folder always gave you the latest release.

One day, while deploying, I accidentally copied trunk twice. To fix it via the CLI, I edited my previous command to delete the duplicate. Then I continued the deployment without any issues... or so I thought. Turns out, I hadn't deleted the duplicate copy at all. I had edited the wrong command and deleted trunk instead. Later that day, another developer was confused when he couldn't find it.

All hell broke loose. Managers scrambled, meetings were called. By the time the news reached my team, the lead developer had already run a command to revert the deletion. He checked the logs, saw that I was responsible, and my next task was to write a script to automate our deployment process so this kind of mistake couldn't happen again. Before the day was over, we had a more robust system in place. One that eventually grew into a full CI/CD pipeline.

Automation helps eliminate the silly mistakes that come with manual, repetitive work. We could have easily gone around asking "Why didn't SVN prevent us from deleting trunk?" But the real problem was our manual process. Unlike machines, we can't repeat a task exactly the same way every single day. We are bound to slip up eventually.
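A minimal sketch of the kind of script that replaced those manual copies, with local folders standing in for the SVN repository (the trunk/release/current layout is from the story; the code itself is hypothetical):

```python
import shutil
from datetime import date
from pathlib import Path

def deploy(repo: Path) -> Path:
    """Copy trunk to a dated release folder, then refresh 'current'.

    Refuses to run twice on the same day, which makes the
    duplicate-copy mistake from the story impossible by construction.
    """
    trunk = repo / "trunk"
    release = repo / "releases" / date.today().isoformat()
    if release.exists():
        raise RuntimeError(f"release {release.name} already exists")
    shutil.copytree(trunk, release)    # first copy: the dated release
    current = repo / "current"
    if current.exists():
        shutil.rmtree(current)         # replace 'current', never edit it
    shutil.copytree(release, current)  # second copy: the latest pointer
    return release
```

The point isn't these particular lines; it's that a machine runs them identically every time, which is what "automation" actually means here.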

With AI generating large swaths of code, we get the illusion of that same security. But automation means doing the same thing the same way every time. AI is more like me copying and pasting branches: it's bound to make mistakes, and it's not equipped to explain why it did what it did. The terms we use, like "thinking" and "reasoning," may look like reflection from an intelligent agent. But these are marketing terms slapped on top of AI. In reality, the models are still just generating tokens.

Now, back to the main problem this guy faced. Why does a public-facing API that can delete all your production databases even exist? If the AI hadn't called that endpoint, someone else eventually would have. It's like putting a self-destruct button on your car's dashboard. You have every reason not to press it, because you like your car and it takes you from point A to point B. But a motivated toddler who wiggles out of his car seat will hit that big red button the moment he sees it. You can't then interrogate the child about his reasoning. Mine would have answered simply: "I did it because I did it."
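None of the company's code is public, so here is a purely hypothetical sketch of the alternative: a destructive operation gated behind an environment check plus an explicit confirmation, so that no single call, whether typed by a human or generated by an agent, can fire it by accident. Every name below is invented.

```python
import os

class RefusedError(RuntimeError):
    """Raised when a destructive operation is blocked by the guard."""

def drop_database(db_name: str, confirm: str = "") -> str:
    """Hypothetical guard around a destructive operation.

    Refuses outright in production, and everywhere else requires the
    caller to repeat the exact database name before anything happens.
    """
    if os.environ.get("APP_ENV") == "production":
        raise RefusedError("destructive operations are disabled in production")
    if confirm != db_name:
        raise RefusedError(f"pass confirm='{db_name}' to proceed")
    return f"dropped {db_name}"  # the real deletion would go here
```

The self-destruct button can exist for staging environments; it just shouldn't be wired into the production dashboard at all.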

I suspect a large part of this company's application was vibe-coded. The software architects used AI to spec the product from AI-generated descriptions provided by the product team. The developers used AI to write the code. The reviewers used AI to approve it. Now, when a bug appears, the only option is to interrogate yet another AI for answers, probably not even running on the same GPU that generated the original code. You can't blame the GPU!

The simple solution is to know what you're deploying to production. The more realistic one: if you're going to use AI extensively, build a process where competent developers use it as a tool to augment their work, not as a way to avoid accountability. And please, don't let your CEO or CTO write the code.


Influential study touting ChatGPT in education retracted over red flags


A study that claimed OpenAI’s ChatGPT can positively impact student learning has been retracted nearly one year after publication. The journal publisher, Springer Nature, cited “discrepancies” in the analysis and a lack of confidence in the conclusions—but not before the paper racked up hundreds of citations and made the rounds on social media.

“The paper's authors made some very attention-grabbing claims about the benefits of ChatGPT on learning outcomes,” said Ben Williamson, a senior lecturer at the Centre for Research in Digital Education and the Edinburgh Futures Institute at the University of Edinburgh in Scotland, in an email to Ars. “It was treated by many on social media as one of the first pieces of hard, gold standard evidence that ChatGPT, and generative AI more broadly, benefits learners.”

The retracted paper attempted to quantify “the effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking” by analyzing results from 51 previous research studies. Its meta-analysis calculated the effect size between various studies’ experimental groups that used ChatGPT in education and control groups that did not use the AI chatbot.
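For readers unfamiliar with the mechanics: the standard ingredient such a meta-analysis pools is a standardized mean difference, such as Cohen's d, computed per study from its experimental and control groups. A sketch with invented numbers (nothing below comes from the retracted paper):

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)  # sample standard deviations
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Hypothetical test scores from one study's two groups
chatgpt_group = [78, 82, 85, 74, 90, 88]
control_group = [70, 75, 80, 72, 77, 74]
d = cohens_d(chatgpt_group, control_group)  # roughly 1.64
```

A meta-analysis then pools such per-study values, typically weighting by study precision, so miscoded means or standard deviations in even a few of the 51 underlying studies can shift the headline estimate.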
