
Horseshoe Crab Diary


For The Paris Review, Grace Byron recounts how her obsession with horseshoe crabs began with a visit to the Jamaica Bay Wildlife Refuge to spot birds. Her interest veers from curiosity into conservation when she and a couple of friends volunteer for a horseshoe crab monitoring session with the NYC Bird Alliance. There, they help log and tag horseshoe crabs emerging from the water at high tide to mate on Plumb Beach.

Agnes and Ashe waited behind, picking up trash as Ann gave us the outline of our night. She kept checking her watch, waiting for the precise moment of high tide: 7:29 P.M. Using two white pipes connected into a square, we prepared to survey the number of horseshoe crabs present that night at random. The pace at which we had to stop and check for critters was a bizarre math problem that I couldn’t quite follow. Meanwhile, two volunteers carried clipboards to note the number of horseshoe crabs in the sample field. I looked around. So far there weren’t so many. As requested, we looked around and grabbed the numbers of those already tagged.

Read the whole story
mrmarchant
2 hours ago

Is math discovered or invented?

Math is invented by fauns, nymphs, and other minor deities. They leave it lying around, partially hidden, for humans to "discover," in the same way that one might leave Easter eggs in the yard for a small and gullible child.
Both. Pure math is invented nonsense, passed off as if it were authored by nature. Thus, it is best described as FORGERY or FABRICATION. Meanwhile, applied math is the human attempt to take credit for discovering nature's work. Thus, it is best described as PLAGIARISM or INTELLECTUAL THEFT.
Neither. Math is excreted, and later congeals.
Asking whether math is discovered or invented is like asking the same of babies. If that's how you're phrasing the question, then you're pretty clearly not ready for the answer.
Read the whole story
mrmarchant
16 hours ago

Busy Beaver Hunters Reach Numbers That Overwhelm Ordinary Math


Imagine that someone gives you a list of five numbers: 1, 6, 21, 107 and — wait for it — 47,176,870. Can you guess what comes next? If you’re stumped, you’re not alone. These are the first five busy beaver numbers. They form a sequence that’s intimately tied to one of the most notoriously difficult questions in theoretical computer science. Determining the values of busy beaver numbers is a…




Read the whole story
mrmarchant
1 day ago

Oops! I created a viral international manhunt to find a boyfriend


Oops! I did it again. I went viral for looking for love, and now millions of people know I have zero bitches, and CNN contacted me about it. Only this time, the whole gang is in it together.

It all started when my friends were lamenting not having girlfriends, and I jokingly offered to find them some via flyer (I am notorious for flyering for my schemes, e.g. for Sit Club, an anti-run club, and Strippers for Charity, a charity gala). But my friends were actually interested, and so, dear reader, we embarked on our quest to find love.

subscribe to 𝓇𝒶𝓌 & 𝒻𝑒𝓇𝒶𝓁 to support the arts

I took inspiration from the “personals” section of vintage newspapers and made a website sfpersonals.com.

examples of the “personals” section of old newspapers

Then I gathered my most eligible single friends at my office hours ($1 margarita night), pitched the idea, and interviewed them. (You may already be familiar with my investigative journalism if you went to Tam High in 2017, where I was the newspaper’s Editor-in-Chief. Go Hawks!)

I wrote about each friend like a character you could easily imagine in your mind: colorful, extremely specific, and low-key roasting them.

my website!

I also described their ideal partner as a character you can imagine in your head, figuring this would make readers who resonate even more interested, and maybe make readers think of a friend who had those qualities, so they’d forward it along. Each bachelor/bachelorette also had a key question they asked their admirers, to help them vet contenders.

I figured that to reach the right people, this would have to go viral, so it’d have to be funny and entertaining to everyone, not just people looking for a relationship.

I added a crossword, because newspapers have crosswords. Also, if you don’t like hot singles, then you probably like crosswords, and that’s how I’d capture the whole market.

you either like hot singles, or crosswords, or both!

I included some extremely unhelpful FAQs, based not on actual frequently asked questions, but on questions I preemptively thought would be funny to answer.

questions, though not necessarily those of the “frequently asked” kind

And yes, I did put myself on the site, because I am a diligent Product Mommy who dogfoods her product (and I may or may not have ulterior motives).

I also made my friends promise that if I find their future spouses, I get to officiate their wedding. You see, three years ago I became a licensed Reverend because it’s fun and easy to do, and every year since, I’ve received an email congratulating me on another year of being a Reverend, unknowingly relentlessly taunting me for still having yet to officiate a wedding. This weighs on me greatly.

they taunt me!

Here’s the thing: dating apps are counter-incentivized: they earn money by keeping you single. My incentives, however, are aligned: I want to find my friends partners so that I get to officiate their weddings. My Certificate of Ministry is burning a hole in my pocket.

Finally, to advertise my sexy singles, I put flyers up around San Francisco.

dating flyers!

Within 24 hours, a dating coach posted a picture on Twitter, and it blew up. Then it went viral on Substack, Reddit’s r/sanfrancisco, on the literal front page of Reddit itself, Reels, Facebook, Threads, even Pinterest???

sfpersonals.com traversing across the internet

It amassed millions of views, hundreds of thousands of likes, and 600+ people emailed in to date my friends.

The emails were extremely funny.

  • Thousands of people thought Anna and Will should date. The comments section of every post was filled with it, and people wrote long, elaborate theories on why they weren’t already dating, devising fan-fiction basically. Dozens of people emailed just to say they should date, and someone even wrote a multi-paragraph essay about it. Because of this, Anna and Will went on a date. I heard it went alright.

  • Cougars loved Alex and he discovered that the feelings were mutual. And so he increased his age range.

  • People fucked heavy with the crossword. They would email me just to ask if they completed the crossword correctly.

  • Anna has two exes and they’re both Chinese and named Kevin. I told her, “Anna, you just haven’t met the right Chinese Kevin yet.” Then a man wrote in who met exactly what Anna was asking for, and was Chinese and named Kevin. Her soulmate, I think.

  • Someone sent the most condescending but well-meaning email titled: “Anna is autistic and that’s wonderful.” Also, someone armchair diagnosed Will with Marfan’s Syndrome. None of you people are doctors, what are you even talking about.

  • A woman wrote that she was happily partnered; she did, however, want to share a multi-page self-insert story of her rescuing Mehran from prison.

  • Four separate men answered my question (“how will you support my projects like Strippers for Charity”) by saying they’d MC the event?? Like some guys will really read “how will you support my hard work” and reply, “well I’ll take over as the star of the show.”

  • Multiple people made LinkedIn accounts just to apply to date us (I should really be getting a commission here).

  • A group of high schoolers saw me in the wild putting up posters and asked to take a photo with me (I guess this is how celebrities feel??)

  • Two chaotic bisexual queens offered to date any of us.

The scheme went viral internationally, gaining traction in some really random places. Why were people in Taiwan reading this. Who are these people.

some stats for you data nerds

Only ~5k of the people who visited the site (3.6%) actually lived in San Francisco. No, the “SF” in “sfpersonals” does not stand for “single friends,” as one commenter thought.

And The Media once again slid into my DMs.

hehehehe

I was also invited to present this project at Demos & Chill! And my friends went on dates with their admirers, but those are not my stories to share.

So. Ultimately, did this work?

No.

But neither do most relationships! Also, it’s important to remember that 50% of marriages end in divorce anyway. Maybe the real relationship was the clout we gained along the way.

To be serious though, it’s only been two months since I launched this and many prospects are still in the pipeline. Plus it keeps randomly blowing up, so it’s possible the right people just haven’t seen it yet. If a marriage does come from this, I’ll make sure my dear 𝓇𝒶𝓌 & 𝒻𝑒𝓇𝒶𝓁 readers are the first to know.

I have some more personal reflections about this project, but that’s so cringe, so I’m hiding them behind the paywall like a coward. An entrepreneurial coward. If you wanna be nosy, there’s a price to pay ($6.90).

Thanks for reading my blog, love ya!

subscribe to 𝓇𝒶𝓌 and 𝒻𝑒𝓇𝒶𝓁 to support the arts, and because it’s the right thing to do

P.S. Read about my other dating-related scheme, the time I made a survey for guys who want to date me (as a joke) but after 400+ responses, I felt an obligation to the scientific community to write a research paper. And then Substack featured it in their Weekender.

P.P.S. If you’re a loyal 𝓇𝒶𝓌 & 𝒻𝑒𝓇𝒶𝓁 reader, you may recognize Cool Alex from the SF Personals site, as we met him through our Alextravaganza (Alex-themed party we threw).

Read more

Read the whole story
mrmarchant
2 days ago

A Surprising Find


From Lee Sallows:

[Image: Sallows double alphamagic geomagic square]

(Thanks, Lee!)

Read the whole story
mrmarchant
2 days ago

Big O


Big O notation is a way of describing the performance of a function without measuring it in units of time. Rather than timing a function from start to finish, big O describes how the running time grows as the input size increases. It is used to help understand how programs will perform across a range of inputs.

In this post I'm going to cover 4 frequently-used categories of big O notation: constant, logarithmic, linear, and quadratic. Don't worry if these words mean nothing to you right now. I'm going to talk about them in detail, as well as visualise them, throughout this post.

Before you scroll! This post has been sponsored by the wonderful folks at ittybit, and their API for working with videos, images, and audio. If you need to store, encode, or get intelligence from the media files in your app, check them out!

# Iterating

Consider this JavaScript function that calculates the sum of 1 to n.

function sum(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

I'm going to pass in 1 billion, written using the shorthand 1e9, so it takes a noticeable amount of time to run. Press the button below to measure how long it takes to calculate the sum of 1 to 1 billion.

const result = sum(1e9);

On my laptop this takes just under 1 second. The time you get may be different, and it may vary by a few tens of milliseconds each time you run it. This is expected.

Timing code this way, by taking the start time and the end time and finding the difference, is called wall-clock time.
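That measurement can be sketched in a few lines. `timeIt` below is a hypothetical helper name, not from the post; it just wraps the start/end difference described above.

```javascript
// Repeated from earlier: the O(n) looping version of sum.
function sum(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
}

// Wall-clock timing: record the start, run the function, record the
// end, and report the difference.
function timeIt(fn, n) {
  const start = performance.now();
  const result = fn(n);
  const elapsed = performance.now() - start;
  return { result, elapsed };
}

console.log(timeIt(sum, 1e7)); // e.g. { result: 50000005000000, elapsed: 9.4 }
```

The `elapsed` value will differ between runs and machines, which is exactly the noise described above.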

How long do you think 2 billion, 2e9, will take?

const result = sum(2e9);

It takes about twice as long. The code loops n times and does a single addition each time. If we passed in 3 billion, we would expect the execution time to increase by a factor of three.

The relationship between a function's input and how long it takes to execute is called its time complexity, and big O notation is how we communicate what the time complexity of a function is.

Play around with the buttons below. Each bar adds an extra 1 billion to the n that we pass to sum, so the 1e9 button calls sum(1e9), the 2e9 button calls sum(2e9), and so on. You should see 2e9 take about twice as long as 1e9, and 3e9 take about three times as long as 1e9.

sum(1e9);
sum(2e9);
sum(3e9);
sum(4e9);
sum(5e9);

Because sum's wall-clock time grows at the same rate as n, e.g. sum(20) would take twice as long as sum(10), we say that sum is a "linear" function. It has a big O of n, or O(n).

Why do we use that syntax: O(n)? What is O? Why those brackets?

The "O" stands for "order," short for "order of growth." Said out loud it would be: "the order of growth of the sum function is n." O(n) is a compact way to write that. The notation was created by the German mathematician Paul Bachmann in 1894. Also, the "O" (letter) might look like a 0 (number) in some typefaces. It is always the letter O.

A different way to sum the numbers from 1 to n is to use the formula (n*(n+1))/2. This formula is famously attributed to a young Carl Friedrich Gauss, and it's a clever way for us to avoid having to loop over all of the numbers.

Here's the result of this formula with the numbers from 1 to 5. In each case the result should be the same as doing, e.g. 1+2+3+4+5 for n=5.

  • (1*2)/2 = 2/2 = 1
  • (2*3)/2 = 6/2 = 3
  • (3*4)/2 = 12/2 = 6
  • (4*5)/2 = 20/2 = 10
  • (5*6)/2 = 30/2 = 15

Here's how sum would look if it used that formula instead of the loop we had before:

function sum(n) {
  return (n * (n + 1)) / 2;
}

How do you think the wall-clock time of this function changes as n increases? The next two examples differ by a factor of 100.

sum(1e9);   // 1 billion
sum(100e9); // 100 billion

This example isn't broken. Both of these calls take almost no time at all. The variance in timing is caused by the browser, and the general unpredictability of computers, not by the sum function. Running each example a few times, you should see that the wall-clock time hovers around the same value for both.

We call functions like this, whose wall-clock time is about the same no matter what input you give it, constant or O(1).

Wow, so we improved our sum function from O(n) to O(1)! It always runs instantly now!

We did! Though it is crucial to remember that O(1) doesn't always mean "instant." It means that the time taken doesn't increase with the size of the input. In the case of our new sum function, it's more or less instant. But it's possible for an O(1) algorithm to take several minutes or even hours, depending on what it does.
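To make that concrete, here's a deliberately silly sketch (my own, not from the post) of an O(1) function that is far from instant: it always does the same fixed amount of work, no matter what n is.

```javascript
// O(1), but slow: the loop bound is a constant (10 million), so the
// wall-clock time doesn't grow with n; it's just always noticeable.
function slowConstant(n) {
  let waste = 0;
  for (let i = 0; i < 1e7; i++) waste += i; // fixed work, independent of n
  return n + 1; // the actual answer takes a single step
}

slowConstant(5);   // 6, after the same fixed delay
slowConstant(1e6); // 1000001, after the same fixed delay
```

Doubling n doesn't change the running time at all, which is all O(1) promises.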

It's also possible for an O(n) algorithm to be faster than an O(1) algorithm for some of its inputs. Eventually, though, the O(1) algorithm will outpace the O(n) algorithm as the input size grows. Play with the slider below to see an example of that.

The O(1) line is always at 20 in that graph, so why don't we say O(20) instead?

The purpose of big O notation is to describe the relationship between the input and the wall-clock time of a function. It isn't concerned with what that wall-clock time ends up being, only with how it grows.

Big O always describes growth in the simplest terms possible. A constant function is always O(1), never O(20): it would quickly get messy if the world had to figure out what number to put inside the brackets for every function, and keep those numbers correct relative to each other. Likewise, linear functions are always O(n) and never O(2n) or O(n + 1), because O(n) is the simplest linear term.

# Sorting

Let's move away from sum and talk about a different algorithm: bubble sort.

The idea behind bubble sort is that you loop over the input array and swap elements next to each other if they are not in the desired order. You do this until you're able to complete a loop through the array without making any swaps. It's called bubble sort because of the way numbers "bubble" up to their correct position.

Below is a JavaScript implementation of this algorithm. We're going to sort these numbers in ascending order, so the desired result is 1, 2, 3, 4, 5.

You can step through this code and watch the array get sorted using the controls: step forwards or backwards one line at a time, auto-advance one line every second, or reset back to the start.

function bubbleSort(a) {
  while (true) {
    let swapped = false;
    for (let i = 0; i < a.length - 1; i++) {
      if (a[i] > a[i+1]) {
        [a[i], a[i+1]] = [a[i+1], a[i]];
        swapped = true;
      }
    }
    if (!swapped) break;
  }
  return a;
}

bubbleSort([3, 2, 5, 4, 1]);

Now you've had a look at the algorithm in action, what do you think its big O is?

If the array is already sorted, the algorithm loops through once, does no swaps, and exits. This would be O(n). But if the array is in any other order, we need to loop through more than once. In the worst case, where the array is in reverse order, we would have to loop through n times in order to move the smallest element from the end to the start.

Looping through the n elements of our array n times results in n*n operations, or n^2. That means bubble sort is an O(n^2) algorithm, sometimes called quadratic.
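One way to see the quadratic behaviour directly is to instrument the sort and count comparisons. This counting version is my own sketch, not part of the original post:

```javascript
// Bubble sort, instrumented to count element comparisons.
// Each full pass makes a.length - 1 comparisons, and a reverse-sorted
// array needs a.length passes (including the final no-swap pass).
function bubbleSortCount(a) {
  let comparisons = 0;
  while (true) {
    let swapped = false;
    for (let i = 0; i < a.length - 1; i++) {
      comparisons++;
      if (a[i] > a[i + 1]) {
        [a[i], a[i + 1]] = [a[i + 1], a[i]];
        swapped = true;
      }
    }
    if (!swapped) break;
  }
  return comparisons;
}

bubbleSortCount([1, 2, 3, 4, 5]); // best case: one pass, 4 comparisons
bubbleSortCount([5, 4, 3, 2, 1]); // worst case: five passes, 20 comparisons
```

For 5 elements the worst case costs 20 comparisons; for 50 elements it would cost 2450, roughly 100 times more for a 10 times bigger input.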

Because it's common for an algorithm's performance to depend not just on the size of the input, but also its arrangement, big O notation always describes the worst-case scenario.

You may sometimes see the Greek letter "theta", Θ, instead of the letter O in big O notation. This is used when the best and worst case have the same order of growth. We can't use it for bubble sort because the best case is O(n) and the worst case is O(n^2).

Below is another way to visualise bubble sort. It starts off with the array 5, 4, 3, 2, 1 from top to bottom. The end state should be 1, 2, 3, 4, 5 top to bottom. Each step forward represents a full loop through the array, potentially involving multiple swaps. Use Best, Worst, and Rand to see different initial configurations.

If you played around with Rand a few times you'll have noticed that you do sometimes get only a couple of iterations. Despite this, bubble sort is still O(n^2) because in the worst case you'll have to iterate over the array n times.

# Searching

The last algorithm I want to talk about is binary search.

Let's start with a game. Think of a number between 1 and 100. I'm going to try and guess what it is. Use the buttons below to tell me if your number is higher or lower than my guess.

What I'm doing is starting in the middle of the range, 50, and eliminating half of the possibilities with each guess. This is a binary search.

Using this method it will never take more than 7 guesses to find your number: we start with 100 possibilities, each guess rules out half of what remains, and seven halvings are enough to pin down a single number (2^7 = 128 > 100).

The table below shows the guessing pattern for all numbers between 1 and 100, use the slider to choose a number.

When it's possible to eliminate a fraction of possibilities with every step of an algorithm, we call it logarithmic. That means that binary search is an O(log n) algorithm.

Below is a graph of the number of guesses I would need in order to figure out your number for all of the numbers between 1 and 1000.

Every time your number doubles, I need 1 extra guess to find it. If you were to pick a number between 1 and 1 billion, I would need at most 30 guesses to find it (2^30 is just over 1 billion). Logarithmic growth is really slow! Below is a graph comparing it to O(n) and O(n^2), which both grow much faster.

# Putting this knowledge to work

In the previous sections of this post I've described the difference between O(1), O(log n), O(n), and O(n^2) algorithms. Let's have a look at some situations you might encounter while writing code and what you can do to improve your time complexity.

# Finding an item in a list

Here's a function that checks if a list contains a specific item.

function contains(items, target) {
  for (const item of items) {
    if (item === target) {
      return true;
    }
  }
  return false;
}

If you're looking up items in the same list lots of times, you might want to consider using a data structure that allows for faster lookups, such as a Set. Modern browsers implement Set in a way that gives O(1) time complexity for lookups.

However, don't do this:

function contains(items, target) {
  const itemSet = new Set(items);
  return itemSet.has(target);
}

Building the new Set(items) is an O(n) operation! This is because the Set constructor loops over all items in the array to add them to the set. You need to weigh the cost of this upfront work against the potential savings from faster lookups.

const items = new Set(["apple", "banana", "cherry"]);
items.has("banana") // true, and O(1)!

# Loop an array with indexes

The code below contains a common mistake I've seen dozens of times in production code. Have a read and see if you can answer:

  • What is the big O of this function?
  • How could we improve it?
function buildList(items) {
  const output = [];
  for (const item of items) {
    const index = items.indexOf(item);
    output.push(`Item ${index + 1}: ${item}`);
  }
  return output.join("\n");
}

The problem is calling .indexOf inside the loop. The .indexOf function is an O(n) operation! It works by looping over the array until it finds the target element, returning -1 if it doesn't. Calling it inside the loop makes the overall big O of our buildList function O(n^2)!

To fix this we can loop using an index. Looking up an element in an array by its index is O(1), so the overall big O of the function is reduced to O(n).

function buildList(items) {
  const output = [];
  for (let i = 0; i < items.length; i++) {
    output.push(`Item ${i + 1}: ${items[i]}`);
  }
  return output.join("\n");
}

You can also achieve the same result with JavaScript's .forEach((item, index) => {}) method on arrays, or the .entries() iterator on arrays.
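For completeness, here is the same function using the array's built-in .entries() iterator, which yields [index, item] pairs with no O(n) searching:

```javascript
// buildList, rewritten with .entries(): still O(n) overall, since the
// iterator hands us each index directly instead of searching for it.
function buildList(items) {
  const output = [];
  for (const [index, item] of items.entries()) {
    output.push(`Item ${index + 1}: ${item}`);
  }
  return output.join("\n");
}

buildList(["apple", "banana"]); // "Item 1: apple\nItem 2: banana"
```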

# Caching intermediate results

Consider this function to calculate the factorial of a number. A factorial in mathematics is written as, e.g., 5! to represent 5*4*3*2*1 or 3! to represent 3*2*1.

function factorial(n) {
  if (n === 0) {
    return 1;
  }
  return n * factorial(n - 1);
}

This function has a time complexity of O(n), but most calls to this function are going to redo work we've done before. Calling factorial(4) and then factorial(5) will mean factorial(4) is calculated twice.

We can cache the result of each calculation to avoid this redundant work.

const cache = new Map();
function factorial(n) {
  if (cache.has(n)) {
    return cache.get(n);
  }
  if (n === 0) {
    return 1;
  }
  const result = n * factorial(n - 1);
  cache.set(n, result);
  return result;
}

This makes use of the O(1) time complexity for lookups in a Map. It doesn't change the worst case time complexity of the factorial function, but it does make the average case faster at the cost of increased memory usage.
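Counting the recursive calls makes the saving visible. The `calls` counter below is instrumentation I've added for illustration; it isn't part of the algorithm:

```javascript
let calls = 0; // instrumentation only
const cache = new Map();
function factorial(n) {
  calls++;
  if (cache.has(n)) return cache.get(n);
  if (n === 0) return 1;
  const result = n * factorial(n - 1);
  cache.set(n, result);
  return result;
}

factorial(5); // cold cache: recurses through 5, 4, 3, 2, 1, 0 (six calls)
factorial(6); // warm cache: one new frame plus one cache hit (two calls)
```

Without the cache, the second call would have redone all six frames; with it, each factorial is computed at most once across the program's lifetime.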

When it comes to performance, remember that you can't ever take what you read online at face value. Always test your code before and after changes to make sure you're actually improving it!

# Conclusion

Let's recap what we've learned:

  • Big O notation describes the relationship between a function's input and its wall-clock time.
  • From slowest growth to fastest growth we saw examples of:
    • O(1), constant time (best!)
    • O(log n), logarithmic time
    • O(n), linear time
    • O(n^2), quadratic time
  • We can improve the time complexity of the code we write by making better algorithmic choices and avoiding common pitfalls.

These posts take me a long time to write, and they wouldn't be possible without the support of my family, friends, sponsors, and reviewers. I'm so grateful to all of you (you know who you are) for making it possible for me to make these.

If you enjoyed this post, I've written a bunch more like it that you can find on my homepage. I also love hearing from folks who have questions or feedback; you can reach me via email, on Bluesky, or anonymously via my silly little ping service.

I'll leave you with one last graph comparing all of the time complexities we covered in this post.

Read the whole story
mrmarchant
3 days ago