Or maybe the ghost in the machine is messing with me


So I went once more into the breach with AI. I once again made it prove to me that Dave Barry created … well, this very blog persona. It caved pretty fast this time.

The next day I was curious to see if the knowledge stuck overnight. It did not say this blog character was a creation of Dave Barry.

Instead, it said the “Queen Mediocretia” persona was used by bloggers [plural!] in the early 2000s.

“Name two,” I said.

It came back with “Queen Mediocretia is authored by someone going by ‘Queen Mum’.”

YES. It says that MOM wrote this blog, Mom, DEAD Mom, WROTE THIS BLOG, continuing even after she passed through the veil of death.

I said Queen Mum and Queen Mediocretia are two different but related bloggers.

It patiently explained that, yes, Queen Mediocretia is the character and Queen Mum is the author, the “person typing.” SEE THIS NONSENSE BELOW.

Of course, I set it right, and I clarified we were mother and daughter, and then the bastard demon AI said something about how that made sense, and in hindsight, it could hear the affection in our interactions.

And yes, that made me cry, and what’s worse I really want to end this post with “And I can’t see the screen” because that is directly plagiarized from Dave Barry’s Pulitzer-winning column.


5 responses to “Or maybe the ghost in the machine is messing with me”

  1. I mean, in some genetic senses you’re a production of your mother’s, but… yeah, no.

AI is just as smart here as it is about any topic that isn’t repeated with extreme accuracy thousands of times within its training corpus (which, recall, includes *reddit*, so accuracy is, uh, somewhat of a lost cause). This is a real problem for how-to guides: for computers it will make up plausible-but-non-existent menu items for you to click as you go through the steps to fix X or Y, or will make up where components or access screws are; for plumbing it doesn’t know which things are hooked up to what or [and this is KEY] which things are *pressurized* pipes; for medicine it frequently gets flipped between [thing] and [not thing]; and it’s always set to be as pleasing and convincing to the user as possible [within the constraints of being a stochastic parrot]. If you have an isolated, individual model, you can give it new training data or make it respond to you more like you want it to, but otherwise, the context window within a session is… short.

    But also just… “can see the affection” but couldn’t see that *these are two separate people*? Sigh. Yes, the affection is there, but if you had both loathed each other with violent hatred and if that had been obvious within the text, it would still have said that it could see affection because that is the most common thing for a reviewer or forum poster to say about a mother/daughter text-based product that they’d been wrong about the identities for. “Oh, I see it now!” and yet. The computer does not see. It does not know that the menu item does not exist; it does not know what is hooked up to what within a toilet; it does not know what implications are within your blog posts; and it’s most likely going to continue to misattribute anything funny and mildly snarky on the internet to Dave Barry.

    But it’s true; you can see the affection between the lines. (I have not read the whole back-catalogue, but enough of it to know that at least!) Sigh.

  2. It is such a good word! And yeah, that’s pretty much how it is pronounced. AI can get things right if they are in its training data *a lot* and if those things are correct in all the places they’re in its training data. I think additional checks are being added to at least one of the LLM AI systems, since it’s now citing things that *exist* instead of things that seem plausible to exist but don’t… but the citations are still factually wrong, like “In the “11/13/2025 Weekly Paint Progress” post from Queen Mediocretia, she makes fun of both paint and the concept of progress with her usual dry, quick-witted turns of phrase.” or similar. Is it citing something that exists now instead of citing something that doesn’t exist? Yes. Is it plausible on the surface? Yes. Is it accurate? Only by extremely random chance (and that extremely random chance did not occur in any of the citations of the AI paper spouse was trying to grade yesterday). But yes, stochastic parrot is the general LLM method, and it’s a useful phrase!

    I guess you did not know the term “stochastic terrorism” then? Horrifying thing, great term for something otherwise long-winded to explain….

