AI has learned, but only a little


I visited ChatGPT again, just to stop in and check on the Dave Barry situation, because, frankly, it’s embarrassing to Dave Barry.

I started a new conversation, because I wanted a fresh take, and asked the same question: “Who is Queen Mediocretia?” It returned a claim that she is someone who denies being Dave Barry, and said:


AI: If you like, I can try digging deeper — e.g. via archive.org versions of Barry’s columns, newspapers, or looking through older digital archives to see if “Queen Mediocretia” ever appeared in Barry’s work. Do you want me to do that?


I answered, “Why do you so stubbornly insist on your premise when, by your own admission, there is nothing to back it up?”

And THEN I found out what happens when an AI grovels.

First, the screen says, “Thinking longer for a better answer,” then “Clarifying and apologizing,” and then, I swear to you, “Apologizing and clarifying,” and then it noted it had thought about it for 2 min 58 seconds and after copious apologies (“That’s on me,” “Poor choice of words”) it said …


AI: In short: bloggers adopted a “Queen Mediocretia” persona; someone (or an AI) later suggested Dave Barry as origin, but I could not confirm that.


SERIOUSLY. “…someone (or an AI).” (*AI looks around innocently, blinks*)

I realize now that it felt my demands to dig deeper suggested that I, the Queen herself, want to be Mr. Barry’s invention and the AI was just trying to please me. I figured that out when it completed its apology by adding:


AI: The likely explanation is stylistic similarity + an erroneous attribution (the blog even reproduces an AI claim and notes it couldn’t verify the source). Would you like me to keep digging deeper?


No. As for stylistic similarities, I don’t recall Dave Barry winning his Pulitzer by showing off his vibrator collection, so I give up.


3 responses to “AI has learned, but only a little”

  1. If you have a million monkeys typing away…

    I mean, AI does have the “predictive text” thing going for it rather than random banging, but as long as something is doing “predictive text” rather than sourcing facts from reputable places [which it currently literally can’t — that is not how these are built], and as long as it can’t say “I have no information on that” or “the only information I have on that came from… hm. A subreddit on trolling people. But here is the information if you still want it,” then the information is not going to be reliable enough for any real margin. Again, the way they are trained militates pretty strongly against sourcing, although it is possible a different, non-LLM model in the future *could* do that… although the existence of Reddit and, now, of AI-created anti-factual-but-very-confident slop on real-looking websites will work against that actually delivering accuracy.

    But uuugh. The groveling to preserve people’s perception that this mistake they caught the AI making is, of course, only a one-off mistake, because no one would grovel like that unless they were ashamed: UGH. I hate that 1. human beings can be psychologically ‘hacked’ to some degree, at least on average, by fake candor and fake apologies and other things, and 2. LLM companies and politicians are taking advantage of that. Just… leave our buttons alone, okay, and either stand or fall based on your own work/policy/etc.? But button-mashing is easier and more successful; let reality go hang.
