ChatGPT


I’ve heard about ChatGPT, the artificial-intelligence bot that ate up the internet until September 2021. You can ask it questions and it will respond in kind. Ask intimate questions, get intimate responses. Ask medical questions, get medical responses.

I have steered clear of ChatGPT, because it’s an unknown, and I keep my iPad far away from unknowns. Your website’s security certificate is unknown? Goodbye.

I was excited to sign up for a seminar on it at work, which was almost shut down when TeddyJ had the same reaction. “TeddyJ has not sufficiently investigated this technology.” Can’t even run a YouTube video showing it.

Of course, I had to ask it about the blood test results. Very inconclusive. “Low levels of creatinine and protein in your blood can indicate a decrease in kidney function or kidney damage. However, the absence of protein in your urine suggests that your kidneys are not leaking protein, which is a positive sign.”

After a few more questions it blamed the statin I’ve been taking to lower my cholesterol. That seems like a clear parallel to me.

Everything was always bookended with “But of course, always consult with a physician,” which is not the type of thing you would expect from Skynet.


10 responses to “ChatGPT”

  1. Don’t trust that sucker for factual information AT ALL, esp. if you have to press to get a more detailed answer. Remember, it 1. is just a reshuffle of its training data (which is… mixed, at best) and 2. *won’t tell you if it doesn’t really know* something, which is key.
    Think of it like a belligerent drunk guy, maybe? Except with vastly better grammar and sentence structure. It has format down pat, but whether you get facts or things that… well… are not facts, is a bit of a pot-luck (and you will get less lucky the more obscure – the fewer sources in its training data there are – your question is. Ask it about something it has zero data on, and it will still give you an “answer”).
    Not nerdily obscure, but demonstrating the… efficacy… of the systems:
    https://www.aiweirdness.com/ascii-art-by-chatbot/
    (that said, you can look up whether statins can mess with those blood tests and see if there’s something reputable – Mayo Clinic, etc. – that states that. It might be a good lead! But… yeah. Its accuracy on rarely-written-about topics is *really extraordinarily lousy* and yet nothing about its response indicates that it is swapping from “this information was the same on wikipedia and five other websites” to “I literally have no information on how many electromagnetic sensors giraffes have in their tails.” – or regarding diseases or medication interactions or whatever. It might be correct and pulling the information from somewhere! Or it might just be BS-ing smoothly. No citations, no difference in its confidence, no way to know without looking it up somewhere else, which sort of defeats the purpose to some degree?)

  2. KC – ChatGPT told me to tell you:
    “To the best of my knowledge, giraffes do not have electromagnetic sensors in their tails. Giraffes’ tails are primarily used for swatting insects and communicating with other giraffes, and they do not have any known special sensory abilities beyond the normal sensory capabilities of mammals. Giraffes do have a keen sense of sight, smell, and hearing, which they use to navigate their environment and avoid predators.”
    (But I take your point.)

  3. That’s an improvement – I wish I knew whether that was a trained improvement or a patched one. (they did a… lot… of patching)
    Have you tested whether it was trained on your blog yet? Can it produce a bio of your big toe or of The Hat from Queen Mediocretia? Or can it summarize a non-existent post from your blog, like it would for aiweirdness a few weeks ago?

  4. KC – I gave some hints. ChatGPT says,
    “Thank you for providing additional context. Mocklog.com appears to be a blog with humorous content, and “Queen Mediocretia” and “Spunky Labia” may be fictional characters or alter egos created by the blog’s author. It’s not uncommon for blogs to use pseudonyms or pen names for comedic effect or to maintain anonymity. However, as an AI language model, I do not have access to the content of individual websites, so I am unable to provide any further information beyond what you have shared with me.”
    I did not prompt for the “humorous” remark. Good guess, though.

  5. That’s interesting! Bing’s ChatGPT engine was perfectly willing to make stuff up about blogs a month ago, but they may have patched that hole to make the Willing To Make Junk Up aspect of the AI model slightly less obvious?
    https://www.aiweirdness.com/search-or-fabrication/
    (they have been specifically patching weird spots as glitches have been exposed, which is interesting but is not giving me optimism about the underlying structure, esp. since if you navigate around the patches, what the AI produces is still basically the same. https://www.aiweirdness.com/the-ai-weirdness-hack/ )
    In other words, I wonder if you asked it to produce a review of a non-existent Queen Mediocretia blog post with the introduction of the AI Weirdness post model, whether it’d improv stuff about fictional blog posts of yours, or not…

  6. KC – Bing doesn’t have a different version, do they? I suppose the data is all the same but maybe the interface differs?
    For all I know they’ve just got some tween typing answers.

  7. KC – also, it kept getting stuck on the “mocklog” in the URL, so all the posts it generated were about computer testing.

  8. While they’re all built on the same model, I *think* each one continues training on all text and feedback that it specifically is fed, and I’d expect some of the patches to be different (the manually-entered off-switches to tell chatGPT that if asked X, it needs to not respond regularly, but instead tell the user it is a language prediction model and can’t/shouldn’t do that, whatever “that” is).
    (with the “input is used as further training” aspect, Samsung recently proved TeddyJ’s caution to be not unfounded, incidentally. Oops. Don’t tell it things you don’t want it to tell other people… )
    That is funny that it has fixated on the mocklog and therefore keeps thinking your blog would be a test site, a la all those things it ate off git and other repository locations… oops. (which touches on another thing a lot of people don’t grasp about chatGPT: the internet is not a well-sampled approximation of all interests and objects and themes and topics in the world; chatGPT has some… skew.)

  9. (the grammar is too good for the average tween, I think. But yes. Mechanical turk and all that.)(probably not.)
