India Leap Month


As I said last week, the daffodils are on their way.

I needed to draw a daffodil on the calendar to mark the last day of February, which is often the advent of the daffodils. And for that, I needed to know if the last day was the 28th, or the 29th, so I asked Alexa, “Is twenty twenty two a leap year?”


Alexa said yes, the “coming leap year is 2022.”

Cool, I said.


And YES I KNOW ALEXA LIED. I know that because people at work corrected me.

I needed proof that it was not my fault, that Alexa had lied, and that was when I noticed that her reference was the India Times.

“Could India have different leap days?” I thought, and yes, they do indeed. There are leap days and leap MONTHS and they add and remove them at will based on complicated rules.
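For the record, the Gregorian rule for leap years is simple enough to check yourself instead of asking Alexa. A minimal Python sketch (the function name is mine, not anything Alexa uses):

```python
def is_leap_year(year: int) -> bool:
    """Gregorian rule: divisible by 4, except century years,
    unless the century year is divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(2022))  # False -> February 2022 ends on the 28th
print(is_leap_year(2024))  # True  -> the actual "coming leap year"
```

So the daffodil goes on the 28th.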

Of course, everything in that article might be a lie too. Now that I know I can't trust Alexa, I have no confidence in anything.


11 responses to “India Leap Month”

  1. This is part of the reason why using AI for important things (criminal justice cases, resume prediction, etc.) is a HORRIBLE IDEA, because if (really, “when” – all our datasets have issues) the AI is “learning” from something irrelevant or erroneous (or, as in the case of resume predicting, further emphasizing biases that existed but were less strong in the dataset handed to it; the AI that decided that being a woman was enough to predict that a candidate would be less qualified for comp sci positions because behold, fewer women are hired…), you will never know what has gone wrong, unlike when Alexa makes decisions based on the India Times.
    And yet, here we are.

  2. What was the AIApp supposed to do? (it’s not that all AI is bad; I love Google’s AI-based “were you searching for?” thing, except when they force it so the search results are what they thought you were searching for and won’t let you swap back; some percentage of the time, I really *am* searching for the more obscure thing that is also a common misspelling of a normal search term…)(you just always need it to be verified and verifiable, and people are leaning more on black-box “of course it works, it’s AI trained on [x]-million examples!” and that is… really extraordinarily bad in many important situations, esp. medical and judicial.)
    (of course, sometimes humans miss things, too; my grandfather was *furious* when it turned out that my step-grandma could have had her lung cancer caught at a time when it would have been easier to treat, if the person checking her shoulder x-ray had glanced at all at the lung also in the x-ray. But an AI, unless it was told to alert for *all* potential problems rather than only at the shoulder joint, would certainly have missed it, whereas a human might or might not catch the “there is a problem in addition to what we were looking for” thing.)

  3. Ultra-basic customer-service chat, as long as people have a No Really I DO Need A HUMAN way to opt out, is something that I think AI could probably do, although man, I’d rather have 1. useful automated menus for the most common, mechanically fixable cases and then 2. helpful-if-bored humans and a slightly more expensive product.
    Chat for training would definitely be AI-able, though, I think. You’d have to make sure it covered the right cases…
    Chat for entertainment and/or for seeing how well you can do with the Turing test: absolutely. 🙂

  4. Do you have to get permission for all software, or only for software *purchases*…? (and what language or data format are you working in?)

  5. KC – oh it is very strictly regulated. If there’s free software I downloaded, it gets wiped off remotely. All must conform.

Comment, even if you aren't on WordPress. Make up a name. Fine by me.
