Some things AI is likely NEVER to get right…

Image: Wild mushrooms in a field. Some of the books refer to smell and taste as ways to identify mushrooms, which experts say ‘should absolutely not be the case’. Photograph: Justin Long/Alamy.

This Guardian article, titled “Mushroom pickers urged to avoid foraging books on Amazon that appear to be written by AI”, follows more or less hotly on the heels of the rather science-fiction-like story of the Australian woman who was recently found to have a live parasitic worm in her brain (oh yum…), thought to have been acquired from foraged greens. Both are colorful and shocking reminders a) that eating is and has always been something of a risky business, and b) to be very, very careful about where you get your information.

In yet another recent article (“Supermarket AI meal planner app suggests recipe that would create chlorine gas”), the store, hopefully with good intentions, was using AI to help shoppers create recipes from leftovers. Which is fine, but with something less than discrimination, the app will happily throw together something that includes bleach, for example. The … odd … recipes also come with cheery commentary: “Serve [the chlorine-gas-producing recipe] chilled and enjoy the refreshing fragrance.” Yes, well. I see a whole new genre of murder mysteries. “But, Officer, how was I to know that adding bleach would be poisonous? The recipe called for it. It’s not my fault.”

I think back as well to some of the older science fiction that intimated that robots would perhaps not be our friends…