💾 Archived View for gemlog.blue › users › verachell › 1645235956.gmi captured on 2024-05-10 at 15:29:51. Gemini links have been rewritten to link to archived content
-=-=-=-=-=-=-
Note: This post is a Gemini space version of my post originally published on November 27, 2021
Recently, the Photos app on my Android phone automatically curated some of my photos into a "highlights" album for me. I thought this was a fantastic idea. I love this use of AI - it does something quickly and easily that would otherwise take a lot of human time.
The only downside was that the AI included a close-up pic I'd taken of a spot on the skin behind my husband's ear so he could see it. Now, if this AI had instead been driving a car, an equally bad mistake could have caused a terrible wreck - endangering me, any passengers, and other drivers on the road. But since this application of AI for photo selection did not have any life-threatening consequences, I was all for it.
I've said before, and I'll say it again: AI as it currently stands should not be used for self-driving purposes, and I have explained clearly why.
My brief adventures in neural networks
Indeed, I believe that AI should not be used for any purpose that may have life-threatening consequences.
Heck, anyone who has read Cathy O'Neil's book Weapons of Math Destruction can see plenty of examples of AI or algorithmic approaches with horrifically negative consequences that no one would want to experience, even when they aren't life-threatening.
[https] O'Neil, Cathy. Weapons of Math Destruction. Goodreads.com
And even from the tech giants, facial recognition AI as it currently stands is laughably inaccurate.
Just because AI is in use in some areas does not necessarily mean it should be. I'm all for it in applications such as sorting photos, or suggesting less-important files to delete, or something like that. In these cases, the good usage rests on two things:
1. The non-life-altering consequences of the application if it were to get things badly wrong
2. The ultimate choice of action being presented to the user and not automatically acted upon by the AI