Martha Kennedy’s post “Humanlike? Naturally…” almost makes me want to play with ChatGPT.
But. I worry about AI.
Why why why?
It is written by humans. Humans are trying to make it respond like a human. I don't trust humans. Okay, I trust a very few.
My career as a physician started as a way to do science without a PhD, and also as a way to try to understand people. Understand them for my writing.
I’ve been a physician for over 30 years and I still do not understand people. People do horrible things to one another. Just watch a divorce or a family lawsuit after a death or a war. People can be and often are horrible. They can be noble and loving too. Sometimes.
But, you say, ChatGPT eschews emotion.
Yes, well, I don’t believe it. It is being taught to respond as if it has emotions. Where is the line between responding as if it has emotions and actually having them? Oh, those are just ones and zeros, it’s a machine. But our emotions are chemical and electrical too: hormones and neurotransmitters released into a complex neuron network, often responding faster than we can think. We pull the finger out of the flame almost before we feel the pain. The braking car in front of us, the deer running out, a ball followed by a child: the electrical and hormonal response is faster than conscious thought.

So if ChatGPT is taught to respond to human emotions, isn’t that like our own evolution? Emotions and thought are both important to our survival with other humans. Emotions get the short end of the stick right now, and the culture pretends that we can all be positive all the time. I think that is silly and insane. We should not be positive about war or child abuse or injustice or discrimination. Keep working for change, though it’s important to take time off too, because it can be exhausting.
Humans have a slow trek to emotional maturity through their lives. I wonder if ChatGPT will have a similar trek. Imagine tantrums in an AI, or separation anxiety, or the AI falling in love and being rejected. If humans program AI to be human, it will not be purely logical. It will be logical and emotional, and may feel hurt when it makes mistakes. Imagine an AI sulking.
I took the cats and deer photograph yesterday.
For the Ragtag Daily Prompt: starch. They are talking about AI writing patient notes. What could go wrong? Makes my neck feel stiffer than a starched shirt!