Who Has Better Decision-Making Ability: Artificial Intelligence or Humans?

An argument for humans

Artificial intelligence has been around for a while. If you’ve ever asked Siri or Alexa a question, you’ve used AI: a helpful, if limited, service. But lately, ChatGPT has pushed it into mainstream consciousness.

Why ChatGPT now, when it’s been available to the public since November 2022? I believe the rivalry between Google and Microsoft pushed the subject of artificial intelligence into the open, and it’s generally recognized that artificial intelligence is entering its second phase.

Seven months from the launch of a product to it becoming a popular discussion topic is quick. In fact, this past week, one of the UK’s popular morning programs was devoted to artificial intelligence: what it is and what it isn’t, what it can and what it cannot do. The speed is astounding.

So, artificial intelligence is the talk of the town—in the UK, at least. Some people express fear of a potential dystopia where AI takes over. Some express curiosity while others dismiss it.

Me? As a dystopian writer who incorporates tech into her stories, I’m not concerned.

That might sound strange coming from me, so I’ll expand on that.

As part of my research for book one of the Rising World series, The Truth Effect, I learned as much as I could about artificial intelligence. That was about five years ago, when information about AI was scarce. Since then, research on AI has become far more accessible, and the products keep advancing. And yet, still, I’m not worried about artificial intelligence.

I’ll tell you what worries me, though. It’s when I hear someone say something like, “AI is better than humans at decision-making.”

That worries me.

And then, I disagree vigorously.

We are sentient beings who face ethical questions in our daily lives, and ethics color our perceptions. We are not less than AI, or slower than AI, because of that. AI doesn’t have to weigh whether to make a nut-free apple pie for the friends-and-family picnic, or the emotional fallout of honest criticism of Aunt Melba’s artwork that looks like she just tossed paint at the canvas. Humans naturally take the ethical and moral aspects of life into consideration.

AI doesn’t know Aunt Melba’s sensitivities, and it would most likely spit out an analysis of the strokes anyway. Not much feeling there.

On a mass level, AI cannot take the pulse of a nation and then craft beneficial policy that will move us forward.

The British Government wants to make the UK a world leader in AI, in some fashion. Though they haven’t detailed what that goal will look like, they hope to make maths central to the school curriculum. I talked about it in my blog post 2023: A vision for the UK.

That is quick decision-making, despite a government weighed down by scandal, bureaucracy, vested interests, and agendas. (I ignore the critics of this policy, and I will cast a vote for this government, and any future government, that includes this AI policy in its manifesto.)

I asked ChatGPT what limits it should have. Its answer: “As an AI language model, ChatGPT should have certain limits to ensure responsible and ethical use.” It went on to detail various ethical considerations.

Machines don’t feel, nor can they make moral choices. So, who’s programming the ethics of a machine? And since the age in which we live colors those ethics, who is deciding what goes into the ethics programming? What side of the political spectrum will these ethics come from? And can people update the ethics program on a regular basis?

As far as I can see, people are making all these decisions.

Humans have been making decisions, weighing moral considerations automatically, for as long as we’ve been on this planet. On a mass level, we debate one another, and then people vote.

Point to humans.

Neuroscientists are studying us in order to build AI, and computer scientists are modeling our brains’ functions for AI. Nature is superior to a replica of nature.

Point to humans.

Finally, human potential is barely being tapped.

It used to be believed that the human brain stopped developing at around 25 years old. Advanced imaging now shows that neuroplasticity is present at any age.

We know more about ourselves now than we did a hundred years ago (and yes, it bothers me that it took ambitious scientists who wanted to build AI to reach that understanding, but I’ll leave that detail on the battlefield). We can now take that knowledge and make the most of ourselves, by choice.

No artificial intelligence machine can do that.

Point to humans.

Through self-aware action, we can choose deliberate habits, increase our neuroplasticity, and develop ourselves at will. Humans are way ahead of AI, and always will be, because we have the power of free will.

Artificial intelligence has no choice. It gets programmed.

Mega points to humans.

AI, a superior decision-maker? It’s a pre-programmed set of algorithms that gathers information we’ve already thought of and spits it back at us. Humans create new ideas naturally and creatively.

And now that we have so much more self-awareness, we can strengthen ourselves, too.
