Human + AI > AI. ChatGPT Is Useful Because It “Thinks” Differently Than Humans Do.

By Troy Lowry

Anthropomorphism is defined as "the attribution of human traits to things, such as animals, plants, or inanimate objects." The concept is commonly used in literature, art, and religion to make non-human characters or things more relatable to humans.

In this blog, I will talk about the tendency people have to attribute human qualities to objects, including AI. I will argue that instead of treating AI as human, we should recognize that it is not and celebrate the difference.

I have a couple of deep philosophical ideas that I mask as jokes.1 One of them is: “It’s awful the way we anthropomorphize our babies.”2 Take a second and really think about that. It’s OK; I’ll wait. I have time… Sitting with that statement forces you to confront the question, “What does it mean to be human?”

Obviously and undeniably, babies are human. They may not speak or walk, but they are human. I have an adult autistic son who is barely verbal and struggles greatly to communicate, but he’s as human as they come. So, although many humans speak and walk, doing so is not part of what makes us human; if it were, newborns would not be human.

On the other hand, almost everyone would agree that ChatGPT is not human. Most AI experts say AI cannot think because it is, at its core, a bunch of statistical routines. You heard that right. At the heart of ChatGPT’s impressive reasoning and writing is statistics.3 This is no doubt a shock for all of you who went into English or law to avoid math. We should stop giving AI human characteristics.
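To give a flavor of what “statistics at the core” means, here is a toy sketch of the basic move a language model makes: picking its next word by sampling from a probability distribution. The words and probabilities below are invented purely for illustration; real models work over enormous vocabularies and billions of learned parameters.

```python
import random

# Toy illustration of how a language model picks its next word:
# given the text so far, it holds a probability for each candidate word
# and samples one. (The words and numbers here are made up.)
next_word_probs = {
    "dog": 0.55,   # "The quick brown fox jumps over the lazy ..."
    "cat": 0.25,
    "river": 0.15,
    "idea": 0.05,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Run it a few times: usually "dog", but occasionally a less likely word.
for _ in range(5):
    print(random.choices(words, weights=weights, k=1)[0])
```

Because the pick is probabilistic, the same prompt can produce different words on different runs, which is exactly the kind of variability the next few paragraphs are about.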

There’s been a lot of talk recently about AI “hallucinations.” Put simply, an AI hallucination is when the AI produces output that is erroneous or unexpected. This is in stark contrast to humans, for whom hallucinations are defined as sensory experiences that appear real but are created by the mind rather than by external stimuli.

In other words, simply being wrong is not a hallucination, unless you are an AI. This is why I brought up anthropomorphism. Calling AI errors “hallucinations” gives AI human characteristics and implies that it is human. Anthropomorphizing AI in this way overlooks AI’s greatest strength: that it comes to decisions differently than humans do. As studies show, diverse groups outperform homogeneous ones, and this holds for groups with an AI member as well. More on that later.

As I said, AI doesn’t have hallucinations, but it does have errors. These errors have several major sources.

First and foremost, since AI is built on statistics (in a word, probability), any time you are dealing with probability there is a chance, however small, that wildly unlikely outcomes will occur. For instance, it is wildly unlikely that you will win the big jackpot in the Powerball lottery (a 1 in 292.2 million chance), but it is possible. There are actual winners.4

In that same vein, probability is used both in the “training” of the AI, which is the way the AI “learns,” and in the output to any query. In training, billions of random numbers are used to build the model. All told, trillions of probabilities were used, and with so many numbers, strange things, often not easily reproducible, occur. If you bought a trillion Powerball tickets, you’d win the jackpot thousands of times (on average, about 3,422 times). Given enough chances, even the wildly improbable happens.
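To make that arithmetic concrete, here is a short sketch of the expected-value calculation. It uses the 1-in-292.2-million odds quoted above; the ticket counts are just illustrations.

```python
# Expected number of Powerball jackpot wins, assuming each ticket is an
# independent 1-in-292.2-million chance (the odds quoted above).
JACKPOT_PROB = 1 / 292_200_000

def expected_wins(tickets: int) -> float:
    """Expected wins = number of tickets times the probability of winning per ticket."""
    return tickets * JACKPOT_PROB

print(expected_wins(1))                  # ~0.0000000034: one ticket, essentially zero
print(expected_wins(1_000_000_000_000))  # ~3,422: a trillion tickets, thousands of wins
```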

Another major factor is that AI is trained on vast amounts of text from the internet. It doesn’t take much experience with the internet to know that it is far from a font of perfect knowledge. Given the challenge of learning from such an imperfect teacher, ChatGPT and other AIs do a remarkable job of being accurate most of the time.

Lastly, ChatGPT is programmed to give an answer. Like a child so eager to please that it speaks too quickly, ChatGPT can be so eager that it gives incomplete or downright inaccurate answers.

An example: I asked ChatGPT about the meaning of the song “Lassie Come Home” by the band A-Ha. This is one of those songs that makes so little sense it is either extremely deep or entirely meaningless; I strongly suspect the latter. ChatGPT responded with an amazingly rich answer, using specific lyrics from the song to support several specific themes about longing and homecoming.

Unfortunately, I had made a mistake. The song I was thinking of was by the band Alphaville; A-Ha never recorded a song by this name. A quick Google search didn’t find the lyrics ChatGPT quoted in any song by any artist. ChatGPT made them all up. It made up a very convincing set of fake lyrics to support non-existent themes for a fictional song.

Highly concerning, to be sure! However, my feeling is that the hype about these mistakes has been taken too far. We are used to computers always giving us the exact same results from the same inputs, so we are not well prepared to deal with varying results, a small percentage of which are not factually correct.

I would counter that human writers and editors also make frequent mistakes. While they don’t usually make up song lyrics out of whole cloth (I hope!), they often get facts wrong or make edits I disagree with.

In either case, I must be vigilant and make sure that I agree with the suggestions made.

Human + AI > AI

My blog posts all share a common theme: humans and AI working together do a better job than AI alone. Because AI arrives at its results differently than humans do, it can add a new element that makes your work stronger.

I take all my blog posts and run them through a private version of ChatGPT (for data privacy reasons), asking it how I could make my writing better.

It usually gives me between five and seven recommendations, and I find myself acting on only about one in three of them.

This may seem like a poor hit rate, but it’s incredibly useful, in part because the recommendations come back fast, within a minute of being asked. If I asked a human editor for recommendations, they would likely have a better hit rate, but even in the best case it would take an hour or so to get results. Immediate results of good quality are often more useful to me than a more expert opinion later.
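If you wanted to script a similar editing pass yourself, it might look something like the sketch below, which uses the OpenAI Python SDK. I actually work through a private ChatGPT instance in a chat window, so the model name, prompt, and setup here are illustrative assumptions rather than my exact workflow.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

def get_editing_recommendations(draft: str) -> str:
    """Ask the model for a handful of concrete suggestions to improve a draft."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name; use whatever model you have access to
        messages=[
            {"role": "system",
             "content": "You are an editor. Give five to seven specific, "
                        "numbered recommendations to improve this blog post."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# Example usage: paste in a draft and judge the suggestions yourself --
# in my experience, only about one in three is worth acting on.
print(get_editing_recommendations("Paste your draft blog post here."))
```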

I still use human editors, but only after I use the AI editor. The results? The human editors consistently tell me that my writing quality is improving.

I, for one, am glad that AI “thought” isn’t the same as human thought. That difference allows us to work together to create a better end product than either of us5 could alone. Whether ChatGPT actually thinks or not, it is an incredibly helpful tool.

AI may “think” differently, but vive la différence!


  1. There is a rumor that I started this blog as an outlet for dad jokes. I will neither confirm nor deny whether this rumor is correct.
  2. One of my favorites is: “There are two types of people in the world. Those who create false dichotomies, and those of us who don’t.”
  3. Since we don’t fully understand how people “think,” it seems to me that human thought might all be based on statistics also.
  4. Another one of my philosophical statements masquerading as a cerebral joke is when I say I hope I win the lottery. Surprisingly often people will ask “Do you play?” To which I respond, “No. But that really doesn’t change my odds.” If someone presses the issue I will say, “I find them on the ground from time to time. Finding a ticket and winning the jackpot with it is just about as likely as hitting the jackpot if I buy a ticket.”
  5. Here I go anthropomorphizing AI by saying, “Us”! I don’t say “Us” when I use a calculator. At some basic level, AI just feels different, even if it doesn’t actually think.