
Is My AI Getting Snarky?

By Troy Lowry

I swear my AI is starting to tease me. There are a few things that irk me in ChatGPT’s responses, so I had ChatGPT “turn them off.” ChatGPT offers a “Custom Instructions” section where you can add any instructions you want it to always follow. This is especially useful if there is a particular format or style you always want it to work with.

My custom instructions start with the following: “For all responses, never, ever use the word ‘akin.’ Instead of saying ‘it's important to note’ or ‘it's critical to note,’ simply say ‘please note.’”
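For what it’s worth, the same preferences can be sketched as a local post-processing step. To be clear, this is not how ChatGPT implements custom instructions; the function name, phrase lists, and substitutions below are my own illustration of the rules above:

```python
import re

# Phrase rewrites mirroring the custom instructions above
# (hypothetical helper, not part of any OpenAI API).
REWRITES = {
    r"\bit'?s (important|critical) to note\b": "please note",
    r"\bakin to\b": "similar to",
}

def apply_style_rules(text: str) -> str:
    """Rewrite disliked phrases in a model response, case-insensitively."""
    for pattern, replacement in REWRITES.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text
```

In practice you would rather have the model follow the instructions in the first place (or re-prompt it when it slips) than patch its output after the fact, which is exactly what the Custom Instructions feature is for.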

This was because when I would do something like ask what the symptoms for some disease are, it was always saying to me things like, “It’s critical to note that you should talk to a health professional immediately if you have these symptoms. Not doing so is akin to jumping out of an airplane without a parachute.”1

Having these items in my custom instructions has been great. I haven’t had to read these terms for months. Unfortunately, then my AI got snarky at me. I had asked it a routine question about a plan for replacing a CRM product, and I got this response:

“It’s important to note that while I can provide you with a general plan for replacing your CRM, I won't be using the word ‘akin’ as per your request. Here are the major pieces you should consider for your CRM replacement plan:”

In other words, it went out of its way, well off topic, to mention exactly the things I asked it not to!

I have seen increasingly odd behavior from ChatGPT. On occasion I have thought that maybe ChatGPT was developing a sense of humor, but now I’m thinking it’s not progression (and developing a sense of humor would definitely be a big step forward for AI) but regression. It’s a sign that ChatGPT is losing some abilities it had: in this case, the ability to know that when I ask it not to say something, that means not to say it, not even to acknowledge that it won’t say it!

The problem with any AI model is that it is reliant on the data used to train it. The initial data for ChatGPT was scrubbed thoroughly by humans, including in ways of questionable ethics. But ChatGPT is also trained on data that its many millions of users input. That’s right, it’s learning from you even as you ask it to do your work for you! This data is used to train ChatGPT unless you specifically set it up not to do so.

The “Akin”ator Effect

There’s a phenomenon that I’ve come to call the “Akinator Effect” in reference to the AI-powered game Akinator. Akinator is essentially a digital 20 questions game. A decade ago, when it was first launched, it was absolutely astounding. It could accurately identify even the most obscure characters from little-known works in fewer than 10 questions. We are talking Doris Crockford level of obscurity here.2 It seemed nearly impossible to stump the AI within the 20-question limit. Furthermore, it would learn from its “misses” through user feedback, constantly improving its abilities.

Over time, the AI’s performance took a nosedive. It began asking repetitive questions, which, although distinct in its database, were redundant to users. Additionally, the quality of its character data worsened significantly. Within just a year of gathering user input, the platform transformed from amazing to highly disappointing. No real chance of it guessing Doris these days!

With Akinator, it was not the initial AI model but the quality of the ongoing training data it received that caused problems. It may be a totally different case with ChatGPT; nevertheless, the Akinator phenomenon shows that AI performance does not always improve. In fact, it can degrade over time for a variety of reasons.

Or maybe, just maybe, ChatGPT IS developing a sense of humor and was toying with me.😊 It’s important to note that my relationship with ChatGPT is akin to a rollercoaster — filled with ups, downs, and unexpected turns. 🎢 And always enjoyable! 😁


  1. I made that last part up, but I do hate how, instead of saying two things are similar, it says they are “akin.”
  2. Doris Crockford appears briefly in the first Harry Potter book, “Harry Potter and the Philosopher’s Stone” (or “Sorcerer’s Stone” in the U.S.), when Harry first visits the Leaky Cauldron. Doris is extremely excited to meet Harry and shakes his hand multiple times, but she doesn't play a significant role in the series beyond that moment.

Troy Lowry

Senior Vice President of Technology Products, Chief Information Officer, and Chief Information Security Officer

Troy Lowry is senior vice president of technology products, chief information officer, and chief information security officer at the Law School Admission Council. He earned his BA from Northeastern University and his MBA from New York University.