What Is AI Doing to Us?
I recently came across this article:
Which referenced this paper:
To remind you, the Dunning-Kruger Effect occurs when a person with limited skill believes their poor performance is superior because they lack the ability to recognize their own shortcomings. Essentially, this is the definition of false confidence.
The article pointed out that AI lowers the bar, meaning that an intelligent person who uses AI to generate work is lulled into thinking poor results are of better quality. In other words, using AI reduces our ability to evaluate our own work.
I found the article a fascinating read, but it led to a question. What about me? Am I affected, and if so, how badly? I have shared in prior articles that I do not like AI-generated content because it contains errors, and I do not enjoy its long-winded style. And I certainly do not present AI-generated work as my own.
Yet, I do enjoy some AI-generated videos and some artwork. I use AI in the narrow application of inspiring (not writing) character/scene descriptions and vetting my outlines. I have never directly used AI-generated content except for a few articles where my readers knew in advance what I was up to.
During those limited uses, how did I feel about the result? Was I confident about those few articles with AI content? How about those outlines and scenes that were AI-inspired?
It turns out that I am doing some outline work this week, and I have used AI to vet some of my ideas. Here is a made-up example of how I use AI for outlining. “My character is a butcher at the local store. List some job obstacles she might face.” “Her favorite boning knife snapped in half, and a replacement will arrive in one week.” “The floor had water on it, and she slipped, hurting her ankle.” I would then think, “Yeah, I can see that breaking a knife would be a good reason to be upset. I’ll put that in the outline.”
How does using AI to help my outline make me feel? During my outline work, I saw improvement, which felt good, and that means I do have confidence in the result. This seems to confirm that the Dunning-Kruger Effect is present, but I do not think this is quite the point, because I am not submitting AI-generated material as my own.
What about the articles I have written that had AI-generated content? In a recent one, I compared multiple AI chatbots for their ability to improve text. This article was well received, and I was proud of the result. But was it good? Well, some time has passed, so let’s find out.
After reviewing the article, I did spot some areas that needed cleanup, but those were my words, not the AI-generated ones. Still, I liked the concept and the execution. To me, the AI-generated elements, while critical to the article, did not matter because I had no control over the content.
This did not seem to be getting to the core of the subject. So, I pivoted and used ChatGPT to write an article. As you are 100% aware, I wrote this article last week:

I asked ChatGPT, “Write an article about how Bad Ethics leads to Good Fiction.” And it immediately failed because it missed the entire point. ChatGPT wrote an excellent list of unethical traits in characters that would lead to exciting plots. Plus, it was not an article; it was a list.
I tweaked the text prompt five times to better capture the concept, and the result was nearly identical. To fix the issue, I entered the prompts into Microsoft Copilot and Google Gemini, but without success. AI completely failed.
This made me happy because I was a better writer than AI, but proving that was not my goal. I pivoted again and looked at an article from 2023, which described my first use of ChatGPT.
My evaluation was that this was a well-written article by my standards. Nice. And the AI-generated content? Also well done. But what I was missing was an evaluation of the original AI-generated quality. So, I asked ChatGPT, “Evaluate the quality of this story.”

Prose & Style:
Strengths
Clear, readable, and accessible.
Efficient pacing and cause-and-effect flow.
No grammatical issues or confusing sentences.

Weaknesses
The prose is mostly serviceable rather than vivid.
Many phrases are abstract (“fallen from grace,” “drowning in self-pity,” “incredible life”).
Informational telling outweighs sensory detail and dramatization.
Emotion is summarized rather than shown.

ChatGPT gave its own story a mixed review, yet I was impressed with the original, which lends some weight to the idea that I am succumbing to the Dunning-Kruger Effect. Still, I need a little more convincing.
So, I took a walk and thought about the issue. It occurred to me that being creative is hard and using AI is easy. We tend to equate speed with skill: the more skilled a person is, the faster they create good material. Is this the source? “This AI-generated text came out fast. It must be really good.” I suppose it is.
This makes sense because that 2023 article practically wrote itself. So, yes, AI is inflating my sense of quality, and the study is valid for me. Wow, this was not what I was expecting. I had hoped to somehow prove that AI had no effect on me, but it does. This is going to give me a lot to think about.

You’re the best -Bill
December 03, 2025


BUY MY BOOK
Read my next blog.
The Batman Effect

Copyright © 2025 Bill Conrad