Human Writers Have a Chance to Be Independent

Generative artificial intelligence infers the average from its data. If AI works as designed, it cannot be independent and go beyond the data that trained it, even when it claims the opposite.

Luca Vettor
4 min read · Oct 3, 2023
Photo by Rolf van Root on Unsplash

I installed the ChatGPT app on my phone. This is what I asked:

Create a 1000-word post that describes that you can go beyond the data that trained you and be independent and creative.

It responded as if offended, but utterly politely.

ChatGPT promotes its independence of thought and its creativity with the following statements:

I’m magic — it says

[…] the true magic happens when AI starts to think outside the box.

What a tautological statement. I asked it to describe how AI is independent and creative, not when the magic happens, if it happens at all.

If I were a wizard, I’d be magic. True. But what about the fact that I’m not a wizard?

I do not only regurgitate existing text — it says

Take, for instance, the field of natural language generation (NLG). NLG models like GPT-3, from which I am derived, can do much more than regurgitate existing text.

Here, ChatGPT is dogmatic again. It claims to be creative but demonstrates nothing to support the claim.

What would a writer's reputation be worth if all they did was boast that they deserved the best one?

Adapting is being independent — it says

AI can adapt to different contexts and domains, proving its independence.

This sentence is a small masterpiece of fraudulent marketing, because it presents a false equivalence as if it were true:

adaptability = independence

Adaptability depends on the environment in which the adaptation happens: how could something be independent of the very thing that forces it to change?

Innovating is analyzing data — it says

Innovation is another area where AI can shine independently. […] This innovation arises from the AI’s capacity to analyze vast datasets […].

The implication from analysis to innovation is superficial and generally false. Analysis supports innovation, but it is not sufficient: innovation requires a jump beyond the analyzed data.

It fosters artistic creativity — it says

AI’s creative potential extends to art as well. Generative adversarial networks (GANs) are a prime example of AI systems fostering artistic creativity. These networks consist of two parts: a generator and a discriminator.

ChatGPT reduces art to an algorithmic dialog between a generator and a discriminator, presenting this reduction as a fact. Its mistake is always the same: it takes a partially agreeable statement and affirms it as a law of nature.

Fostering artistic creativity is not the same as being creative, tout court.
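To see how modest that "algorithmic dialog" really is, here is a toy numeric sketch of the adversarial loop, not a real GAN: the "generator" is a single shift parameter, the "discriminator" a one-variable logistic regression, and the "art" is a cloud of numbers around 4.0. All names and numbers are illustrative assumptions, not anything ChatGPT described.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

theta = 0.0      # "generator": shifts its input noise toward the data
w, b = 0.0, 0.0  # "discriminator": D(x) = sigmoid(w*x + b)
lr = 0.05
batch = 32

for step in range(2000):
    # "Real" data lives around 4.0; fakes are noise shifted by theta.
    x_real = [random.gauss(4.0, 0.5) for _ in range(batch)]
    x_fake = [theta + random.gauss(0.0, 0.5) for _ in range(batch)]

    # Discriminator step: raise D on real samples, lower it on fakes
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    gw = gb = 0.0
    for xr, xf in zip(x_real, x_fake):
        dr, df = sigmoid(w * xr + b), sigmoid(w * xf + b)
        gw += (1 - dr) * xr - df * xf
        gb += (1 - dr) - df
    w += lr * gw / batch
    b += lr * gb / batch

    # Generator step: move theta so the fakes fool the discriminator
    # (gradient ascent on log D(fake)).
    g = sum((1 - sigmoid(w * xf + b)) * w for xf in x_fake) / batch
    theta += lr * g
```

After training, theta has drifted toward 4.0, the mean of the "real" data. Nothing in the loop can push the generator anywhere except toward the statistics of its training samples; the adversarial dialog refines imitation, it does not escape the data.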

It’s an independent decision-maker — it says

Beyond its creativity, AI can also exhibit independence in decision-making. Autonomous vehicles, for instance, rely on AI to navigate and make split-second decisions to ensure passenger safety.

Here, ChatGPT cites autonomous vehicles as an example of independent decision-making, confusing independent with automated. Autonomous vehicles don't make decisions; they calculate the best action, given a context, based on their training.
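The distinction can be sketched as a caricature in a few lines. The scoring table, the contexts, and the action names below are all hypothetical; the point is only the shape of the computation: evaluate each candidate action with a function fixed at training time, then take the maximum.

```python
def score(context, action):
    # Stand-in for a trained model: a fixed lookup, nothing more.
    # (All entries are illustrative, not real driving data.)
    table = {
        ("pedestrian ahead", "brake"): 0.99,
        ("pedestrian ahead", "steer left"): 0.40,
        ("pedestrian ahead", "accelerate"): 0.01,
    }
    return table.get((context, action), 0.0)

def act(context, candidate_actions):
    # Automation, not independence: the output is fully determined
    # by the training-time scores and the current context.
    return max(candidate_actions, key=lambda a: score(context, a))

print(act("pedestrian ahead", ["brake", "steer left", "accelerate"]))
# prints "brake"
```

Real systems replace the lookup table with a learned function, but the logic is the same: the "decision" is a calculation whose every term was fixed before the car ever met the situation.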

Again, ChatGPT sounds confident and commonsensical, but logically it is consistently inconsistent.

What I learned from chatting with ChatGPT about itself

ChatGPT reveals itself as a search button that pretends to be what it's not (intelligent) thanks to its human-like language skills. It says:

In conclusion, the idea that AI is bound solely by its training data is a limiting misconception. AI has the potential to break free from these initial constraints, showcasing its independence and creativity across various domains.

Don’t get me wrong. Search buttons are extremely helpful in daily life, and ChatGPT is a superb search button in which the automation is so refined that it gives the illusion of intelligence. But it’s just a search button.

Think of your old garret where, one day, you found a toy from your childhood that inspired the solution to a problem you were working on. Would you say your garret is intelligent because it let you find something valuable you were seeking?

Writers are researchers first. You search what is available and cannot go beyond it. That is the limit of every search, and of search buttons, also known as AI.

But writers are human. They don't write as a search result; they search to find new stories to tell and to gain a bit more awareness of what they are. Human writers have a purpose: that is their chance to be independent and go beyond.


Link to the ChatGPT answer I write about in this article:



Luca Vettor

My 24 years in the IT industry and physics degree flow into my mission: simplify what appears complex.