Cult of Person-AI-lity

By the actualNod · 9 min read

Look in my eyes! What do you see? 👀

Cult of personality: a deliberately created system of art, symbolism, and ritual centered on the institutionalized, quasi-religious glorification of a specific individual.

Dear gentle reader (sorry, but I have been watching too much Bridgerton lately), there has been a fundamental change in how we, as a species, behave and think thanks to the use of tools.

Tools have made us lazy

In physics, systems that transform energy tend to follow the path of least resistance. This also applies to human beings, especially when it comes to decision making (nowadays we call that laziness).

And it makes sense: we often try to reduce the effort of decision making so we can focus on more important or valuable activities and matters.

Using tools is an evolutionary trade-off to make our lives more comfortable, and also to enable us to innovate further (or at least it should be).

But as always, it is in our nature as humans to take great things and produce the worst outcomes with them (this is why I'm so interested in game theory, but that's for another time).

There is one phrase that I find accurate to support this article:

If the only tool you have is a hammer, you tend to see every problem as a nail. (Abraham Maslow)

For my point, I'd rather change the wording of this phrase from "have" to "know".

Your aunt told me

One thing I find incredible is that the concept of Artificial Intelligence (AI) has existed since 1956, and yet humanity still does not understand it well.

I don't plan on extending this article by explaining what AI is (there are complete book series, articles, videos, and podcasts that do a much better job), but people would rather accept half-baked truths than actually learn about the topic.

For one, people who half-try to understand something, and who lack the self-judgement to ask themselves whether new ideas or concepts should be accepted as they come or reasoned through further, tend to take these ideas and concepts at "face value", which can result in miscommunication.

People who do not know about AI can easily be fooled by people who think they know about it.

And miscommunication very often leads to misinformation.

Many people that lack that sense of critical thinking accept the half-baked truths they receive.

This is also part of why Social Media has been detrimental to society (I'm not saying it is all bad, but so far it has not proven to be a net positive).

Confident, "I think I know better about the subject" kind of people communicate a poorly understood idea or concept. This reaches the masses through social media, and when the masses, lacking critical thinking, accept this miscommunication... it turns into misinformation.

The biggest challenge is that, once an idea is accepted by the masses, it is truly difficult to change. Because, although critical thinking is a straightforward process, it takes time and energy.

Critical Thinking process:

Unfortunately, this is a long lost practice by now.

Garbage-in / Garbage-out

In the world of Data, there's the "Garbage-in / Garbage-out" phenomenon (to put it nicely). This principle states that the quality of the output of a process or solution is determined by the quality of its inputs.

If we connect this principle with the idea from the previous point, we can sort of infer where this is going. If the majority of people turn out to be too lazy to think for themselves and forgo critically examining the information they accept, sooner or later most of the information being spread will be of bad quality.

This is not new... This phenomenon also appears in the form of rumors. The big difference is that rumors tend to come with the predisposition that people do not care much about their veracity, while misinformation or "garbage" information comes from a genuine place of people believing in its truthfulness.

Generative AI and Large Language Models (LLMs from now on) are not exempt from this. Their usefulness is based on the data used to train them.

The good thing is that Math is on our (and by our I mean humanity's) side. LLMs are nothing but probability machines, so it should be technically impossible to create an inherently evil Artificial Intelligence. The thing is, that last statement has sadly proven to be a fallacy. All because the data actually used to train these models came, in its majority, from non-critical thinkers.

Is this real?

Since LLMs are just giant prediction machines, there is a probability that they get some things wrong.

This is what computer scientists came to call "hallucinations".

And now, the things that can add up to form a perfect shitstorm of misinformation are becoming pretty clear.

  1. Confidently wrong humans spreading misinformation
  2. Lazy, non-critically thinking humans accepting information at face value
  3. Generative AI companies using garbage information to train their products
  4. The same lazy humans from point two consuming said products, with some of them being a subset of point one

-- AND THE CYCLE REPEATS --

Since Generative AI companies do need to keep up with recent trends and information to offer up-to-date products, the new misinformation being spread by confident and non-critical thinking humans is used to feed new models.

And I wonder... how long will it take for a simple search in a search engine (which in most browsers now defaults to an LLM-based answer) to turn out confidently wrong most of the time?

An average world

Coming back to the point that LLMs are nothing more than probability machines trained on data that has already been produced, there is also the issue of lack of innovation.

The calculations these algorithms go through make it so that the most common and predictable "next in line" option is chosen as output. As a result, LLMs tend to generate average solutions.
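To make that concrete, here is a toy sketch of "most predictable option wins" (not a real LLM, and the vocabulary and probabilities are made up for illustration). Greedy decoding always returns the single most probable next token, so the rarer, more surprising choices never surface:

```python
# Toy illustration: greedy "next token" selection.
# The candidate tokens and probabilities below are invented for the example.
next_token_probs = {
    "the": 0.40,        # safe, common continuation
    "a": 0.30,
    "quantum": 0.05,    # rarer, more interesting continuations
    "iridescent": 0.01,
}

def greedy_pick(probs: dict[str, float]) -> str:
    """Return the most probable token, the way greedy decoding would."""
    return max(probs, key=probs.get)

print(greedy_pick(next_token_probs))  # the common choice wins every time
```

Real systems add sampling and temperature to inject some variety, but the pull toward the high-probability average remains.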

And with newer LLM versions going multi-modal (meaning they can process not only text, but audio and video too), and people finding new ways of leveraging these products to facilitate their daily activities, the amount of AI-generated content is increasing exponentially.

And this output is then given back as training information to the models, which in turn create a flatter average curve.
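A crude way to picture that "flatter average curve" (this is a toy simulation, not a real training loop): if each generation of a model is fit on data biased toward the previous generation's average, the spread of the data shrinks every round.

```python
import random
import statistics

# Toy stand-in for "retraining on your own average output".
# Each generation resamples around its own mean with reduced spread,
# so diversity collapses toward the average over time.
random.seed(0)
data = [random.gauss(0, 10) for _ in range(1000)]  # diverse "human" data

for generation in range(5):
    mean = statistics.fmean(data)
    spread = statistics.stdev(data)
    print(f"generation {generation}: spread = {spread:.2f}")
    # The "model" reproduces the data, biased toward the average:
    data = [random.gauss(mean, spread * 0.5) for _ in range(1000)]
```

The shrinking spread is the metaphor: each pass keeps the average and loses the outliers, and the outliers are where the interesting ideas live.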

This is one of the differentiators that could give humans an edge over LLMs. And truthfully, it is the thing that still keeps my hopes up.

I want to believe that at some point we will go back to giving more value to products or services that carry the "imperfect/perfect" touch of human creativity than to the "best possible average" outcomes a machine can calculate.

That being said, I am not absolutely anti-AI. But I do feel that we are stretching its usefulness by applying it to problems it should not be tackling.

The Great Enshittification

We are just starting to surf a rising wave of new operating models, as Generative AI companies look for commercial outlets to provide profits for their shareholders.

What people do not realize is that, if unregulated, the companies controlling how this content is produced might embed other commercial products in the form of advertisements, or even worse, tailor specific results to serve politically driven agendas.

It would not surprise me if, in a not-so-distant future, you try using generative AI through a vendor like OpenAI, Anthropic, or X, ask it for a meal plan to reach your target weight, and it recommends only specific brands of food from companies that partner with these service providers.

Good Idea / Bad Idea

✅ Good ideas for Generative AI usage:

  • Creative inspiration: Asking it for topics to write your next article about
  • Learning paths: Having it create a customized learning path for a skill you want to pick up
  • Summarizing content: Sometimes we don't get our daily couple of hours to lurk around different forums and read all the news we are interested in. Letting AI summarize content for us can be a huge time saver.

❌ Bad ideas for Generative AI usage:

  • Using it to write an entire article for you. People tend to notice when something is written by AI, and it damages your credibility.
  • Using it for something where you should consult a professional. Creating a dietary plan without consulting a professional can result in permanent damage to your health.
  • Letting it automate things with full access. You don't want to program an AI to handle your investment portfolio, only to wake up to an empty bank account.

Some final words

It is true that models keep getting better and better, but how people use them, and the information ultimately used to re-train them, is objectively getting worse.

And this article only focuses on the way the vast majority of people are getting used to AI: either through a search engine or through the various web and smartphone applications Generative AI companies offer today.

My recommendation is to use these products and take their answers with a grain of salt. Be critical about the content they output, and learn how, and for what, to use these tools.