We Are Not Doing AI

By the actualNod · 6 min read

TL;DR

Most companies aren’t “doing AI”; they’re building software that happens to use AI. Confusing the two leads to poor decisions, wasted resources, and unnecessary complexity. Treat generative AI as just another component in the stack, not a whole new discipline, and projects will make a lot more sense.

With the uncontrolled roll-out of Large Language Models (LLMs) to the general public, many of us can now say "I'm doing AI".

The Allure of Technology Democratization

Companies released products whose effective use would once have required a high level of skill, knowledge, and years of experience, and tweaked the user experience so that the masses could have access to them.

There is something almost magical about turning something complex into something accessible. It feels like progress. It is progress. But it also comes with a hidden cost: abstraction.

When companies package extremely sophisticated systems behind a clean UI and a chat box, they are not simplifying the problem, they are hiding it.

It’s like driving a car without knowing how an engine works. We are using it, but we are not building it, maintaining it, or even understanding its limitations.

And that’s where the illusion begins.

Because when everyone can “use AI”, a dangerous narrative emerges: that everyone can also build with AI. The gap between usage and creation gets blurred. Management hears “we can all use it” and translates that into “we can all build it internally”.

That’s like assuming that because our teams know how to use Google Sheets, they can now go ahead and build a database from scratch. "It's just rows and columns, guys".

Spoiler: we can’t.

How Temptation Impacts Businesses

This is a topic I am truly fascinated by, since I sincerely do not understand it.

Back in the days when I delved into Forex trading, many of the books that I read mentioned:

"The only emotion that is healthy to experience while trading is no emotion." - Trading in the Zone, Mark Douglas

So when it comes to valuing AI projects, most of the projections are based on expected adoption.

And yet, here we are watching entire organizations make emotionally driven decisions around AI. FOMO (Fear of Missing Out) becomes a line item in strategic roadmaps.

Companies see competitors announcing “AI-powered features” and suddenly there is an internal urgency to “do something with AI”. No clear problem definition. No understanding of where value is created. Just movement for the sake of not being left behind.

This is where things start to break.

Because AI projects are being greenlit not as engineering initiatives, but as branding exercises. Teams are assembled with titles like “AI Lead”, “AI Engineer”, or “AI Architect”, without a clear integration into existing software development practices.

The Unfortunate Evolution of Language

When many people communicate in a certain way, it becomes "a new norm". This happens with slang, and it has spread to the corporate environment.

Words lose precision over time, and in technology, that is a problem.

“Doing AI” has become one of those phrases that sounds impressive but means absolutely nothing. It compresses a wide spectrum of disciplines:

  • Data Engineering
  • Machine Learning
  • Software Architecture
  • Backend Engineering
  • UX/UI
  • Security
  • Infrastructure and Operations

into a single, vague label.

And when language becomes that vague, thinking becomes vague.

If a manager says “we need to invest in AI”, what does that actually mean? Are we talking about building models? Integrating APIs? Improving data pipelines? Enhancing UX with AI-assisted features?

Without clarity, teams end up solving the wrong problems.

It’s like telling a construction crew “we need to improve housing” without specifying whether we need better foundations, stronger materials, or smarter layouts.

We don’t get innovation. We get confusion… with a budget.

We don't DO AI

Most of the "AI" we talk about (which is generally generative AI or the use of LLMs) is already done for us. But let’s call it what it is: dependency.

The vast majority of companies are not building models, they are consuming them. They are integrating APIs provided by a handful of organizations that have the infrastructure, talent, and capital to train these systems.

This is why only a handful of companies dedicate themselves to providing LLMs. If we took on the task of training our own LLMs, we would need a huge capital investment, not only in the bare metal needed to run the training, but in the army of people needed to gather and curate the data to feed into it.

Which means the actual challenge is not “doing AI”. It’s building reliable, scalable software systems that use AI components effectively.

That includes handling failures (because LLMs fail), validating outputs (because hallucinations exist), designing user experiences around uncertainty, and ensuring that the system behaves consistently under real-world conditions.
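The failure handling and output validation described above are ordinary defensive engineering. Here is a minimal Python sketch of the idea: the client class, function names, and the tiny "sentiment" schema are all hypothetical stand-ins, not any real provider's API.

```python
import json


class FlakyLLM:
    """Hypothetical stand-in for a real LLM client: it times out
    twice, then returns a well-formed JSON answer."""

    def __init__(self):
        self.calls = 0

    def complete(self, prompt: str) -> str:
        self.calls += 1
        if self.calls < 3:
            raise TimeoutError("upstream timeout")
        return '{"sentiment": "positive"}'


def ask_with_guardrails(client, prompt: str, retries: int = 3) -> dict:
    """Retry transient failures and validate the model's output
    against an expected schema before trusting it."""
    last_err = None
    for _ in range(retries):
        try:
            raw = client.complete(prompt)
            data = json.loads(raw)  # hallucinated prose fails to parse here
            if data.get("sentiment") in {"positive", "negative", "neutral"}:
                return data
            last_err = ValueError(f"unexpected payload: {raw!r}")
        except (TimeoutError, json.JSONDecodeError) as err:
            last_err = err
    raise RuntimeError("LLM component unavailable") from last_err


client = FlakyLLM()
result = ask_with_guardrails(client, "Classify: great product!")
```

Nothing here is machine learning: it is retries, parsing, and schema checks, which is exactly the point.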

In other words… software engineering.

Saying “we are doing AI” in this context is like saying “we are doing electricity” because our product uses a power outlet.

No. We are building something that uses electricity.

So what do we do?

The correct way to put it would be to say that we are developing software in which one of the components uses generative AI.

And that shift in wording is not just semantic... it’s operational.

Because once we frame it correctly, everything else falls into place:

  • We don’t need to reinvent our org chart.
  • We don’t need entirely new disciplines.
  • We don’t need to panic-hire “AI specialists” without a clear role.

What we need is strong software engineering practices.

We need backend engineers who understand how to integrate APIs reliably.
We need product managers who can define real problems worth solving.
We need data-aware teams that understand the limitations of probabilistic systems.

And yes, we might need specialists, but they should complement our existing structure, not replace it.

We need to think about generative AI as a new library in our tech stack, not as a new religion the company must convert to.

Because the moment we treat it like something mystical, we stop reasoning about it logically.

Final Words

We are not "doing AI."

We are building software. Sometimes good, sometimes bad, and occasionally... we are plugging in very powerful tools that we barely understand.

The danger is not in the technology itself. It’s in the way we frame it.

If management continues to treat AI as a separate domain that requires entirely new structures, roles, and strategies, we will keep seeing failed projects, wasted budgets, and disillusioned teams.

But if we ground it back into what it actually is (a component within a broader system), we might start making better decisions.

So maybe the real question is not:

“Are we doing AI?”

But rather:

“Do we actually understand what we are building?”

P.S. "Agentic AI" is just generative-AI-driven automation...