Are we being driven towards extinction through the use of AI?

January 12, 2025

I know the title of this post may sound like something a conspiracy theorist would say.

However, even if we are being driven to extinction through the use of AI, I don’t think it’s being done intentionally. That makes it anything but a conspiracy.

The world around us is largely indifferent to our desire for it to evolve in any particular way. That applies to climate change, pandemics, poverty, wars… and now we have AI.

At this point, all the big companies, such as Microsoft, Google, OpenAI, and Facebook, have been locked into a form of competition: whose AI systems are going to be better, smarter, faster, and so on. None of them wants to back down yet, at least not until one of them experiences a huge failure. Which, surprisingly, has not happened so far.

But it may be coming. “Agentic AI” is the next big thing everyone is looking into. Gartner is calling it the top technology trend, so, whether we want it or not, we’ll have to live through the consequences and see how it works out. You can read about what things look like (including what the risks are) here, for example:

(It’s good reading material, though definitely not the only piece of its kind.)

However, there seems to be a conceptual problem with Agentic AI: its ultimate purpose is to replace humans, and that’s a lose/lose situation.

If this ends up not working out and our expectations of AI turn out to have been greatly exaggerated, we will have spent a lot of time and resources on yet another technology “bubble”. So, if and when that bubble bursts, what is going to happen to all the organizations that were betting on it? (For example: https://www.cxtoday.com/data-analytics/microsoft-ceo-ai-agents-will-transform-saas-as-we-know-it/)

On the other hand, what if it works out? The measure of success in the Agentic AI field seems to be tied to the agents’ ability to independently perform tasks that are normally reserved for human beings. But consider: once you can reliably delegate some of those tasks to AI agents, you can let go of all the people who used to be responsible for performing them. In which case all those interpreters, product support folks, software developers, travel agents, financial planners, taxi drivers, etc. may have to find some other way of making money. Congrats to them!
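
To make “independently perform tasks” a bit more concrete, below is a minimal sketch of the loop most agentic systems run: a model picks the next action, the runtime executes it, and the result is fed back in until the model decides it’s done. Everything here is a stand-in (the “model” is a hard-coded stub and the tool returns made-up data); a real system would call an LLM API at the decision step.

```python
# Minimal agent loop: the model chooses an action, the runtime
# executes it, and the observation is fed back to the model.
# The model is a hard-coded stub here; real systems call an LLM.

def fake_model(task: str, history: list[str]) -> dict:
    """Stand-in for an LLM deciding the next action."""
    if not history:
        return {"action": "search_flights", "args": {"route": "SEA-LHR"}}
    return {"action": "finish", "args": {"answer": history[-1]}}

def search_flights(route: str) -> str:
    """A fake tool; a real one would hit a booking API."""
    return f"Cheapest {route} fare found: $612 (made-up data)"

TOOLS = {"search_flights": search_flights}

def run_agent(task: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        decision = fake_model(task, history)
        if decision["action"] == "finish":
            return decision["args"]["answer"]
        history.append(TOOLS[decision["action"]](**decision["args"]))
    return "gave up"

print(run_agent("find me the cheapest flight to London"))
```

Notice that the human only appears at the two ends of that loop: handing over the task and receiving the answer. Everything in between is exactly the work the people listed above used to do.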

And here’s the sad part: one might think that AI specialists, at least, are going to be in demand. But that depends on how quickly an AI agent is developed to replace those specialists, too.

Besides, applied AI is very different from the science behind it. Just look at all the training material Microsoft has come up with, for example, the AI learning hub on Microsoft Learn (“Start your AI learning journey, and build practical AI skills to use right away”).

There is a lot there, but if you start digging into it, you’ll realize that:

  • At the core, there are AI models. They typically require a lot of resources to train, so, for the most part, we have no control over the more complex models
  • Beyond that, it’s mostly a matter of which models are available and which tools/APIs we can use to utilize them (see the sketch after this list)
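
In practice, “utilizing a model” often comes down to a handful of lines against a vendor’s API. Here is a sketch using the OpenAI Python SDK (the model name and the prompt are purely illustrative; other vendors’ SDKs look much the same):

```python
# Consuming a hosted model: pick a model name, send a prompt, read
# the reply. Everything about how the answer is produced happens on
# the vendor's side of this call.
from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whatever model the vendor offers
    messages=[
        {"role": "user", "content": "Summarize what agentic AI is in one sentence."},
    ],
)
print(response.choices[0].message.content)
```

Notice how little of this is actually ours: the model name, the prompt, and whatever we do with the reply.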

Fundamentally, though, we have no control over how those models operate, what they do, or how they do it. So yes, we can utilize the models, but if we are not satisfied with the outcomes, what do we even do? Of course, we can try fine-tuning the models, implementing RAG, or doing some prompt engineering (which is not really “engineering”, since engineering is typically based on science, and prompt engineering is all about trial and error).
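
RAG is a good illustration of what working around the model looks like: we don’t change the model at all, we just retrieve relevant text and put it into the prompt. Below is a toy sketch, with bag-of-words similarity standing in for a real embedding model and vector store, and made-up documents:

```python
# Toy RAG: retrieve the most relevant document for the question and
# prepend it to the prompt, so the model answers from that context.
# Real systems use embedding models and a vector database instead of
# this bag-of-words cosine similarity, but the shape is the same.
import math
from collections import Counter

DOCS = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday, 9am to 5pm.",
    "Premium accounts include priority support.",
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question: str) -> str:
    q = vectorize(question)
    best = max(DOCS, key=lambda d: cosine(q, vectorize(d)))
    return f"Answer using only this context:\n{best}\n\nQuestion: {question}"

print(build_prompt("How long do refunds take?"))
```

Even then, whether the model actually answers from the supplied context is not something we fully control.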

The outcomes of such projects won’t necessarily be predictable, though, so in many cases it might not even be feasible to start them.

And that’s very different from how we used to work in IT, where we’d set some goals and start moving towards them. We might have missed the estimates, but at least we knew that something was achievable once we had allocated enough time and resources.

So, then, other than the big corporations that can afford to develop new models and sell paid access to their APIs, who is actually going to win from possible advances in “applied AI”, and how?

Category: AI
