From chatbots to Agentic AI: The rise of autonomous collaborators

When ChatGPT burst onto the scene in late 2022, it dazzled the world with its ability to generate human-like text, code and conversation.
But it very quickly revealed its weaknesses: hallucinations, poor performance on mathematics, brittleness and decision-making that no one could fully explain.
The focus today is on solving these weaknesses, and the road there runs through a new wave of AI research that zeroes in on reasoning: the ability to analyze, plan and make better decisions.
The first white paper in our Advancing AI series, AI Reasoning: A Vision Forward for Networks, argues that this shift is not just a technical upgrade but rather a step toward AI systems that benefit the telecommunications industry and beyond, and which both businesses and society can truly rely on.
Why Reasoning Matters
Reasoning is what turns useful AI into trustworthy AI. Instead of just recognizing patterns in data, reasoning AI can weigh options, test ideas and explain its choices.
The chatbot that launched in 2022 was built on a Large Language Model (LLM), a form of generative AI capable of remarkable things: answering questions in prose, creating poems and storylines, even illustrating new images. But reasoning is what will let AI “think” through a problem posed to it and arrive at a well-founded answer.
When you are casually using ChatGPT, false and misleading answers are merely inconvenient. In a business setting, they can be costly and risky.
For telecommunications networks, stronger reasoning could forge a path to self-healing systems that automatically fix outages, smarter resource allocation, and customer service that adapts intelligently in real time. On the business side, it could support everything from supply chain optimization to financial risk analysis.
In short, better reasoning leads to better decisions and a stronger foundation for autonomous systems.
Most chatbots are assistants, not collaborators, because they rely on pattern recognition: early AI models appeared clever, but they could only respond, not act.
Now, the field is moving toward AI agents that can act, make plans and even set their own goals. Reasoning is our gateway toward agentic AI, in which machines will be able to work more autonomously without human direction. It puts us on the threshold where AI can not only “say” things but “do” them, too.
In other words, reasoning transforms AI from a passive assistant into an intelligent partner.
Smarter AI, Not Just Bigger AI
Recent breakthroughs have shown that bigger isn’t always better. DeepSeek-R1, for instance, stunned the industry earlier this year by rivaling the best models at a fraction of the cost. Similarly, small language models (SLMs) with clever reasoning strategies have outperformed giants on complex math tasks.
Our white paper outlines three key scaling approaches shaping today’s reasoning models. The first is pre-training scaling, which makes models larger with more data and compute power. The second is post-training scaling, which refines models through techniques like reinforcement learning from human feedback. Finally, there is test-time scaling, which extends reasoning via methods like chain-of-thought prompting and majority voting; a minimal sketch of this last idea follows below.
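To make that third approach concrete, here is a minimal Python sketch of test-time scaling via majority voting (often called self-consistency): the same question is answered several times with independent chain-of-thought samples, and the most common final answer wins. The `noisy_model` function is a hypothetical stand-in for a real LLM call, not something from the white paper; any model client that returns a final answer string could be plugged in.

```python
# Minimal sketch of test-time scaling via majority voting ("self-consistency").
# `sample_answer` represents one stochastic chain-of-thought call to an LLM.
from collections import Counter
from typing import Callable
import random

def self_consistency(sample_answer: Callable[[str], str],
                     question: str,
                     n_samples: int = 5) -> str:
    """Draw n_samples independent answers and return the most common one."""
    answers = [sample_answer(question) for _ in range(n_samples)]
    winner, count = Counter(answers).most_common(1)[0]
    print(f"votes: {dict(Counter(answers))} -> chosen: {winner} ({count}/{n_samples})")
    return winner

# Hypothetical stand-in for an LLM: usually reasons its way to "42", sometimes slips.
def noisy_model(question: str) -> str:
    return random.choices(["42", "41"], weights=[0.7, 0.3])[0]

if __name__ == "__main__":
    random.seed(0)
    print(self_consistency(noisy_model, "What is 6 x 7?", n_samples=7))
```

The trade-off is the essence of test-time scaling: each extra sample spends more inference compute at answer time in exchange for a more reliable result.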
Beyond scaling, researchers are also exploring Large Concept Models (LCMs) that operate at a higher semantic level, modeling concepts non-linguistically in latent space, as well as new architectures that either augment transformers or are entirely novel.
While many current reasoning models perform exceptionally well against established knowledge benchmarks, such as Massive Multitask Language Understanding (MMLU), they fare poorly against ‘intelligence’ benchmarks such as the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) or Humanity’s Last Exam (HLE).
Improvements in reasoning will have a major impact on business. In the telecom and networking sphere, they should enable autonomous troubleshooting, resource allocation and energy optimization.
To be sure, plenty of challenges remain, particularly regarding trust, transparency, adaptability and efficiency. AI must reduce hallucinations and errors, its decisions need to be explainable rather than hidden in a “black box,” it must handle new data and real-world surprises, and all of this must be achieved without skyrocketing costs or energy demands.
Toward AI Wisdom
In this evolution from chatbots to agents to agentic AI and beyond, the goal isn’t just smarter tools but what we call AI wisdom. These are systems that don’t just calculate the fastest route, but also consider context, ethics and long-term impact. These are networks that self-operate, meaning they don’t just connect but actively manage and optimize themselves.
If successful, this evolution could transform how networks are managed, how businesses make decisions and how society solves problems. The key, we argue, is to keep the following principles in mind: universality, flexibility, explainability and scalability.
For business leaders, the message is clear. The potential is huge for a new wave of AI that won’t just be about faster answers but about better decisions. Those who prepare to harness it will gain a powerful advantage in a world that increasingly depends on intelligent, autonomous and reliable systems.