
AI chatbots are bad at planning, but this could soon change

We might soon see AI step up to the next level, with upgrades developed by OpenAI and Meta. OpenAI’s GPT-5 will be the new “engine” within the AI chatbot ChatGPT, while Meta’s upgrade will be named Llama 3. Among other things, the latter powers chatbots on Meta’s social media platforms.

Author


  • Nello Cristianini

    Professor of Artificial Intelligence, University of Bath

Statements to the media by executives at both OpenAI and Meta suggest that some ability to plan ahead will be incorporated into these upgraded systems. But how exactly will this innovation change the capabilities of AI chatbots?

Imagine you are driving from home to work and want to select the best route – that is, the sequence of choices that is optimal in some sense, based on cost or timing, for example. An AI system would be perfectly capable of choosing the better of two existing routes. But it would be a far more difficult task for it to generate the optimal route from scratch.

A route ultimately consists of a sequence of different choices. However, making individual decisions in isolation is not likely to lead to an optimal overall solution.

For instance, sometimes you have to make a little sacrifice at the start, to reap some benefit later on: maybe joining a slow queue to enter the motorway, in order to move faster later on. This is the essence of a planning problem, a classic topic in artificial intelligence.
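To make the motorway example concrete, here is a minimal sketch in Python, with a made-up road network and travel times. A greedy driver takes the cheapest next road at every junction; a planner (here, Dijkstra’s algorithm) weighs whole routes, and so is willing to join the slow queue first. All names and costs are illustrative, not from any real routing system.

```python
import heapq

# Toy road network: edge costs in minutes (hypothetical values).
# Joining the slow queue (home -> ramp, 5 min) unlocks the fast
# motorway (ramp -> work, 2 min); the cheapest first choice is worse overall.
graph = {
    "home": {"side_street": 1, "ramp": 5},
    "side_street": {"work": 10},
    "ramp": {"work": 2},
    "work": {},
}

def greedy_route(graph, start, goal):
    """Pick the cheapest next edge at every step, with no lookahead."""
    node, cost, path = start, 0, [start]
    while node != goal:
        nxt = min(graph[node], key=graph[node].get)
        cost += graph[node][nxt]
        node = nxt
        path.append(node)
    return path, cost

def planned_route(graph, start, goal):
    """Dijkstra's algorithm: compares whole routes, not single steps."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))

print(greedy_route(graph, "home", "work"))   # (['home', 'side_street', 'work'], 11)
print(planned_route(graph, "home", "work"))  # (['home', 'ramp', 'work'], 7)
```

The greedy route costs 11 minutes; the planned route, despite its worse first step, costs 7.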

There are parallels here with board games such as Go: the outcome of a match depends on the overall sequence of moves, and some moves are aimed at unlocking opportunities that can be exploited later on.

The AI company Google DeepMind developed a system called AlphaGo, based on an innovative approach to planning. It was not only able to explore a tree of available options, but also to improve on that ability with experience.
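AlphaGo’s search is vastly more sophisticated, but the core idea of exploring a tree of available options can be sketched on a toy game. The example below (a hypothetical illustration, not DeepMind’s method) exhaustively searches the move tree of a simple game of Nim, where players alternately take one or two stones and whoever takes the last stone wins.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def winning_move(stones):
    """Search the game tree: return a move (1 or 2) that forces a win,
    or None if every move leaves the opponent a winning reply."""
    for take in (1, 2):
        if take == stones:
            return take  # taking all remaining stones wins immediately
        if take < stones and winning_move(stones - take) is None:
            return take  # opponent is left with no winning move
    return None

print(winning_move(4))  # 1: take one stone, leaving the opponent a losing 3
print(winning_move(3))  # None: with best play, the player to move loses
```

Each call explores the consequences of a move all the way to the end of the game before committing to it — planning, in miniature.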

Of course, the real point is not about finding optimal routes for driving or playing games. The technology that powers products such as ChatGPT and Llama 3 is called a large language model (LLM). What is at stake here is providing these AI systems with the ability to consider the long-term consequences of their actions. This skill is also a very general one, so it potentially unlocks other capabilities for LLMs.

Large language models are designed to predict the next word in a given sequence of words. But in practice, they are used to predict long series of words, such as the answers to questions from human users.

This is currently done by adding one word to the answer, then another word and so on, thereby extending the initial sequence. This is known in the jargon as autoregression. However, LLMs can sometimes paint themselves into corners that are impossible to get out of.
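The corner-painting problem can be shown with a toy autoregressive generator. The “model” here is just a hand-written table of next-word scores, purely hypothetical; greedy decoding adds the single most likely word at each step, with no lookahead, and can commit to a start that has no good continuation.

```python
# Hypothetical next-word scores for a tiny vocabulary.
# "<s>" marks the start of a sentence, "</s>" the end.
probs = {
    "<s>":    {"the": 0.6, "a": 0.4},
    "the":    {"cat": 0.9, "dog": 0.1},
    "a":      {"dog": 1.0},
    "cat":    {},                       # dead end: no continuation scored
    "dog":    {"barked": 1.0},
    "barked": {"</s>": 1.0},
}

def greedy_decode(start="<s>", max_len=6):
    """Extend the sequence one most-likely word at a time."""
    seq = [start]
    while seq[-1] != "</s>" and len(seq) < max_len:
        options = probs[seq[-1]]
        if not options:
            break                       # painted into a corner
        seq.append(max(options, key=options.get))
    return seq

print(greedy_decode())  # ['<s>', 'the', 'cat'] -- stuck before reaching </s>
```

A decoder that planned ahead would notice that starting with the lower-scoring “a” leads to the only complete sentence, “a dog barked”.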

Expected development

An important goal for LLM designers has been to improve deep neural networks, the type of algorithms – or sets of rules – that sit behind the models. Deep neural networks were originally inspired by the nervous system. They can improve at what they do through a process called training, where they are exposed to large sets of data.
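Training, stripped to its bare minimum, looks like the sketch below: a “network” with a single weight is nudged repeatedly until its predictions fit some example data. This is orders of magnitude simpler than training an LLM, and the data are made up, but the principle of learning from exposure to examples is the same.

```python
# Made-up (x, y) training pairs; the underlying rule is y = 2 * x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w, lr = 0.0, 0.05  # initial weight and learning rate
for _ in range(200):
    # Gradient of the mean squared error of y = w * x with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge the weight to reduce the error

print(round(w, 3))  # converges to 2.0, the rule hidden in the data
```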

The wait for LLMs that can plan might be over, according to the comments by OpenAI and Meta executives. However, this comes as no surprise to AI researchers, who have been expecting such a development for some time.

Late last year, OpenAI’s CEO Sam Altman was removed from his post, then swiftly reinstated. At the time, the drama was rumoured to have involved the company’s development of an algorithm called Q*, although this explanation has not been confirmed. It’s not clear what Q* does, but at the time the name rang bells with AI researchers because it echoed the names of existing methods used in planning, such as Q-learning and A* search.

Commenting on those rumours, Meta’s head of AI, Yann LeCun, said that replacing the process of autoregression with planning in LLMs was challenging, but that almost every top lab was working on it. He also thought it was likely that Q* was OpenAI’s attempt to incorporate planning into its LLMs.

LeCun was onto something in what he said about the top labs, because Google DeepMind recently filed a patent application that hinted at planning capabilities.

Intriguingly, the listed inventors were members of the AlphaGo team. The method described in the application looks much like the one that guides AlphaGo towards its goals. It would also be compatible with the current neural network architectures used by large language models.

That brings us to the comments by executives at Meta and OpenAI about the capabilities of their upgrades. Joelle Pineau, vice-president of AI research at Meta, said: “We are hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan . . . to have memory.”

If that works, we might well see progress on planning and reasoning, moving from simple, step-by-step word generation to planning entire conversations, or even negotiations. Then we might really see AI step up to the next level.

The Conversation

Nello Cristianini does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Courtesy of The Conversation.