As humans, let's face it, our thinking is a bit messy. When we try to decide something, from what to order at the restaurant to how to put together a business plan, it rarely comes down to a simple yes or no. It's usually more like “Yes, but,” or “Maybe. On the other hand,” followed by several contrasting thoughts that need to be considered. We think about cost versus quality, speed versus care. We talk things through, come back to what we were thinking earlier, change our minds, and sometimes have to take a beat before coming back to the decision. Messy thinking is what makes us human, though.
But computers? We tend to think of them as giving binary answers: clear and certain. Computers are incredible at processing large volumes of information quickly and consistently, executing clear instructions, and working toward a single objective. They're far less good, however, at making decisions in situations that resemble real human problem solving, where multiple, perhaps slightly conflicting perspectives need to be heard.
We can see it with today’s AI systems. Large language models are able to generate persuasive-sounding text and come up with seemingly plausible ideas, but many AI systems still look at complex tasks as a set of isolated subtasks. One agent or model handles one piece, another handles the next, and the results are stitched together at the end. That may be fine for routine or formulaic work, but not so good when joined-up thinking is required.
That's what NTT’s recent research into autonomous cooperative AI agents is all about. Instead of telling AI systems to work alone and then merge their outputs, the Service Innovation Laboratory Group is looking at what happens when multiple agents are allowed to behave more like a human team. Each agent is responsible for part of a task, but they can also talk to one another, understand what the others are trying to achieve, and gradually work together on what the final outcome should be.
Memory is a key part of NTT's research. Humans don't have to start from scratch every time we think or talk about something; we remember earlier conversations, who said what, which ideas were useful, and which things haven't yet been resolved. In what NTT researchers call the ACT (Autonomous Cooperative Team) framework, AI agents are designed to do something similar. They can retain both episodic memory, which captures summaries of specific past interactions, and semantic memory, which stores abstracted knowledge derived from those experiences. Semantic memory is organized hierarchically so that it can be reused in future tasks rather than discarded once a single answer is produced.
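To make the two memory types concrete, here is a minimal sketch in Python. This is not NTT's implementation; the class names, methods, and the dict-based hierarchy are all illustrative assumptions, showing only the distinction the article draws: episodic memory as an ordered log of interaction summaries, semantic memory as abstracted knowledge filed under a topic hierarchy for later reuse.

```python
from dataclasses import dataclass, field

@dataclass
class EpisodicMemory:
    """Ordered summaries of specific past interactions (hypothetical)."""
    episodes: list = field(default_factory=list)

    def record(self, summary: str) -> None:
        self.episodes.append(summary)

@dataclass
class SemanticMemory:
    """Abstracted knowledge organized as a topic hierarchy (hypothetical)."""
    tree: dict = field(default_factory=dict)

    def store(self, path: list, fact: str) -> None:
        # Walk/create the hierarchy, then append the fact at the leaf.
        node = self.tree
        for key in path[:-1]:
            node = node.setdefault(key, {})
        node.setdefault(path[-1], []).append(fact)

    def recall(self, path: list) -> list:
        # Follow the same path; return [] if nothing was stored there.
        node = self.tree
        for key in path[:-1]:
            node = node.get(key, {})
        return node.get(path[-1], []) if isinstance(node, dict) else []
```

The hierarchical `store`/`recall` pair is what lets knowledge abstracted from one task (say, tea flavor pairings) be found again when a later task touches the same topic, instead of vanishing with the conversation that produced it.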
The AI collaboration is also structured in a "human" way: agents begin with a team meeting, sharing their initial views and working out a common understanding of the task they have to perform. When they come up against something about which they are uncertain or that calls for expertise they don't have, they pause to consult specialist agents and fill in the gaps before returning to the group discussion. Only then do they move into a production phase, where individual proposals are brought together and checked for completeness and internal consistency.
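The three phases above can be sketched as a simple control loop. Again, this is a toy illustration, not NTT's code: the `Agent` class, its canned proposals (standing in for a language model), and the completeness check are all assumptions made for the example; the consultation phase is deliberately elided.

```python
class Agent:
    """Minimal agent with a role and a canned proposal (stand-in for an LLM)."""
    def __init__(self, name: str, role: str, proposal: str):
        self.name = name
        self.role = role
        self.proposal = proposal

    def initial_view(self, task: str) -> str:
        return f"{self.name} ({self.role}) on '{task}': focus on {self.role}"

    def propose(self) -> dict:
        return {self.role: self.proposal}

def run_team(agents: list, task: str):
    """Hypothetical phase loop: team meeting, consultation, production."""
    # Phase 1: team meeting — every agent shares an initial view of the task.
    views = [a.initial_view(task) for a in agents]

    # Phase 2: consultation with specialist agents would go here
    # (elided in this sketch).

    # Phase 3: production — merge individual proposals into one plan and
    # check completeness: every role heard in the meeting must appear.
    plan = {}
    for agent in agents:
        plan.update(agent.propose())
    complete = all(a.role in plan for a in agents)
    return views, plan, complete
```

The point of the structure is the ordering: shared understanding is built before any agent commits to a proposal, and the merged result is checked against what the whole team discussed, rather than being a blind concatenation of outputs.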
In test evaluations, teams of agents working together were able to produce detailed and cohesive business plans, design proposals, and research outputs. Plans included realistic structures, constraints, and trade-offs. In other words, a very human-feeling "Yes, but" rather than "Yes/No."
Here's a specific example: tea.
Agents were asked to develop a complete business concept based around first choosing, then marketing a brand of tea. As they worked, different agents focused on their own concerns, such as flavor profiles, customer preferences, pricing, and sustainability. Working together, they converged on a single idea that balanced a number of competing viewpoints, defining a target audience, selecting specific tea blends and citrus flavor combinations, outlining tasting experiences and workshops, and settling on practical revenue options such as direct sales and subscriptions.
It was a project focused on tea, but it could have been anything; the point was to test the way the system weighed multiple, occasionally conflicting priorities and turned them into a coherent, workable plan.
We often read about AI becoming faster or more fluent, but in this case the goal wasn't greater speed. It was about finding ways to reflect how people actually think together. Real-world problems involve negotiation, context, and compromise. The NTT Service Innovation Laboratory Group gave AI agents ways to share memory, question one another, and adjust their understanding as a group. Just like humans.
For further information, please see this link:
https://group.ntt/en/newsrelease/2025/08/08/250808b.html
If you have any questions on the content of this article, please contact:
Public Relations
NTT Service Innovation Laboratory Group
https://tools.group.ntt/en/news/contact/index.php
Daniel O'Connor joined the NTT Group in 1999 when he began work as the Public Relations Manager of NTT Europe. While in London, he liaised with the local press, created the company's intranet site, wrote technical copy for industry magazines and managed exhibition stands from initial design to finished displays.
Later seconded to the headquarters of NTT Communications in Tokyo, he contributed to the company's first-ever winning of global telecoms awards and the digitalisation of internal company information exchange.
Since 2015 Daniel has created content for the Group's Global Leadership Institute, the One NTT Network and is currently working with NTT R&D teams to grow public understanding of the cutting-edge research undertaken by the NTT Group.