Whether we are reading the technical sections of trade publications or flipping through glossy lifestyle magazines, one topic appears more and more often: artificial intelligence.
In particular, there is a huge focus on generative AI, a subset of artificial intelligence that uses machine learning models to generate new content that could convincingly have been created by a human. This type of AI can create high-quality, unique output based on the patterns it learns from input data, with generated content in various forms, from articles, poetry and stories to artwork and design elements, music and even video. The release of ChatGPT last year accelerated the boom in interest and underscored the importance of large language models to natural language processing and the AI field.
NTT is on the verge of introducing its own, unique generative AI technology.
The NTT Group has been conducting research on natural language processing for over 40 years, developing large-scale language models that build on its foundational work in Japanese language modeling. In November 2020, we released a free large language model with 1.6 billion parameters, trained on extensive web dialogue data and high-quality dialogue data developed through our research on Japanese dialogue systems.
We believe that generative AI, particularly large language models, will be a leading technology in the AI field for the foreseeable future. Given our extensive research and expertise in natural language processing, we see the rising demand for generative AI as an opportunity to utilize our technological capabilities.
However, we are not going to simply attempt to copy the gigantic language models created by other leading technology companies. Instead, we plan to differentiate ourselves by developing generative AI that has a purpose.
We are looking to help create a cooperative society by fostering collective intelligence from small AI systems, each possessing unique features, for the long-term wellbeing of people and society. We envision a future where AI and humans work together, and we believe that the mutual growth of humans and AI will lead to a more sustainable, better world.
We aim to strengthen the business use of language-based AI by enhancing its core qualities: lightweight operation, reliability, ethical safeguards, and customizability. We will also work to blend language-based AI with the real world through the embodiment of AI. In other words, artificial intelligence that can actually make things happen in the world outside of computer algorithms.
Bigger isn't necessarily better; smarter is better.
We are not focused on getting improved performance by simply increasing the number of parameters. Instead, we aim to limit parameters to between 7 and 30 billion and enhance customizability, reliability, and AI ethics by providing additional learning and refining responses to better align with what humans want and need.
By decreasing the parameter size compared to well-known generative AI systems, such as GPT-3 and GPT-4, we aim to reduce GPU requirements during both system learning and operation. We also plan to introduce the latest findings on model architecture and take advantage of our unique base of knowledge. It wouldn't be the first time for us to do this; our past successes include ultralight speech recognition for smartphones and power-saving, efficient media CPU processing technology.
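To make the scale difference concrete, here is a rough, illustrative calculation of the memory needed just to hold model weights at 16-bit precision. The model names and sizes are public figures used for comparison, not NTT specifications, and the sketch ignores activations, optimizer states, and inference overhead, which add substantially to real requirements:

```python
def weight_memory_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory (GiB) to hold model weights alone.

    bytes_per_param=2 corresponds to 16-bit (fp16/bf16) weights.
    """
    return n_params * bytes_per_param / 2**30

# Illustrative sizes: the 7B-30B range discussed above vs. GPT-3's
# publicly reported 175B parameters.
for name, n_params in [("7B", 7e9), ("30B", 30e9), ("GPT-3 (175B)", 175e9)]:
    print(f"{name}: ~{weight_memory_gib(n_params):.0f} GiB of weights at 16-bit")
```

Even by this crude measure, a 7-billion-parameter model needs roughly 13 GiB for its weights, versus over 300 GiB for a 175-billion-parameter model, which is why smaller models can run on far fewer GPUs.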
The IOWN (Innovative Optical and Wireless Network) initiative will help with this process. Our network and information processing infrastructure features ultra-high capacity, ultra-low latency, and ultra-low power consumption. This can be beneficial for training and using large language models.
So, when is it happening?
Over the next year, we aim to launch language platform services based on large language models with an efficient user interface, strong security, and excellent reliability. After trialing internally within the NTT Group, we will roll out services to the public. For business customers, we will provide services such as AI-based digital transformation. For consumers, we plan to provide application services using large language models for smartphones and smartwatches.
The rapid development of generative AI has led many to worry about the spread of disinformation, the promotion of discrimination, information leaks, and misuse for cybercrime. We understand the importance of those issues and are engaged in research and development to address them, with a particular focus on how learning algorithms and model architectures can be created to prevent incorrect information output and inappropriate speech. We want our users to be secure and confident in the use of our large language model-based services.
Artificial intelligence is the present and the future. There are few technologies so vital to the future of society, with so much potential for good but which could cause harm if developed incorrectly. We intend to get it right the first time.
NTT: Innovating the Future of AI
Daniel O'Connor joined the NTT Group in 1999 when he began work as the Public Relations Manager of NTT Europe. While in London, he liaised with the local press, created the company's intranet site, wrote technical copy for industry magazines and managed exhibition stands from initial design to finished displays.
Later seconded to the headquarters of NTT Communications in Tokyo, he contributed to the company's first-ever winning of global telecoms awards and the digitalisation of internal company information exchange.
Since 2015, Daniel has created content for the Group's Global Leadership Institute and the One NTT Network, and is currently working with NTT R&D teams to grow public understanding of the cutting-edge research undertaken by the NTT Group.