
December 5, 2024

Information

Three NTT papers accepted at ACML2024

NTT Corporation announced that three papers authored by its researchers have been accepted at the Asian Conference on Machine Learning (ACML2024), one of the leading conferences in machine learning. ACML2024 will be held in Hanoi, Vietnam from December 5 to 8, 2024. The accepted papers are as follows:

  1. One-Shot Machine Unlearning with Mnemonic Code
    1. Tomoya Yamashita (NTT Social Informatics Laboratories), Masanori Yamada (NTT Social Informatics Laboratories), Takeshi Shibata (NTT Communication Science Laboratories *)
    2. * Currently affiliated with NEC
    3. To address privacy and ethical concerns, machine unlearning, a technique for removing the influence of undesirable data from trained models, is attracting attention. Existing machine unlearning methods suffer from high computational costs. In this paper, we propose a fast forgetting method that uses random noise called mnemonic codes. Through experiments, we show that the method can be applied to large-scale models such as Vision Transformer, to which existing methods were difficult to apply.
  2. Toward Data Efficient Model Merging between Different Datasets without Performance Degradation
    1. Masanori Yamada (NTT Social Informatics Laboratories), Tomoya Yamashita (NTT Social Informatics Laboratories), Shinya Yamaguchi (NTT Computer & Data Science Laboratories), Daiki Chijiwa (NTT Computer & Data Science Laboratories)
    2. Model merging has gained attention as a technique for creating new models by combining multiple models into a single model. Current model merging methods can only be applied between fine-tuned models derived from the same pre-trained model. In this paper, we propose a model merging method that extends beyond fine-tuned models by taking advantage of the symmetry of neural networks.
  3. Analyzing Diffusion Models on Synthesizing Training Datasets
    1. Shin'ya Yamaguchi (NTT Computer & Data Science Laboratories)
    2. Diffusion models, a type of generative model, can learn the distribution of a dataset and synthesize high-quality samples, so the synthetic samples are expected to be usable as a dataset for training other models. However, it was unknown to what extent such a synthetic dataset reproduces the real dataset in terms of the performance of models trained on it. In this paper, through experimental analysis, we found that the sample efficiency of the synthetic dataset is inferior to that of the real dataset, and that this is caused by the reverse-diffusion process of the diffusion model.
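As a rough illustration of the neural-network symmetry that the second paper builds on (this is a minimal NumPy sketch of permutation symmetry in a one-hidden-layer network, not NTT's proposed method; all variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def merge_weights(w_a, w_b, alpha=0.5):
    """Naive model merging: element-wise interpolation of two weight matrices."""
    return alpha * w_a + (1 - alpha) * w_b

def permute_hidden_units(w_in, w_out, perm):
    """Permutation symmetry: reordering hidden units (rows of the input-side
    weights and columns of the output-side weights) leaves the network's
    input-output function unchanged."""
    return w_in[perm, :], w_out[:, perm]

# A tiny one-hidden-layer ReLU network: 3 inputs, 4 hidden units, 2 outputs.
w1_in = rng.standard_normal((4, 3))   # hidden x input
w1_out = rng.standard_normal((2, 4))  # output x hidden

# A second network with permuted hidden units computes the same function.
perm = rng.permutation(4)
w2_in, w2_out = permute_hidden_units(w1_in, w1_out, perm)

x = rng.standard_normal(3)
relu = lambda z: np.maximum(z, 0)
y1 = w1_out @ relu(w1_in @ x)
y2 = w2_out @ relu(w2_in @ x)
assert np.allclose(y1, y2)  # identical outputs despite different weight values

# Naively averaging the unaligned pair mixes mismatched hidden units and can
# destroy the function; symmetry-aware merging first aligns the permutations.
merged_in = merge_weights(w1_in, w2_in)
```

The takeaway is that two weight matrices can differ numerically yet represent the same function, which is why merging methods that account for this symmetry can succeed where plain weight averaging fails.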

NTT believes in resolving social issues through our business operations by applying technology for good. An innovative spirit has been part of our culture for over 150 years, making breakthroughs that enable a more naturally connected and sustainable world. NTT Research and Development shares insights, innovations and knowledge with NTT operating companies and partners to support new ideas and solutions. Around the world, our research laboratories focus on artificial intelligence, photonic networks, theoretical quantum physics, cryptography, health and medical informatics, smart data platforms and digital twin computing. As a top-five global technology and business solutions provider, our diverse teams deliver services to over 190 countries and regions. We serve over 75% of Fortune Global 100 companies and thousands of other clients and communities worldwide. For more information on NTT, visit https://www.rd.ntt/e/

Information is current as of the date of issue of the individual topics.
Please be advised that information may be outdated after that point.