December 2, 2025

Information

Three papers from NTT Laboratories have been accepted for publication at NeurIPS 2025

Three papers authored by NTT Laboratories have been accepted at NeurIPS 2025 (the 39th Annual Conference on Neural Information Processing Systems), to be held in San Diego, USA, from November 30 to December 7, 2025. NeurIPS is known as one of the most prestigious international conferences in the fields of neural networks and artificial intelligence, with an acceptance rate of 24.52% (21,575 papers submitted).

Abbreviated names of the laboratories:
HI: NTT Human Informatics Laboratories
CD: NTT Computer and Data Science Laboratories
CS: NTT Communication Science Laboratories

  1. Gaussian Processes for Shuffled Regression
    1. Masahiro Kohjima (HI)
    2. Shuffled regression is the problem of learning regression functions from shuffled data, where the correspondence between input features and target responses is unknown. This paper proposes a probabilistic model for shuffled regression called Gaussian Process Shuffled Regression (GPSR). By introducing Gaussian processes as a prior over regression functions in function space via the kernel function, GPSR can express a wide variety of functions in a nonparametric manner while quantifying the uncertainty of the prediction. A minimal illustration of the shuffled-data setting is given after this list.
  2. Enhancing Visual Prompting Through Expanded Transformation Space and Overfitting Mitigation
    1. Shohei Enomoto (CD)
    2. In this study, we achieved an improvement in visual prompting, a parameter-efficient fine-tuning method that adapts pre-trained recognition models to new tasks. Conventional methods suffered from two limitations, resulting in lower accuracy compared to full fine-tuning: limited expressive power from simple additive transformations and overfitting caused by increased parameters. To address these limitations, we propose a method that combines affine and color transformations, representative and computationally efficient image transformation techniques, with additive transformations to enhance expressive power, while effectively mitigating overfitting through data augmentation techniques. Experimental results show that our method achieves significantly higher performance than conventional methods, demonstrates improved robustness against distribution shifts, and exhibits enhanced transferability to models not seen during training. This work contributes to adapting large-scale AI models to practical environments with minimal computational resources. A sketch of such a combined visual prompt is given after this list.
  3. Revisiting 1-peer exponential graph for enhancing decentralized learning efficiency
    1. Kenta Niwa (CS), Yuki Takezawa (Kyoto University / OIST), Guoqiang Zhang (University of Exeter), W. Bastiaan Kleijn (Victoria University of Wellington)
    2. Decentralized learning, where a single AI model is trained using many machines in data centers, is an important topic in machine learning. In this work, we present novel communication patterns for decentralized learning that let machines flexibly change their peers while keeping communication balanced across all machines. We show that, after a limited number of rounds of information exchange, the model trained by our method converges to the average model as if it had been trained on all data gathered in one place. Experiments confirm that our approach enables faster and more accurate training of large AI models under limited communication. This technique provides a fundamental building block for efficiently training large-scale AI models with many computational nodes under limited communication rounds. The underlying 1-peer exponential graph schedule is sketched after this list.
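For the first paper, the following is a minimal sketch of the shuffled-regression setting only, not the GPSR inference procedure itself: a latent function is drawn from a Gaussian-process prior with an RBF kernel and the noisy responses are then permuted, so the learner observes inputs and responses whose pairing is unknown. The kernel length-scale, noise level, and data sizes are arbitrary choices for illustration.

```python
# Illustrative sketch of the shuffled-regression setting with a GP prior.
# This is NOT the paper's GPSR inference method, only the data-generating view.
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(x1, x2, length_scale=0.3, variance=1.0):
    """Squared-exponential (RBF) kernel matrix between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

n = 50
X = np.sort(rng.uniform(0.0, 1.0, size=n))       # observed input features
K = rbf_kernel(X, X) + 1e-8 * np.eye(n)          # GP prior covariance (with jitter)
f = rng.multivariate_normal(np.zeros(n), K)      # latent regression function values
y = f + 0.05 * rng.standard_normal(n)            # noisy responses

# In shuffled regression the pairing between X and y is unknown:
perm = rng.permutation(n)
y_shuffled = y[perm]                             # what the learner actually observes

# A model such as GPSR must recover the regression function from (X, y_shuffled)
# alone, without knowing which response belongs to which input.
print("inputs:", X[:5])
print("shuffled responses:", y_shuffled[:5])
```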
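For the second paper, the sketch below shows one way a visual prompt combining a learnable affine warp, a learnable per-channel color transform, and a learnable additive perturbation could be wired in front of a frozen backbone. The module structure, initial values, and the placeholder `backbone` are assumptions for illustration, not the authors' implementation, and the data-augmentation component used to mitigate overfitting is omitted.

```python
# Assumed sketch (not the paper's code): a visual prompt combining affine,
# color, and additive transformations applied before a frozen backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedVisualPrompt(nn.Module):
    def __init__(self, image_size=224):
        super().__init__()
        # Affine transform parameters, initialized to the identity warp.
        self.theta = nn.Parameter(torch.tensor([[1.0, 0.0, 0.0],
                                                [0.0, 1.0, 0.0]]))
        # Per-channel color scale and shift, initialized to the identity mapping.
        self.color_scale = nn.Parameter(torch.ones(3, 1, 1))
        self.color_shift = nn.Parameter(torch.zeros(3, 1, 1))
        # Additive perturbation (the conventional visual-prompt component).
        self.delta = nn.Parameter(torch.zeros(3, image_size, image_size))

    def forward(self, x):                                  # x: (B, 3, H, W)
        b = x.size(0)
        grid = F.affine_grid(self.theta.expand(b, -1, -1), x.shape, align_corners=False)
        x = F.grid_sample(x, grid, align_corners=False)    # affine warp
        x = x * self.color_scale + self.color_shift        # color transform
        return x + self.delta                              # additive prompt

# Usage with a frozen backbone (placeholder model for illustration only):
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 10))
for p in backbone.parameters():
    p.requires_grad_(False)                                # only the prompt is trained
prompt = CombinedVisualPrompt()
logits = backbone(prompt(torch.randn(2, 3, 224, 224)))
```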
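For the third paper, the standard 1-peer exponential graph that the work revisits can be written as a simple peer schedule: with n nodes (assumed here to be a power of two), node i exchanges with node (i + 2^(t mod log2 n)) mod n at round t, so every node sends to and receives from exactly one peer per round. The sketch below only generates this background schedule; it does not implement the paper's proposed communication patterns.

```python
# Sketch of the standard 1-peer exponential graph schedule (background only;
# assumes the node count is a power of two and does not implement the paper's
# proposed communication patterns).
import math

def one_peer_exponential_schedule(num_nodes, num_rounds):
    """For each round, return {node: peer} where every node talks to exactly one peer."""
    tau = int(math.log2(num_nodes))          # distinct hop distances: 1, 2, 4, ...
    schedule = []
    for t in range(num_rounds):
        hop = 2 ** (t % tau)
        schedule.append({i: (i + hop) % num_nodes for i in range(num_nodes)})
    return schedule

if __name__ == "__main__":
    for t, peers in enumerate(one_peer_exponential_schedule(num_nodes=8, num_rounds=3)):
        # Each round, every node's in-degree and out-degree are both 1,
        # so communication load stays balanced across machines.
        print(f"round {t}: {peers}")
```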

Information is current as of the date of issue of the individual topics.
Please be advised that information may be outdated after that point.