
March 27, 2026

Information

Two papers by NTT laboratories accepted at EACL2026, a leading international conference in the field of natural language processing

Two papers authored by NTT laboratories were accepted at EACL2026 (the 19th Conference of the European Chapter of the Association for Computational Linguistics), to be held in Rabat, Morocco, from March 24 to 29, 2026. EACL is a leading international conference in computational linguistics and natural language processing, where the latest research on language understanding, machine translation, and information extraction is presented.

The following two papers were accepted.

Abbreviated names of the laboratories:
HI: NTT Human Informatics Laboratories
CS: NTT Communication Science Laboratories

■Let’s Put Ourselves in Sally’s Shoes: Shoes-of-Others Prefilling Improves Theory of Mind in Large Language Models

Kazutoshi Shinoda (HI), Nobukatsu Hojo (HI), Kyosuke Nishida (HI), Yoshihiro Yamazaki (HI), Keita Suzuki (HI), Hiroaki Sugiyama (HI), Kuniko Saito (HI)

We proposed a simple method to enhance large language models’ ability to infer others’ beliefs and emotions (Theory of Mind) without additional training. The method works by adding a short phrase such as “Let’s put ourselves in A’s shoes” at the beginning of the model’s output. Experiments on conversational and narrative tasks showed consistent improvements in understanding various mental states, demonstrating that this lightweight approach effectively promotes perspective-taking reasoning.
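The prefilling idea described above can be sketched as a small prompt-construction helper. This is an illustrative assumption about the mechanism, not the paper's actual implementation: the function name, template, and phrasing are hypothetical, and in practice the prefilled text would be supplied as the start of the assistant turn to an LLM API that supports response prefilling.

```python
def build_prefilled_prompt(question: str, character: str) -> str:
    """Build a chat-style prompt whose assistant turn already begins
    with a perspective-taking phrase, so the model continues from it
    rather than starting its answer from scratch."""
    # The short phrase prepended to the model's output (per the summary above).
    prefill = f"Let's put ourselves in {character}'s shoes."
    return f"User: {question}\nAssistant: {prefill}"

# Example with the classic Sally-Anne false-belief scenario.
prompt = build_prefilled_prompt(
    "Sally puts her marble in the basket and leaves. "
    "Anne moves it to the box. Where will Sally look for her marble?",
    "Sally",
)
print(prompt)
```

Because the phrase is generated as the beginning of the model's own response, no fine-tuning or extra training data is required.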

■Hacking Neural Evaluation Metrics with Single Hub Text

Hiroyuki Deguchi (CS), Katsuki Chousa (CS), Yusuke Sakai (Nara Institute of Science and Technology)

Reliable evaluation of translation quality is crucial for improving machine translation. We identify and analyze input cases that can trigger unexpected behavior in neural network-based evaluation metrics, which have recently become widely used. Through this study, we reveal potential vulnerabilities and limitations of existing evaluation metrics. These findings are expected to contribute to improving the reliability of evaluation metrics, addressing their vulnerabilities, and developing robust metrics.

Information is current as of the date of issue of the individual topics.
Please be advised that information may be outdated after that point.