

April 24, 2025 13:30 UTC

NTT Scientists Present Breakthrough Research on AI Deep Learning at ICLR 2025

 NTT Research and NTT R&D co-authored papers explore LLMs’ uncertain and open-ended nature, the “emergence” phenomenon, In-Context Learning and more

News Highlights:

  • Nine papers by NTT Research and NTT R&D scientists on breakthroughs in the branch of AI called “deep learning” presented at the esteemed international conference.
  • Five papers co-authored by members of NTT Research’s new Physics of Artificial Intelligence (PAI) Group explore fundamental elements of AI learning, understanding and growth.
  • The PAI Group, established in April 2025, aims to deepen understanding of AI mechanisms, observe the learning and prediction behaviors of AI and heal the breach of trust between AI and human operators.

SUNNYVALE, Calif. & TOKYO--(BUSINESS WIRE)--NTT Research, Inc. and NTT R&D, divisions of NTT (TYO:9432), announced that their scientists will present nine papers at the International Conference on Learning Representations (ICLR) 2025, a top-tier machine learning conference dedicated to the advancement of representation learning, particularly deep learning. Five of the accepted presentations result from research co-authored by scientists within NTT Research’s recently announced Physics of Artificial Intelligence (PAI) Group led by Group Head Hidenori Tanaka.

Collectively, this research breaks new ground in understanding how AI models learn, grow and overcome uncertainty—all supporting NTT’s commitment to pioneering transformative, socially resilient, sustainable and responsible AI.

“The Physics of AI Group and its collaborators share the excitement for AI’s potential expressed by the public, the technology industry and the academic community,” said Tanaka. “As the research accepted at ICLR 2025 shows, however, important questions remain about how AI fundamentally learns and how generative AI fundamentally creates outputs. Neural networks play a vital role in the ‘deep learning’ of AI, and improving our understanding of them is vital to ultimately foster the development of sustainable, reliable and trustworthy AI technologies.”

One paper, “Forking Paths in Neural Text Generation,” addresses the problem of estimating uncertainty in Large Language Models (LLMs) for proper evaluation and user safety. Whereas prior approaches to uncertainty estimation focus on the final answer in generated text, ignoring potentially impactful intermediate steps, this research tested the hypothesis that key “forking” tokens exist, such that re-sampling the model at those specific tokens, but not others, leads to very different outcomes. The researchers found many examples of forking tokens, including punctuation marks, suggesting that LLMs are often just a single token away from generating a different output.
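The resampling idea behind forking tokens can be illustrated with a toy autoregressive model. The sketch below is a minimal stand-in of my own (a hand-built bigram sampler, not the paper's LLM setup): for each position in a generated sequence, it resamples the continuation many times and records how often the final answer changes. Positions with high scores behave like forking tokens.

```python
import random

# Toy stand-in for an LLM: a fixed bigram distribution over a tiny vocabulary.
# This is NOT the paper's setup; it only illustrates the resampling idea.
BIGRAM = {
    "<s>": [("yes", 0.5), ("no", 0.5)],
    "yes": [(",", 0.9), ("!", 0.1)],
    "no": [(",", 0.9), ("!", 0.1)],
    ",": [("maybe", 0.5), ("certainly", 0.5)],
    "!": [("end", 1.0)],
    "maybe": [("end", 1.0)],
    "certainly": [("end", 1.0)],
}

def sample_next(token, rng):
    choices, weights = zip(*BIGRAM[token])
    return rng.choices(choices, weights=weights)[0]

def complete(prefix, rng):
    """Autoregressively extend a prefix until the terminal token."""
    seq = list(prefix)
    while seq[-1] != "end":
        seq.append(sample_next(seq[-1], rng))
    return seq

def forking_scores(base, n_resamples=200, seed=0):
    """For each position t, resample continuations from base[:t+1] and
    record how often the final answer (token before 'end') changes."""
    rng = random.Random(seed)
    answer = base[-2]
    scores = []
    for t in range(len(base) - 1):
        diff = sum(
            complete(base[: t + 1], rng)[-2] != answer
            for _ in range(n_resamples)
        )
        scores.append(diff / n_resamples)
    return scores

rng = random.Random(1)
base = complete(["<s>"], rng)
scores = forking_scores(base)
print(base)
print(scores)
```

In this toy grammar the comma is exactly where continuations diverge (the next token splits 50/50), mirroring the paper's observation that a single token can separate very different outputs.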

The paper was co-authored by Eric Bigelow1,2,3, Ari Holtzman4, Hidenori Tanaka2,3 and Tomer Ullman1,2.

Four other papers co-authored by members of the NTT Research PAI Group will be presented at the show, including:

  • “In-Context Learning of Representations”: Researchers explore the open-ended nature of LLMs (for example, their ability to learn in context) and whether models can override the semantics learned in pretraining to adopt alternative, context-specified ones. Findings indicate that scaling context size can flexibly re-organize model representations, possibly unlocking novel capabilities. Authors include: Core Francisco Park3,5,6, Andrew Lee7, Ekdeep Singh Lubana3,5, Yongyi Yang3,5,8, Maya Okawa3,5, Kento Nishi5,7, Martin Wattenberg7 and Hidenori Tanaka.
  • “Competition Dynamics Shape Algorithmic Phases of In-Context Learning”: Researchers propose a synthetic sequence modeling task that involves learning to simulate a finite mixture of Markov chains. They argue that In-Context Learning (ICL) is best thought of as a mixture of different algorithms, each with its own peculiarities, instead of a monolithic capability, also implying that making general claims about ICL that hold universally across all settings may be infeasible. Authors include: Core Francisco Park, Ekdeep Singh Lubana, Itamar Pres9 and Hidenori Tanaka.
  • “Dynamics of Concept Learning and Compositional Generalization”: Researchers propose an abstraction of prior work's compositional generalization problem by introducing a structured identity mapping (SIM) task, where a model is trained to learn the identity mapping on a Gaussian mixture with structurally organized centroids. Overall, the work establishes the SIM task as a meaningful theoretical abstraction of concept learning dynamics in modern generative models. Authors include: Yongyi Yang, Core Francisco Park, Ekdeep Singh Lubana, Maya Okawa, Wei Hu8 and Hidenori Tanaka.
  • “A Percolation Model of Emergence: Analyzing Transformers Trained on a Formal Language”: Recognizing the need to establish the causal factors underlying the phenomenon of “emergence” in a neural network, researchers seek inspiration from the study of emergent properties in other fields and propose a phenomenological definition for the concept in the context of neural networks. Authors include: Ekdeep Singh Lubana, Kyogo Kawaguchi10,11,12, Robert P. Dick9 and Hidenori Tanaka.
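To give a concrete sense of the kind of synthetic task described in the second paper, one can generate sequences from a finite mixture of Markov chains. The snippet below is a minimal sketch under assumptions of my own (random transition matrices, two chains, three tokens), not the authors' exact construction: each sequence is produced by a latent chain that a model trained on such data would have to identify in context.

```python
import random

def make_chain(n_states, rng):
    """Random row-stochastic transition matrix over n_states tokens."""
    mat = []
    for _ in range(n_states):
        row = [rng.random() for _ in range(n_states)]
        total = sum(row)
        mat.append([p / total for p in row])
    return mat

def sample_sequence(chains, length, rng):
    """Draw one sequence: pick a latent chain, then walk it."""
    chain = rng.choice(chains)  # latent component of the mixture
    state = rng.randrange(len(chain))
    seq = [state]
    for _ in range(length - 1):
        state = rng.choices(range(len(chain)), weights=chain[state])[0]
        seq.append(state)
    return seq

rng = random.Random(0)
chains = [make_chain(3, rng) for _ in range(2)]  # K=2 chains, 3 tokens each
data = [sample_sequence(chains, 10, rng) for _ in range(5)]
for seq in data:
    print(seq)
```

A sequence model trained on such data must, in effect, infer which chain generated the context before it can predict well, which is what makes the task a useful probe of in-context learning.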

In addition, four papers authored or co-authored by NTT R&D scientists based in Japan will be presented at the conference.

ICLR 2025, the thirteenth International Conference on Learning Representations, is a globally esteemed conference on deep learning held in Singapore, April 24-28, 2025. Last year at ICLR 2024, NTT Research Physics & Informatics (PHI) Lab scientists co-authored two key papers: one on “analyzing in-context learning dynamics with random binary sequences, revealing sharp transitions in LLM behaviors” and another on “how fine-tuning affects model capabilities, showing minimal changes.”

The NTT Research Physics of Artificial Intelligence Group is dedicated to advancing our understanding of deep neural networks and the psychology of AI. Its three-pronged mission includes: 1) deepening our understanding of the mechanisms of AI, the better to integrate ethics from within rather than through a patchwork of fine-tuning (i.e., enforced learning); 2) borrowing from experimental physics to create systematically controllable spaces of AI and observe its learning and prediction behaviors step by step; and 3) healing the breach of trust between AI and human operators through improved operations and data control.

Formally established in April 2025 by members of the PHI Lab, the group began as a collaboration between NTT Research and the Harvard University Center for Brain Science, formerly known as the Harvard University CBS-NTT Fellowship Program.

_________________________


1Harvard University, Department of Psychology

2Harvard University, Center for Brain Science

3NTT Research, Physics of Artificial Intelligence Group

4University of Chicago, Department of Computer Science

5CBS-NTT Program in Physics of Intelligence, Harvard University

6Department of Physics, Harvard University

7SEAS, Harvard University

8CSE, University of Michigan, Ann Arbor

9EECS Department, University of Michigan, Ann Arbor

10Nonequilibrium Physics of Living Matter RIKEN Hakubi Research Team, RIKEN Center for Biosystems Dynamics Research

11RIKEN Cluster for Pioneering Research

12Institute for Physics of Intelligence, Department of Physics, The University of Tokyo

13NTT Corporation

14Yokohama National University

15Kyoto University

16NTT Social Informatics Laboratories

17NTT Computer and Data Science Laboratories

About NTT Research

NTT Research opened its offices in July 2019 in Silicon Valley to conduct basic research and advance technologies as a foundational model for developing high-impact innovation across NTT Group's global business. Currently, four groups are housed at NTT Research facilities in Sunnyvale: the Physics and Informatics (PHI) Lab, the Cryptography and Information Security (CIS) Lab, the Medical and Health Informatics (MEI) Lab, and the Physics of Artificial Intelligence (PAI) Group. The organization aims to advance science in four areas: 1) quantum information, neuroscience and photonics; 2) cryptographic and information security; 3) medical and health informatics; and 4) artificial intelligence. NTT Research is part of NTT, a global technology and business solutions provider with an annual R&D investment of thirty percent of its profits.

NTT and the NTT logo are registered trademarks or trademarks of NIPPON TELEGRAPH AND TELEPHONE CORPORATION and/or its affiliates. All other referenced product names are trademarks of their respective owners. ©2025 NIPPON TELEGRAPH AND TELEPHONE CORPORATION

Contacts

NTT Research Contact:
Chris Shaw
Chief Marketing Officer
NTT Research
+1-312-888-5412
chris.shaw@ntt-research.com

Media Contact:
Nick Gibiser
Wireside Communications®
For NTT Research
+1-804-500-6660
ngibiser@wireside.com


Source: NTT Research, Inc.

View this news release online at:
https://www.businesswire.com/news/home/20250424778713/en

