NTT Scientists Present Breakthrough Research on AI Deep Learning at ICLR 2025
NTT Research and NTT R&D co-authored papers explore LLMs’ uncertain and open-ended nature, the “emergence” phenomenon, In-Context Learning and more
News Highlights:
- Nine papers on breakthroughs in the branch of AI called “deep learning” presented by NTT Research and NTT R&D scientists at the esteemed international conference.
- Five papers co-authored by members of NTT Research’s new Physics of Artificial Intelligence (PAI) Group explore fundamental elements of AI learning, understanding and growth.
- The PAI Group, established in April 2025, aims to deepen understanding of AI mechanisms, observe the learning and prediction behaviors of AI and heal the breach of trust between AI and human operators.
SUNNYVALE, Calif. & TOKYO--(BUSINESS WIRE)-- NTT Research, Inc. and NTT R&D, divisions of NTT (TYO:9432), announced that their scientists will present nine papers at the International Conference on Learning Representations (ICLR) 2025, a top-tier machine learning conference dedicated to the advancement of representation learning, particularly deep learning. Five of the accepted presentations result from research co-authored by scientists within NTT Research’s recently announced Physics of Artificial Intelligence (PAI) Group led by Group Head Hidenori Tanaka.
Collectively, this research breaks new ground in understanding how AI models learn, grow and overcome uncertainty—all supporting NTT’s commitment to pioneering transformative, socially resilient, sustainable and responsible AI.
“The Physics of AI Group and its collaborators share the excitement for AI’s potential expressed by the public, the technology industry and the academic community,” said Tanaka. “As the research accepted at ICLR 2025 shows, however, important questions remain about how AI fundamentally learns and how generative AI fundamentally creates outputs. Neural networks play a vital role in the ‘deep learning’ of AI, and improving our understanding of them is vital to ultimately foster the development of sustainable, reliable and trustworthy AI technologies.”
One paper, “Forking Paths in Neural Text Generation,” addresses the issue of estimating uncertainty in Large Language Models (LLMs) for proper evaluation and user safety. Whereas prior approaches to uncertainty estimation focus on the final answer in generated text—ignoring potentially impactful intermediate steps—this research tested the hypothesis that key “forking” tokens exist, such that re-sampling the system at those specific tokens, but not others, leads to very different outcomes. The researchers discovered many examples of forking tokens, including punctuation marks, suggesting that LLMs are often just a single token away from generating a different output.
The paper was co-authored by Eric Bigelow1,2,3, Ari Holtzman4, Hidenori Tanaka2,3 and Tomer Ullman1,2.
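To make the re-sampling idea concrete, the sketch below is a minimal illustration, not the authors’ implementation. It assumes an off-the-shelf causal language model from the Hugging Face transformers library (here gpt2, an arbitrary choice), truncates a greedy reference continuation at successive token positions, re-samples fresh completions from each prefix, and tallies how the distribution of endings shifts. A position where that tally changes sharply relative to its neighbors would be a candidate forking token in the sense described above.

# Minimal sketch, not the paper's code: probe for candidate "forking tokens"
# by re-sampling a causal LM at successive positions of a reference output.
# The model name, prompt and divergence proxy (a tally of endings) are
# illustrative assumptions.
from collections import Counter

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any small causal LM works for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Q: Is 7 a prime number? A:"
base_ids = tok(prompt, return_tensors="pt").input_ids

# One greedy "reference" continuation to truncate and branch from.
ref = model.generate(
    base_ids, max_new_tokens=20, do_sample=False, pad_token_id=tok.eos_token_id
)

def resample_at(position: int, n_samples: int = 20) -> Counter:
    """Keep `position` tokens of the reference continuation, then sample
    fresh completions and tally how often each distinct ending appears."""
    prefix = ref[:, : base_ids.shape[1] + position]
    outs = model.generate(
        prefix,
        max_new_tokens=20,
        do_sample=True,
        top_p=0.95,
        num_return_sequences=n_samples,
        pad_token_id=tok.eos_token_id,
    )
    endings = [
        tok.decode(o[prefix.shape[1]:], skip_special_tokens=True) for o in outs
    ]
    return Counter(endings)

# A position where the tally shifts sharply relative to its neighbors is a
# candidate forking token in the sense described above.
with torch.no_grad():
    for pos in range(6):
        print(pos, resample_at(pos).most_common(3))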
Four other papers co-authored by members of the NTT Research PAI Group will be presented at the show, including:
- “In-Context Learning of Representations:” Researchers explore the open-ended nature of LLMs (for example, their ability to learn in context) and ask whether models can override the concept semantics acquired during pretraining to adopt alternative, context-specified ones. Findings indicate that scaling context size can flexibly re-organize model representations, possibly unlocking novel capabilities. Authors include: Core Francisco Park3,5,6, Andrew Lee7, Ekdeep Singh Lubana3,5, Yongyi Yang3,5,8, Maya Okawa3,5, Kento Nishi5,7, Martin Wattenberg7 and Hidenori Tanaka.
- “Competition Dynamics Shape Algorithmic Phases of In-Context Learning:” Researchers propose a synthetic sequence modeling task that involves learning to simulate a finite mixture of Markov chains (a minimal data-generation sketch of this setting follows the list below). They argue that In-Context Learning (ICL) is best thought of as a mixture of different algorithms, each with its own peculiarities, rather than as a monolithic capability, which also implies that making general claims about ICL that hold universally across all settings may be infeasible. Authors include: Core Francisco Park, Ekdeep Singh Lubana, Itamar Pres9 and Hidenori Tanaka.
- “Dynamics of Concept Learning and Compositional Generalization:” Researchers propose an abstraction of the compositional generalization problem studied in prior work by introducing a structured identity mapping (SIM) task, where a model is trained to learn the identity mapping on a Gaussian mixture with structurally organized centroids. Overall, the work establishes the SIM task as a meaningful theoretical abstraction of concept learning dynamics in modern generative models (a toy SIM data sketch also appears after this list). Authors include: Yongyi Yang, Core Francisco Park, Ekdeep Singh Lubana, Maya Okawa, Wei Hu8 and Hidenori Tanaka.
- “A Percolation Model of Emergence: Analyzing Transformers Trained on a Formal Language:” Recognizing the need to establish the causal factors underlying the phenomenon of “emergence” in a neural network, researchers seek inspiration from the study of emergent properties in other fields and propose a phenomenological definition for the concept in the context of neural networks. Authors include: Ekdeep Singh Lubana, Kyogo Kawaguchi10,11,12, Robert P. Dick9 and Hidenori Tanaka.
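As a rough illustration of the synthetic setting described in the second bullet above, the sketch below is an assumed form of the setup rather than the authors’ code: each training sequence is drawn from one of several hidden Markov chains, so an in-context learner must infer which chain generated the sequence it is reading.

# Minimal sketch (an assumed setup, not the paper's code): build a synthetic
# dataset where each sequence is emitted by one of K hidden Markov chains.
import numpy as np

rng = np.random.default_rng(0)
K, V, T = 3, 8, 64  # number of chains, vocabulary size, sequence length

# One row-stochastic transition matrix per chain, shape (K, V, V).
chains = rng.dirichlet(np.ones(V), size=(K, V))

def sample_sequence(chain_id: int) -> np.ndarray:
    """Roll out T tokens from the chosen chain's transition matrix."""
    seq = np.empty(T, dtype=np.int64)
    seq[0] = rng.integers(V)
    for t in range(1, T):
        seq[t] = rng.choice(V, p=chains[chain_id, seq[t - 1]])
    return seq

# The training set is a mixture: each sequence's chain identity stays hidden.
dataset = [sample_sequence(rng.integers(K)) for _ in range(1000)]
print(dataset[0][:16])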
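Similarly, the structured identity mapping (SIM) task in the third bullet can be pictured with a toy data generator. The sketch below is an assumed version for illustration only: it places Gaussian mixture centroids on a regular grid and sets the training targets equal to the inputs.

# Minimal sketch (an assumed form of the SIM task, not the authors' code):
# Gaussian mixture data with structurally organized centroids, paired with
# identity-mapping targets for a model to learn.
import numpy as np

rng = np.random.default_rng(1)

# Centroids arranged on a 4x4 grid give the mixture its structure.
grid = np.stack(np.meshgrid(np.arange(4), np.arange(4)), axis=-1).reshape(-1, 2)
centroids = 3.0 * grid.astype(float)

n = 2048
labels = rng.integers(len(centroids), size=n)
x = centroids[labels] + 0.25 * rng.standard_normal((n, 2))
y = x.copy()  # identity-mapping targets: the model must reproduce its input

# Training a small network on (x, y) and tracking how its predictions
# generalize across the centroid structure is the kind of controlled probe
# of concept-learning dynamics described above.
print(x.shape, y.shape)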
In addition, four papers authored or co-authored by NTT R&D scientists based in Japan will be presented at the show, including:
- “Test-time Adaptation for Regression by Subspace Alignment” Authors include: Kazuki Adachi13,14, Shin’ya Yamaguchi13,15, Atsutoshi Kumagai13 and Tomoki Hamagami14.
- “Analysis of Linear Mode Connectivity via Permutation-Based Weight Matching: With Insights into Other Permutation Search Methods” Authors include: Akira Ito16, Masanori Yamada16 and Atsutoshi Kumagai.
- “Positive-Unlabeled Diffusion Models for Preventing Sensitive Data Generation” Authors include: Hiroshi Takahashi13, Tomoharu Iwata13, Atsutoshi Kumagai, Yuuki Yamanaka13 and Tomoya Yamashita13.
- “Wavelet-based Positional Representation for Long Context” Authors include: Yui Oka13, Taku Hasegawa13, Kyosuke Nishida13 and Kuniko Saito13.
ICLR 2025, the thirteenth International Conference on Learning Representations, is a globally esteemed conference on deep learning being held in Singapore April 24-28, 2025. Last year at ICLR 2024, NTT Research Physics & Informatics (PHI) Lab scientists co-authored two key papers: one on “analyzing in-context learning dynamics with random binary sequences, revealing sharp transitions in LLM behaviors” and another on “how fine-tuning affects model capabilities, showing minimal changes.”
The NTT Research Physics of Artificial Intelligence Group is dedicated to advancing our understanding of deep neural networks and the psychology of AI. Its three-pronged mission includes: 1) Deepening our understanding of the mechanisms of AI, all the better to integrate ethics from within rather than through a patchwork of fine-tuning (i.e., enforced learning); 2) Borrowing from experimental physics, continuing to create systematically controllable spaces of AI and observing the learning and prediction behaviors of AI step-by-step; and 3) Healing the breach of trust between AI and human operators through improved operations and data control.
Formally established in April 2025 by members of the PHI Lab, the group began as a collaboration between NTT Research and the Harvard University Center for Brain Science, formerly known as the Harvard University CBS-NTT Fellowship Program.
_________________________
1 Harvard University, Department of Psychology
2 Harvard University, Center for Brain Science
3 NTT Research, Physics of Artificial Intelligence Group
4 University of Chicago, Department of Computer Science
5 CBS-NTT Program in Physics of Intelligence, Harvard University
6 Department of Physics, Harvard University
7 SEAS, Harvard University
8 CSE, University of Michigan, Ann Arbor
9 EECS Department, University of Michigan, Ann Arbor
10 Nonequilibrium Physics of Living Matter RIKEN Hakubi Research Team, RIKEN Center for Biosystems Dynamics Research
11 RIKEN Cluster for Pioneering Research
12 Institute for Physics of Intelligence, Department of Physics, The University of Tokyo
13 NTT Corporation
14 Yokohama National University
15 Kyoto University
16 NTT Social Informatics Laboratories
17 NTT Computer and Data Science Laboratories
About NTT Research
NTT Research opened its offices in July 2019 in Silicon Valley to conduct basic research and advance technologies as a foundational model for developing high-impact innovation across NTT Group's global business. Currently, four groups are housed at NTT Research facilities in Sunnyvale: the Physics and Informatics (PHI) Lab, the Cryptography and Information Security (CIS) Lab, the Medical and Health Informatics (MEI) Lab, and the Physics of Artificial Intelligence (PAI) Group. The organization aims to advance science in four areas: 1) quantum information, neuroscience and photonics; 2) cryptographic and information security; 3) medical and health informatics; and 4) artificial intelligence. NTT Research is part of NTT, a global technology and business solutions provider with an annual R&D investment of thirty percent of its profits.
NTT and the NTT logo are registered trademarks or trademarks of NIPPON TELEGRAPH AND TELEPHONE CORPORATION and/or its affiliates. All other referenced product names are trademarks of their respective owners. ©2025 NIPPON TELEGRAPH AND TELEPHONE CORPORATION
View source version on businesswire.com: https://www.businesswire.com/news/home/20250424778713/en/
Contacts
NTT Research Contact:
Chris Shaw
Chief Marketing Officer
NTT Research
+1-312-888-5412
chris.shaw@ntt-research.com
Media Contact:
Nick Gibiser
Wireside Communications®
For NTT Research
+1-804-500-6660
ngibiser@wireside.com
Source: NTT Research, Inc.