
Rakuten Selected for Ministry of Economy, Trade and Industry and NEDO’s GENIAC Project to Bolster GenAI Development

- Next-gen Japanese Large Language Model R&D to begin in August, with the aim of realizing highly personalized AI agents


Tokyo – WEBWIRE

Rakuten Group, Inc. announced it has been selected for the third term of the Generative AI Accelerator Challenge (GENIAC) project promoted by the Ministry of Economy, Trade and Industry (METI) and the New Energy and Industrial Technology Development Organization (NEDO) with the aim of strengthening Japan’s generative AI development*1.

The GENIAC project primarily provides support for the computing resources necessary for generative AI development. It also facilitates knowledge sharing through exchanges on the latest technology and developer community trends. Research and development support was provided in the first term, beginning in February 2024, and in the second term, beginning in October of the same year. Rakuten’s project was selected through NEDO’s third-term application period, which opened in March 2025.

Rakuten has been actively developing and releasing Japanese language-optimized AI models to the open-source community since March 2024*2. From the outset, the company has emphasized cost efficiency. Rakuten focuses on smaller, highly efficient models, such as Rakuten AI 2.0, which leverages a Mixture of Experts (MoE) architecture*3 that allows it to process queries by activating only relevant subsets of experts, significantly reducing operational costs compared to traditional dense models.
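The cost advantage of MoE routing described above can be illustrated with a toy sketch. This is a hypothetical NumPy example with made-up sizes (8 experts, top-2 routing), not Rakuten AI 2.0’s actual architecture: only the top-k experts run per token, so compute scales with k rather than with the total number of experts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Mixture of Experts layer: 8 experts, top-2 routing (illustrative sizes).
NUM_EXPERTS, TOP_K, DIM = 8, 2, 16

# Each expert is a simple linear map; in a real LLM these are feed-forward blocks.
experts = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(NUM_EXPERTS)]
# The router scores how relevant each expert is for a given input token.
router = rng.standard_normal((DIM, NUM_EXPERTS)) * 0.1

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a token through only its top-k experts."""
    scores = x @ router                        # routing logits, one per expert
    top = np.argsort(scores)[-TOP_K:]          # indices of the k highest-scoring experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                   # softmax over the chosen experts only
    # Only TOP_K of NUM_EXPERTS experts run, which is where the cost saving comes from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
out = moe_forward(token)
print(out.shape)  # (16,)
```

A dense model would multiply the token through all eight expert matrices; here six of them are skipped entirely for this token, while different tokens may activate different experts.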

As a selected project participant, Rakuten will begin research and development in August 2025 on a cutting-edge, open-weight AI foundation model. This model will integrate new techniques that significantly expand the language model’s memory capabilities – effectively increasing the information it can access when generating responses. Rakuten is leveraging these techniques to overcome current limitations in generative AI memory, developing a model that offers vastly improved recall and performance.

Looking ahead, Rakuten is pioneering the next chapter of its AI journey, focusing on personalization and memory for users. This involves enabling LLMs to "remember" past interactions and conversations, moving beyond mere understanding to proactively suggest and nurture long-term user relationships – a significant advancement over the limitations of current transformer architectures that struggle with extended context windows.

Furthermore, Rakuten aims to significantly improve efficiency through better training and inference algorithms, unlocking new possibilities for personalized AI. Through these technological advancements, the company aims to expand the application of AI agents to a range of services across the Rakuten Ecosystem, improving customer experience and streamlining operations.

Yu Hirate, Vice General Manager at Rakuten Group’s AI Research Supervisory Department and Rakuten Institute of Technology Worldwide, commented, "I am very pleased to be able to work on the development of a cutting-edge generative AI foundation model with the support of NEDO and the Ministry of Economy, Trade and Industry. Through this cost-effective AI model, we hope to contribute to the realization of AI agents that are best optimized for the Japanese language and are highly personalized, as well as empower local businesses and boost the economy."

Through its AI transformation initiative, or "AI-nization," Rakuten promotes the use of AI across all aspects of its business to achieve further growth. Going forward, Rakuten will continue to leverage its rich data, ubiquitous channels and growth flywheel to create new value for people both in Japan and worldwide.

Notes
*1 GENIAC selection results: https://www.nedo.go.jp/koubo/CD3_100397.html (*Japanese page)
*2 Related press releases
Rakuten AI 2.0 Large Language Model and Small Language Model Optimized for Japanese Now Available (February 12, 2025)
Rakuten Unveils New AI Models Optimized for Japanese (December 18, 2024)
Rakuten Releases High-Performance Open Large Language Models Optimized for the Japanese Language (March 21, 2024)
*3 Mixture of Experts is an AI model architecture in which the model is divided into multiple sub-models, known as experts. During training and inference, only a subset of the experts is activated to process each input.

*Please note that the information contained in press releases is current as of the date of release.



