Nations Embrace Sovereign AI: NVIDIA Launches New Microservices to Boost Regional AI Development
Nations worldwide are pursuing sovereign AI, aiming to build artificial intelligence (AI) systems on their own computing infrastructure, data, workforce, and business networks so that those systems align with local values, laws, and interests. In a significant step to support these efforts, NVIDIA recently announced the availability of four new NVIDIA NIM microservices, designed to help developers build and deploy high-performing generative AI applications with greater ease.
Sovereign AI: A Global Trend
Sovereign AI refers to AI systems that are developed and maintained within a specific nation, using local resources and data. This approach is gaining traction globally as countries seek to maintain control over their AI technologies and ensure they reflect local cultural and legal standards. The new NVIDIA NIM microservices are tailored to support popular community models that address regional needs, enhancing user interactions through accurate understanding and improved responses based on local languages and cultural heritage.
Market Potential and Regional Models
The potential for growth in the generative AI software market is substantial. For instance, in the Asia-Pacific region alone, revenue is expected to skyrocket from $5 billion this year to $48 billion by 2030, according to ABI Research. NVIDIA’s new microservices include regional language models like Llama-3-Swallow-70B, trained on Japanese data, and Llama-3-Taiwan-70B, trained on Mandarin data. These models provide a deeper understanding of local laws, regulations, and cultural customs.
The RakutenAI 7B family of models, built on Mistral-7B, was trained on English and Japanese datasets and is available as two NIM microservices, one for Chat and one for Instruct. Rakuten's foundation and instruct models achieved top scores among open Japanese large language models in the LM Evaluation Harness benchmark conducted from January to March 2024.
The Importance of Large Language Models (LLMs)
Large language models (LLMs) trained on regional languages are crucial for enhancing the effectiveness of AI outputs: they communicate with greater accuracy and nuance, reflecting cultural and linguistic subtleties. Compared with base LLMs like Llama 3, the regional models offered through these NIM microservices deliver leading performance in understanding Japanese and Mandarin, handling regional legal tasks, answering questions, and translating and summarizing text.
Global Investment in Sovereign AI
Countries across the globe, including Singapore, the United Arab Emirates, South Korea, Sweden, France, Italy, and India, are heavily investing in sovereign AI infrastructure. The new NVIDIA NIM microservices empower businesses, government agencies, and universities to host native LLMs within their own environments, enabling developers to create advanced AI applications like copilots, chatbots, and AI assistants.
Developing Applications with Sovereign AI NIM Microservices
Developers can deploy these sovereign AI models, packaged as NIM microservices, into production environments with improved performance. The microservices, available with NVIDIA AI Enterprise, are optimized for inference with the NVIDIA TensorRT-LLM open-source library. NIM microservices built on Llama 3 70B, the base model for the new Llama-3-Swallow-70B and Llama-3-Taiwan-70B, can provide up to five times higher throughput. This improvement reduces the total cost of running the models in production and enhances user experiences by decreasing latency.
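To see why higher throughput translates into lower serving cost, consider a back-of-envelope calculation. The GPU-hour price and baseline throughput below are hypothetical placeholders, not figures from this article; only the up-to-5x speedup is cited above.

```python
# Back-of-envelope serving-cost sketch. GPU_HOUR_COST and
# BASE_TOKENS_PER_SEC are hypothetical assumptions for illustration.
GPU_HOUR_COST = 2.00       # assumed $/GPU-hour
BASE_TOKENS_PER_SEC = 300  # assumed baseline generation throughput
SPEEDUP = 5.0              # the up-to-5x throughput figure cited above

def cost_per_million_tokens(tokens_per_sec, gpu_hour_cost=GPU_HOUR_COST):
    """Dollars to generate one million tokens at a given throughput."""
    tokens_per_hour = tokens_per_sec * 3600
    return gpu_hour_cost / tokens_per_hour * 1_000_000

base = cost_per_million_tokens(BASE_TOKENS_PER_SEC)
optimized = cost_per_million_tokens(BASE_TOKENS_PER_SEC * SPEEDUP)
print(f"baseline:  ${base:.2f} per 1M tokens")
print(f"optimized: ${optimized:.2f} per 1M tokens")
```

Because cost per token is inversely proportional to throughput, a 5x throughput gain cuts per-token serving cost by the same factor, whatever the actual hardware price.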
The new NIM microservices are available today as hosted application programming interfaces (APIs), making it easier for developers to integrate them into their applications.
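Since hosted NIM microservices expose an OpenAI-compatible chat-completion interface, integration can look roughly like the sketch below. The endpoint URL and model identifier are assumptions for illustration; check NVIDIA's API catalog for the exact identifiers of the regional models.

```python
# Minimal sketch of querying a hosted NIM microservice through an
# OpenAI-compatible chat endpoint. NIM_ENDPOINT and MODEL are assumed
# placeholder values, not confirmed identifiers.
import json
import os
import urllib.request

NIM_ENDPOINT = "https://integrate.api.nvidia.com/v1/chat/completions"  # assumed
MODEL = "tokyotech-llm/llama-3-swallow-70b-instruct-v0.1"              # assumed

def build_request(prompt: str, model: str = MODEL) -> dict:
    """Assemble an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }

def query_nim(prompt: str) -> str:
    """Send the payload with a bearer token and return the reply text."""
    req = urllib.request.Request(
        NIM_ENDPOINT,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Show the payload that would be sent (no network call made here).
    payload = build_request("自動車保険の免責事項を説明してください。")
    print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Because the interface follows the widely used chat-completion shape, existing OpenAI-style client libraries can typically be pointed at the hosted endpoint with only a base-URL and model-name change.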
Accelerating AI Deployments with NVIDIA NIM
NVIDIA’s NIM microservices accelerate AI deployments, enhance performance, and provide the necessary security for organizations across various industries, including healthcare, finance, manufacturing, education, and legal sectors. The Tokyo Institute of Technology fine-tuned Llama-3-Swallow-70B using Japanese-language data. Rio Yokota, a professor at the Global Scientific Information and Computing Center at the Tokyo Institute of Technology, emphasized the importance of developing sovereign AI models that adhere to cultural norms. He noted that these models interact with human culture and creativity, making it crucial to have AI systems that reflect local values.
For example, Preferred Networks, a Japanese AI company, has developed a healthcare-specific model called Llama3-Preferred-MedSwallow-70B. This model, trained on a unique corpus of Japanese medical data, has achieved top scores on the Japan National Examination for Physicians.
In Taiwan, Chang Gung Memorial Hospital (CGMH) is building a custom AI Inference Service (AIIS) to centralize all LLM applications within the hospital system. By using Llama-3-Taiwan-70B, the hospital is improving the efficiency of frontline medical staff with more nuanced medical language that patients can understand. Dr. Changfu Kuo, director of the Center for Artificial Intelligence in Medicine at CGMH, highlighted how AI applications built with local-language LLMs streamline workflows and serve as continuous learning tools to support staff development and improve patient care.
Industry Adoption and Collaboration
Several companies are adopting the new NVIDIA NIM microservices for their applications. Taiwan-based Pegatron, a maker of electronic devices, is integrating the Llama-3-Taiwan-70B NIM microservice into its PEGAAi Agentic AI System to automate processes and boost efficiency in manufacturing and operations. Global petrochemical manufacturer Chang Chun Group, printed circuit board company Unimicron, technology-focused media company TechOrange, online contract service company LegalSign.ai, and generative AI startup APMIC are also leveraging these models for various applications.
Custom Enterprise Models with NVIDIA AI Foundry
While regional AI models provide culturally nuanced and localized responses, enterprises often need to fine-tune them for their specific business processes and domain expertise. NVIDIA AI Foundry offers a platform and service that includes popular foundation models, NVIDIA NeMo for fine-tuning, and dedicated capacity on NVIDIA DGX Cloud. This comprehensive solution allows developers to create customized foundation models packaged as NIM microservices.
Developers using NVIDIA AI Foundry also have access to the NVIDIA AI Enterprise software platform, which provides the security, stability, and support needed for production deployments. NVIDIA AI Foundry equips developers with the tools to quickly and easily build and deploy custom regional language NIM microservices, ensuring culturally and linguistically appropriate results for their users.
Conclusion
The launch of NVIDIA’s new NIM microservices marks a significant milestone in the development of sovereign AI systems. By providing tools that enable the creation and deployment of high-performing generative AI applications tailored to regional needs, NVIDIA is helping nations worldwide harness the power of AI while maintaining alignment with local values, laws, and interests. As the global market for generative AI continues to grow, these advancements will play a crucial role in shaping the future of AI technology across various industries.