
Nvidia Unveils Open Model That Creates LLM Training Data


As concerns grow that large language models (LLMs) are running out of high-quality training data, Nvidia has released Nemotron-4 340B, a family of open models designed to generate synthetic data for training LLMs across various industries.

LLMs are artificial intelligence (AI) models that can understand and generate human-like text based on vast amounts of training data. The scarcity of high-quality training data has become a significant challenge for organizations seeking to harness the power of LLMs. Nemotron-4 340B aims to address this issue by giving developers a free, scalable way to generate synthetic data: its base, instruct, and reward models work together in a pipeline that produces data mimicking real-world characteristics.

Synthetic data refers to data that is artificially generated rather than collected from real-world sources. It is designed to closely resemble real data in terms of its characteristics and structure.
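The pipeline pattern described above can be illustrated with a short sketch: an instruct model proposes candidate responses, and a reward model filters them. The generate() and score() helpers below are hypothetical stand-ins for those two models, not Nvidia's actual Nemotron tooling.

```python
# A minimal sketch of a generate-then-filter synthetic data pipeline.
# generate() and score() are hypothetical stand-ins for the instruct and
# reward models; this is not Nvidia's actual Nemotron pipeline code.

def generate(prompt: str, n: int = 4) -> list[str]:
    """Stand-in for the instruct model: propose n candidate responses."""
    return [f"Candidate answer {i} to: {prompt}" for i in range(n)]

def score(prompt: str, response: str) -> float:
    """Stand-in for the reward model: rate the quality of a response."""
    return float(len(response)) / 100  # placeholder heuristic

def synthesize(prompts: list[str], threshold: float = 0.3) -> list[dict]:
    """Keep only the candidates the reward model rates above the threshold."""
    dataset = []
    for prompt in prompts:
        for response in generate(prompt):
            quality = score(prompt, response)
            if quality >= threshold:
                dataset.append(
                    {"prompt": prompt, "response": response, "score": quality}
                )
    return dataset

if __name__ == "__main__":
    print(synthesize(["Summarize a customer complaint about a late delivery."]))
```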

PYMNTS previously reported that industry analysts have warned that “the demand for high-quality data, essential for powering artificial intelligence (AI) conversational tools like OpenAI’s ChatGPT, may soon outstrip supply and potentially stall AI progress.” Jignesh Patel, a computer science professor at Carnegie Mellon University, highlighted the issue, saying, “Humanity can’t replenish that stock faster than LLM companies drain it.”

Optimized Integration With Nvidia Tools

Nvidia said it has optimized the Nemotron-4 340B models to integrate with its open-source tools, NeMo and TensorRT-LLM, facilitating efficient model training and deployment. NeMo is an open-source framework for building and training generative AI models, while TensorRT-LLM is a library for optimizing LLM inference and deployment. Developers can access the models through Hugging Face, a popular platform for sharing AI models, and will soon be able to use them via a user-friendly microservice on Nvidia's website.
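As a rough illustration of the Hugging Face route, the sketch below loads an instruct checkpoint with the Transformers library. Note that this is assumption-heavy: Nvidia distributes these checkpoints in NeMo format, so the exact repository layout and loading path may differ, and a 340B-parameter model requires a multi-GPU cluster in practice.

```python
# A hedged sketch of pulling a model from Hugging Face with Transformers.
# Assumes a Transformers-compatible checkpoint; Nvidia's Nemotron-4 340B
# releases ship in NeMo format, so the real loading path may differ, and
# a model this size will not fit on a single GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Nemotron-4-340B-Instruct"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Generate three example customer-support questions about a banking app."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```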

The Nemotron-4 340B Reward model, which specializes in identifying high-quality responses, has already demonstrated its capabilities by securing the top spot on Hugging Face's RewardBench leaderboard, a benchmark that evaluates how well reward models make that judgment.
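In practice, a reward model like this is used to rank or filter candidate responses. The sketch below shows that pattern with a hypothetical score_attributes() helper; the attribute names follow the HelpSteer2 scheme (helpfulness, correctness, coherence, complexity, verbosity), but the interface and weights are illustrative assumptions, not the model's actual API.

```python
# A minimal sketch of ranking responses with a reward model.
# score_attributes() is a hypothetical stand-in; the attribute names follow
# the HelpSteer2 scheme, but the interface and weights are assumptions.

ATTRIBUTE_WEIGHTS = {
    "helpfulness": 0.35,
    "correctness": 0.35,
    "coherence": 0.20,
    "complexity": 0.05,
    "verbosity": 0.05,
}

def score_attributes(prompt: str, response: str) -> dict[str, float]:
    """Stand-in for the reward model: per-attribute quality scores."""
    return {name: 2.0 for name in ATTRIBUTE_WEIGHTS}  # placeholder values

def overall_score(prompt: str, response: str) -> float:
    """Collapse per-attribute scores into one weighted quality score."""
    scores = score_attributes(prompt, response)
    return sum(ATTRIBUTE_WEIGHTS[k] * v for k, v in scores.items())

def best_response(prompt: str, candidates: list[str]) -> str:
    """Return the candidate the reward model rates highest."""
    return max(candidates, key=lambda r: overall_score(prompt, r))
```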

Customization and Fine-Tuning Options

Researchers also have the option to customize the Nemotron-4 340B Base model using their own data and the provided HelpSteer2 dataset, allowing them to create instruct or reward models tailored to their specific requirements. The Base model, trained on 9 trillion tokens, can be fine-tuned using the NeMo framework to adapt to various use cases and domains. Fine-tuning refers to the process of adjusting a pre-trained model’s parameters using a smaller dataset specific to a particular task or domain, enabling the model to perform better on that task.
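To make the idea of fine-tuning concrete, the sketch below shows a generic supervised fine-tuning loop over (prompt, response) pairs, written in plain PyTorch with a Hugging Face causal LM rather than the NeMo framework, whose actual API differs; it illustrates the concept, not Nvidia's workflow.

```python
# A generic supervised fine-tuning loop, sketched with plain PyTorch and a
# Hugging Face causal LM; Nvidia's NeMo framework has its own API, so this
# only illustrates the concept of fine-tuning, not the actual workflow.
import torch

def fine_tune(model, tokenizer, pairs, epochs=1, lr=1e-5):
    """Adapt a pre-trained model to (prompt, response) pairs from a domain."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for prompt, response in pairs:
            # Teach the model to produce the response given the prompt.
            text = prompt + "\n" + response + tokenizer.eos_token
            batch = tokenizer(text, return_tensors="pt").to(model.device)
            loss = model(**batch, labels=batch["input_ids"]).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model
```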


The post Nvidia Unveils Open Model That Creates LLM Training Data appeared first on PYMNTS.com.