Chinese AI startup DeepSeek, known for challenging leading AI vendors with its innovative open-source technologies, today released a new ultra-large model: DeepSeek-V3.
Available via Hugging Face under the company’s license agreement, the new model comes with 671B parameters but uses a mixture-of-experts architecture that activates only a subset of them for any given task, keeping inference both accurate and efficient. According to benchmarks shared by DeepSeek, the offering is already topping the charts, outperforming leading open-source models, including Meta’s Llama 3.1-405B, and closely matching the performance of closed models from Anthropic and OpenAI.
The release marks another major development closing the gap between closed and open-source AI. Ultimately, DeepSeek, which started as an offshoot of Chinese quantitative hedge fund High-Flyer Capital Management, hopes these developments will pave the way for artificial general intelligence (AGI), where models will have the ability to understand or learn any intellectual task that a human being can.
What does DeepSeek-V3 bring to the table?
Just like its predecessor DeepSeek-V2, the new ultra-large model uses the same basic architecture revolving around multi-head latent attention (MLA) and DeepSeekMoE. This approach ensures it maintains efficient training and inference — with specialized and shared “experts” (individual, smaller neural networks within the larger model) activating 37B parameters out of 671B for each token.
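To see why mixture-of-experts makes such a large model cheap to run, here is a deliberately tiny routing sketch. All names, sizes, and the top-k gating scheme below are illustrative assumptions, not DeepSeek's actual implementation; the point is simply that each token touches only a few experts' weights.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # toy scale; DeepSeek-V3 uses far more routed experts
TOP_K = 2         # experts activated per token
DIM = 16

# Each "expert" here is just a small feed-forward weight matrix.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))

def moe_forward(token):
    """Route a token to its top-k experts and mix their outputs."""
    scores = token @ router                      # affinity with each expert
    top = np.argsort(scores)[-TOP_K:]            # pick the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                     # normalize the gate weights
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
out = moe_forward(token)
print(out.shape)  # (16,) -- only 2 of the 8 experts did any work
```

Scaled up, this is how a 671B-parameter model can get away with exercising only 37B parameters per token.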
While the basic architecture ensures robust performance for DeepSeek-V3, the company has also debuted two innovations to push performance further.
The first is an auxiliary loss-free load-balancing strategy. This dynamically monitors and adjusts the load on experts to utilize them in a balanced way without compromising overall model performance. The second is multi-token prediction (MTP), which allows the model to predict multiple future tokens simultaneously. This innovation not only enhances training efficiency but also lets the model generate text roughly three times faster, at 60 tokens per second.
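The load-balancing idea can be illustrated with a small simulation. Instead of adding a balancing term to the training loss, a per-expert bias is folded into the routing scores and nudged up or down depending on whether each expert is under- or over-loaded. The constants and update rule below (including the step size `GAMMA`) are assumed for illustration, not taken from DeepSeek's paper:

```python
import numpy as np

rng = np.random.default_rng(1)

NUM_EXPERTS, TOP_K, GAMMA = 8, 2, 0.01      # GAMMA = bias step size (assumed)
bias = np.zeros(NUM_EXPERTS)                # per-expert bias, used only for routing
skew = np.linspace(2.0, -2.0, NUM_EXPERTS)  # make some experts "naturally" popular

def route(scores):
    """Pick each token's top-k experts using biased scores; return load counts."""
    picks = np.argsort(scores + bias, axis=1)[:, -TOP_K:]
    return np.bincount(picks.ravel(), minlength=NUM_EXPERTS)

for _ in range(300):
    scores = rng.standard_normal((64, NUM_EXPERTS)) + skew
    counts = route(scores)
    # Loss-free balancing: push overloaded experts down, underloaded ones up.
    bias -= GAMMA * np.sign(counts - counts.mean())

print(counts)  # loads end up far more even than the raw skew would produce
```

Because the correction lives in the router rather than the loss function, balancing does not pull the model's gradients away from its actual training objective.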
“During pre-training, we trained DeepSeek-V3 on 14.8T high-quality and diverse tokens…Next, we conducted a two-stage context length extension for DeepSeek-V3,” the company wrote in a technical paper detailing the new model. “In the first stage, the maximum context length is extended to 32K, and in the second stage, it is further extended to 128K. Following this, we conducted post-training, including Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) on the base model of DeepSeek-V3, to align it with human preferences and further unlock its potential. During the post-training stage, we distill the reasoning capability from the DeepSeekR1 series of models, and meanwhile carefully maintain the balance between model accuracy and generation length.”
Notably, during the training phase, DeepSeek used multiple hardware and algorithmic optimizations, including the FP8 mixed precision training framework and the DualPipe algorithm for pipeline parallelism, to cut down on the costs of the process.
Overall, DeepSeek claims to have completed DeepSeek-V3’s entire training in about 2,788K H800 GPU hours, or about $5.57 million, assuming a rental price of $2 per GPU hour. That is far below the hundreds of millions of dollars usually spent pre-training large language models.
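The headline figure follows directly from the reported GPU-hour count and the assumed rental rate:

```python
gpu_hours = 2_788_000  # ~2,788K H800 GPU hours reported by DeepSeek
rate = 2.0             # assumed rental price, dollars per GPU hour

cost = gpu_hours * rate
print(cost)  # 5576000.0, i.e. roughly the $5.57 million cited
```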
Llama-3.1, for instance, is estimated to have been trained with an investment of over $500 million.
Strongest open-source model currently available
Despite the economical training, DeepSeek-V3 has emerged as the strongest open-source model in the market.
The company ran multiple benchmarks and found that the model convincingly outperforms leading open models, including Llama-3.1-405B and Qwen 2.5-72B. It even outperforms the closed-source GPT-4o on most benchmarks, except the English-focused SimpleQA and FRAMES, where the OpenAI model led with scores of 38.2 and 80.5 (vs 24.9 and 73.3), respectively.
Notably, DeepSeek-V3’s performance particularly stood out on the Chinese and math-centric benchmarks, scoring better than all counterparts. In the Math-500 test, it scored 90.2, with Qwen’s score of 80 the next best.
The only model that managed to challenge DeepSeek-V3 was Anthropic’s Claude 3.5 Sonnet, which outperformed it with higher scores on MMLU-Pro, IF-Eval, GPQA-Diamond, SWE-bench Verified and Aider-Edit.
The work shows that open-source is closing in on closed-source models, promising nearly equivalent performance across different tasks. The development of such systems is healthy for the industry: it reduces the chance of a single large AI player dominating the market, and it gives enterprises multiple options to choose from and work with while orchestrating their stacks.
Currently, the code for DeepSeek-V3 is available via GitHub under an MIT license, while the model is being provided under the company’s model license. Enterprises can also test out the new model via DeepSeek Chat, a ChatGPT-like platform, and access the API for commercial use. DeepSeek is providing the API at the same price as DeepSeek-V2 until February 8. After that, it will charge $0.27/million input tokens ($0.07/million tokens with cache hits) and $1.10/million output tokens.
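For teams budgeting against those rates, the post-February 8 pricing works out as follows. The workload sizes in this sketch are hypothetical; only the per-million-token rates come from the article:

```python
# DeepSeek-V3 API rates effective after February 8, per the article
INPUT_PER_M = 0.27         # dollars per million input tokens (cache miss)
INPUT_CACHED_PER_M = 0.07  # dollars per million input tokens (cache hit)
OUTPUT_PER_M = 1.10        # dollars per million output tokens

def request_cost(input_tokens, output_tokens, cached=False):
    """Dollar cost of one workload at the listed rates."""
    in_rate = INPUT_CACHED_PER_M if cached else INPUT_PER_M
    return (input_tokens * in_rate + output_tokens * OUTPUT_PER_M) / 1_000_000

# Hypothetical workload: 2M input tokens, 0.5M output tokens
print(request_cost(2_000_000, 500_000))        # ~ $1.09
print(request_cost(2_000_000, 500_000, True))  # ~ $0.69 with cache hits
```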