From e000969882a119a05ad4b9adce0c5f01fc05b924 Mon Sep 17 00:00:00 2001 From: kylekruttschni Date: Sat, 15 Feb 2025 14:18:43 +0000 Subject: [PATCH] Add 'DeepSeek Open-Sources DeepSeek-R1 LLM with Performance Comparable To OpenAI's O1 Model' --- ...R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md | 2 ++ 1 file changed, 2 insertions(+) create mode 100644 DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md new file mode 100644 index 0000000..a0f83dd --- /dev/null +++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md @@ -0,0 +1,2 @@ +
DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve reasoning ability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.<br>
+
DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. The base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models and released several of each. \ No newline at end of file
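
The article mentions GRPO only by name. As a rough illustration of the idea behind it (not DeepSeek's actual implementation), GRPO replaces a learned value-function baseline with a per-prompt group baseline: several completions are sampled for one prompt, and each completion's advantage is its reward normalized against the group's mean and standard deviation. The sketch below assumes a plain reward list; the function name is illustrative.

```python
import numpy as np

def group_relative_advantages(rewards):
    """Illustrative sketch of GRPO's group-relative baseline:
    normalize each sampled completion's reward against the
    mean and std of its own group (all completions for one prompt)."""
    rewards = np.asarray(rewards, dtype=np.float64)
    baseline = rewards.mean()            # group mean replaces a learned critic
    scale = rewards.std() + 1e-8         # avoid division by zero for uniform groups
    return (rewards - baseline) / scale

# Example: rewards for four sampled completions to a single prompt
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
```

Completions scoring above the group mean get positive advantages and are reinforced; below-average ones are discouraged, which is what makes the method usable with simple verifiable rewards on reasoning benchmarks.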