
DeepSeek R1 for Coding on Ollama

DeepSeek R1 is one of the best local models for coding tasks. Its chain-of-thought reasoning means it actually thinks through the problem before writing code — catching logic errors that standard models miss. Here’s how to get the most out of it for software development.

Why DeepSeek R1 Is Strong at Coding

Standard LLMs generate code by pattern-matching from their training data. DeepSeek R1’s reasoning approach is different — it explicitly works through the problem in <think> tags before producing code. This means it:

  • Plans the algorithm before writing it
  • Considers edge cases during reasoning
  • Catches logical errors in its own proposed solution before outputting it
  • Explains its approach naturally as part of the output

On HumanEval (the standard coding benchmark), DeepSeek R1 7B scores around 82% — ahead of Llama 3.1 8B (~72%) and significantly ahead of Mistral 7B (~30%).

Setup

ollama pull deepseek-r1
ollama run deepseek-r1

For serious coding work, the 14B model is worth the extra RAM if your hardware supports it. See which model size to use.

Code Generation

DeepSeek R1 works best when you give it precise, well-scoped prompts:

Write a Python function that:
- Takes a list of dictionaries, each with 'name' and 'score' keys
- Returns the top N items sorted by score (descending)
- Handles ties by sorting alphabetically by name
- Raises ValueError if N is greater than the list length

Include type hints and a docstring.

The model will reason through the sorting logic, edge cases, and error handling before producing the code — resulting in more correct output than you’d typically get from a first attempt with other models.
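For reference, a correct answer to this prompt might look like the following. This is one possible implementation written to match the prompt's requirements, not the model's verbatim output:

```python
from typing import Dict, List


def top_n(items: List[Dict], n: int) -> List[Dict]:
    """Return the top n items sorted by score (descending).

    Ties are broken alphabetically by name. Raises ValueError
    if n is greater than the number of items.
    """
    if n > len(items):
        raise ValueError(f"n ({n}) exceeds list length ({len(items)})")
    # Negate the score so one sort handles descending score
    # and ascending name at the same time.
    return sorted(items, key=lambda d: (-d['score'], d['name']))[:n]
```

Comparing the model's output against a checklist like this (type hints, docstring, tie-breaking, error case) is a quick way to judge whether the reasoning phase actually covered every requirement.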

Debugging

Paste broken code with a clear description of the problem:

This function is supposed to find all prime numbers up to n using the 
Sieve of Eratosthenes, but it's returning incorrect results for n=7.
Find and fix the bug:

def sieve(n):
    primes = [True] * n
    p = 2
    while p * p <= n:
        if primes[p]:
            for i in range(p * p, n, p):
                primes[i] = False
        p += 1
    return [p for p in range(2, n) if primes[p]]

R1's reasoning phase will trace through the logic, identify the off-by-one error (the array and both ranges need to extend to n+1 so that n itself is checked), and explain why before presenting the fix.
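For comparison, here is the corrected version with every bound extended to n + 1 so that n itself is included. The bug only shows up when n is prime: the original allocates n slots and iterates range(2, n), silently dropping n from the result.

```python
def sieve(n):
    """Return all primes up to and including n (Sieve of Eratosthenes)."""
    if n < 2:
        return []
    # n + 1 slots so index n is valid and gets tested.
    primes = [True] * (n + 1)
    p = 2
    while p * p <= n:
        if primes[p]:
            # Mark every multiple of p from p*p up to n inclusive.
            for i in range(p * p, n + 1, p):
                primes[i] = False
        p += 1
    return [p for p in range(2, n + 1) if primes[p]]
```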

Code Review

Review this Python function for:
1. Security issues
2. Performance problems  
3. Edge cases not handled
4. Style and readability

[paste your code here]

Algorithm Design

This is where R1's reasoning shines most. For complex algorithmic problems, it will consider multiple approaches before committing:

I need to find the longest common subsequence of two strings efficiently.
The strings can be up to 10,000 characters long.
Explain the approach and implement it in Python with O(n*m) time complexity.
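A reasonable target answer is the classic dynamic-programming table. The sketch below uses a rolling row to keep memory at O(m) while preserving O(n*m) time; R1's actual output may differ, but it should converge on the same recurrence:

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of a and b.

    prev[j] holds the LCS length of the prefixes a[:i-1] and b[:j];
    a rolling row keeps memory at O(m) instead of O(n*m).
    """
    n, m = len(a), len(b)
    prev = [0] * (m + 1)
    for i in range(1, n + 1):
        curr = [0] * (m + 1)
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                # Characters match: extend the LCS of both prefixes.
                curr[j] = prev[j - 1] + 1
            else:
                # Otherwise take the better of skipping either character.
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[m]
```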

Stripping the Think Tags in Your App

When using DeepSeek R1 via the Python API, you may want to strip the reasoning and show only the final code:

import re
import ollama

def ask_r1(prompt):
    response = ollama.chat(
        model='deepseek-r1',
        messages=[{'role': 'user', 'content': prompt}]
    )
    content = response['message']['content']
    # Remove thinking block
    clean = re.sub(r'<think>.*?</think>', '', content, flags=re.DOTALL).strip()
    return clean

code = ask_r1("Write a Python function to validate an email address using regex.")
print(code)
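You can verify the substitution behaves as expected without a running model by exercising the same regex on a hard-coded sample (the response text here is invented for illustration):

```python
import re


def strip_think(content: str) -> str:
    """Remove a <think>...</think> block from a model response."""
    # re.DOTALL lets .*? match across newlines; the non-greedy
    # quantifier stops at the first closing tag.
    return re.sub(r'<think>.*?</think>', '', content, flags=re.DOTALL).strip()


sample = "<think>\nThe user wants a greeting...\n</think>\n\nprint('hello')"
print(strip_think(sample))  # → print('hello')
```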

Using DeepSeek R1 in VS Code

You can pair DeepSeek R1 with the Continue extension in VS Code for local AI code assistance. See the full guide: How to use Ollama with VS Code.

DeepSeek R1 vs Other Coding Models

Model               HumanEval   Best for
deepseek-r1:7b      ~82%        Reasoning through complex problems
deepseek-coder-v2   ~85%        Pure code generation, faster
qwen2.5-coder:7b    ~88%        Code completion, autocomplete
llama3.1:8b         ~72%        General tasks with some coding

For pure code generation speed, deepseek-coder-v2 or qwen2.5-coder are faster. For problems requiring logical reasoning — algorithms, debugging, architecture decisions — DeepSeek R1 produces better results. See the full best Ollama models for coding comparison.

Getting Started

See the main setup guide: How to run DeepSeek R1 on Ollama.
