DeepSeek R1 is available in six distilled sizes on Ollama — from a compact 1.5B model that runs on almost any machine to a 70B version that approaches the full model’s reasoning quality. Choosing the right size depends on your hardware, how fast you need responses, and what you’re using it for.
## Overview of Available Sizes
| Model | Parameters | Download size | Min RAM/VRAM | Base model |
|---|---|---|---|---|
| deepseek-r1:1.5b | 1.5B | ~1.1GB | 4GB RAM | Qwen 2.5 |
| deepseek-r1:7b | 7B | ~4.7GB | 8GB RAM | Qwen 2.5 |
| deepseek-r1:8b | 8B | ~5.2GB | 8GB RAM | Llama 3.1 |
| deepseek-r1:14b | 14B | ~9GB | 16GB RAM | Qwen 2.5 |
| deepseek-r1:32b | 32B | ~20GB | 24GB RAM/VRAM | Qwen 2.5 |
| deepseek-r1:70b | 70B | ~43GB | 48GB RAM/VRAM | Llama 3.3 |
These are distilled versions of the full 671B DeepSeek R1 model, trained to transfer its reasoning capabilities into smaller architectures.
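The download sizes in the table line up with Ollama's default 4-bit quantisation: roughly 0.6–0.7 GB per billion parameters. A back-of-the-envelope sketch in shell, where the 0.65 GB-per-billion factor is an assumption fitted to the table above rather than an official Ollama figure:

```shell
# Rough download-size estimate at 4-bit quantisation.
# The 0.65 GB-per-billion-parameters factor is fitted to the table
# above; it is an assumption, not an official Ollama constant.
estimate_gb() {
  # $1 = parameter count in billions (integer); prints approximate GB
  echo $(( $1 * 65 / 100 ))
}

estimate_gb 7    # prints 4  (actual download: ~4.7 GB; small models carry more overhead)
estimate_gb 70   # prints 45 (actual download: ~43 GB)
```

Smaller models deviate more from the linear estimate because embedding tables and metadata make up a larger share of the file.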
## Which Size Should You Use?
### deepseek-r1:1.5b — For Low-Spec Hardware
The 1.5B model runs on almost anything — even a machine with 4GB of RAM and no GPU. It’s fast, but the reasoning quality is noticeably weaker than in the larger variants. Good for: quick experiments, testing your Ollama setup, and Raspberry Pi deployments where speed matters more than quality.
```shell
ollama pull deepseek-r1:1.5b
```
### deepseek-r1:7b — The Sweet Spot for Most Users
The 7B is the default when you run `ollama pull deepseek-r1`. It fits in 8GB of RAM or VRAM, runs at a reasonable speed on CPU, and the reasoning quality is genuinely impressive for its size. This is the right starting point for most people.
```shell
ollama pull deepseek-r1:7b
# or simply:
ollama pull deepseek-r1
```
### deepseek-r1:8b — Llama-Based Distil
The 8B version is distilled from Llama 3.1 rather than Qwen 2.5, making it a slightly different flavour. It performs similarly to the 7B but may handle certain instruction styles better due to Llama’s training. Worth trying if you’re already familiar with Llama 3.1’s behaviour.
```shell
ollama pull deepseek-r1:8b
```
### deepseek-r1:14b — Noticeably Stronger Reasoning
The jump from 7B to 14B is where reasoning quality takes a meaningful step up. Multi-step maths problems, complex coding tasks, and logical puzzles all become significantly more reliable. Requires 16GB RAM or a 12GB+ VRAM GPU (e.g. RTX 3080, RTX 4070).
```shell
ollama pull deepseek-r1:14b
```
### deepseek-r1:32b — High-End Consumer GPU Territory
The 32B model approaches the quality of much larger models on reasoning tasks. It needs a 24GB VRAM GPU (RTX 3090, RTX 4090, or similar) to run fully on GPU; on CPU with 32GB+ of RAM it will run, but slowly. Choose it for serious use cases where accuracy matters more than speed.
```shell
ollama pull deepseek-r1:32b
```
### deepseek-r1:70b — Near Full-Model Quality
The 70B distil (based on Llama 3.3) is as close as consumer hardware gets to the full 671B model. You need a workstation-class GPU setup — dual RTX 4090s, an A100, or a machine with 64GB+ of unified memory (Apple M2/M3 Ultra). Inference is slow on CPU but the quality is exceptional.
```shell
ollama pull deepseek-r1:70b
```
## Decision Guide by Hardware
| Your hardware | Recommended model |
|---|---|
| 4-8GB RAM, no GPU | deepseek-r1:1.5b |
| 8-16GB RAM, no GPU | deepseek-r1:7b |
| GPU with 6-8GB VRAM (RTX 3060, 4060) | deepseek-r1:7b |
| GPU with 12-16GB VRAM (RTX 3080, 4070 Ti) | deepseek-r1:14b |
| GPU with 24GB VRAM (RTX 3090, 4090) | deepseek-r1:32b |
| Apple M2/M3 Pro (18-36GB unified) | deepseek-r1:14b or 32b |
| Apple M2/M3 Ultra (64GB+ unified) | deepseek-r1:70b |
| Raspberry Pi 5 (8GB) | deepseek-r1:1.5b |
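The table above can be sketched as a small helper function. This is a rough rule of thumb, not an official recommendation: the thresholds are taken from the table, and GB of VRAM (if you have a GPU) or system RAM (if you don't) are treated interchangeably for sizing purposes:

```shell
# Map available memory in GB (VRAM if running on GPU, otherwise system RAM)
# to a model tag, following the thresholds in the decision table above.
recommend_model() {
  mem_gb=$1
  if   [ "$mem_gb" -ge 48 ]; then echo "deepseek-r1:70b"
  elif [ "$mem_gb" -ge 24 ]; then echo "deepseek-r1:32b"
  elif [ "$mem_gb" -ge 12 ]; then echo "deepseek-r1:14b"
  elif [ "$mem_gb" -ge 8  ]; then echo "deepseek-r1:7b"
  else                            echo "deepseek-r1:1.5b"
  fi
}

recommend_model 8    # prints deepseek-r1:7b
recommend_model 24   # prints deepseek-r1:32b
```

Note that these are minimums for the model weights: longer context windows need additional memory on top, so stepping down a size is sensible if you plan to use large contexts.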
## Does Size Always Mean Better?
For DeepSeek R1 specifically, size matters more than it does for standard chat models because the reasoning capability scales strongly with parameter count. That said, the 7B is genuinely useful for everyday tasks — the jump to 14B is where you’ll notice a real difference for hard problems.
If you’re using DeepSeek R1 primarily for coding or maths, go as large as your hardware comfortably allows. For general chat and summarisation, the 7B is sufficient.
## Checking Available Models
```shell
# List all pulled models
ollama list

# Remove a model to free up disk space
ollama rm deepseek-r1:7b
```
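If you've pulled several variants while experimenting, you can remove them in one go. The snippet below demonstrates the filtering step on a hypothetical sample of `ollama list` output; the column layout is an assumption based on recent Ollama versions, so check your own output first:

```shell
# Hypothetical `ollama list` output; exact columns may differ by version.
sample='NAME              ID          SIZE    MODIFIED
deepseek-r1:7b    a1b2c3      4.7 GB  2 days ago
deepseek-r1:1.5b  d4e5f6      1.1 GB  5 days ago
llama3.1:8b       g7h8i9      4.9 GB  1 week ago'

# Skip the header row and print only the deepseek-r1 tags (first column):
echo "$sample" | awk 'NR > 1 && $1 ~ /^deepseek-r1/ {print $1}'
# prints:
#   deepseek-r1:7b
#   deepseek-r1:1.5b

# Against a live install, pipe the same filter into `ollama rm`:
#   ollama list | awk 'NR > 1 && $1 ~ /^deepseek-r1/ {print $1}' | xargs -n1 ollama rm
```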
Ready to get started? See the full guide to running DeepSeek R1 on Ollama.


