
How to Install Ollama on Linux: Ubuntu, Debian and Fedora Guide


Ollama makes running large language models locally remarkably straightforward, and Linux is its natural home. Whether you’re on an Ubuntu desktop, a headless Debian server, or a Fedora workstation with an NVIDIA or AMD GPU, Ollama installs in seconds and runs as a proper system service. This guide covers every aspect of getting Ollama running on Linux — from the one-line install to GPU acceleration, network exposure, and troubleshooting common issues across Ubuntu, Debian, and Fedora/RHEL systems.

The One-Line Install (Works on Most Linux Distros)

Ollama provides an official install script that handles everything automatically — package installation, user creation, and systemd service setup. On any modern Linux distribution with curl installed, run:

curl -fsSL https://ollama.com/install.sh | sh

The script detects your distribution, downloads the appropriate binaries, installs them to /usr/local/bin, creates a dedicated ollama system user and group, and registers a systemd service. It also performs basic GPU detection — if you have an NVIDIA GPU with compatible drivers already installed, it will configure Ollama to use it automatically.

Once the script completes, verify the installation:

ollama --version

You should see output similar to ollama version 0.x.x. If you see a “command not found” error, log out and back in to pick up the updated PATH.
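If the binary exists but your current shell has not picked it up yet, a session-level workaround (assuming the installer's default location of /usr/local/bin) is:

```shell
# The install script places the binary in /usr/local/bin; add that
# directory to PATH for the current session if it is missing.
export PATH="$PATH:/usr/local/bin"
command -v ollama || echo "ollama not found - check that /usr/local/bin/ollama exists"
```

A fresh login shell makes this permanent, since /usr/local/bin is on the default PATH.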

Running Ollama as a systemd Service

The install script sets up Ollama as a systemd service automatically, but it’s worth understanding how to manage it manually — especially on servers where you need reliable auto-start on boot.

Enable and Start the Service

sudo systemctl enable ollama
sudo systemctl start ollama

Check Service Status

sudo systemctl status ollama

A healthy service shows Active: active (running).

View Logs

journalctl -u ollama

For live log output:

journalctl -u ollama -f

Stop or Restart

sudo systemctl stop ollama
sudo systemctl restart ollama

Ubuntu-Specific Notes (22.04 and 24.04)

The one-line install script works seamlessly on both Ubuntu 22.04 LTS and Ubuntu 24.04 LTS. If curl is missing:

sudo apt update && sudo apt install -y curl

Ubuntu 22.04 users with NVIDIA GPUs should ensure they’re running driver version 525 or later for full CUDA 12 compatibility. Ubuntu’s AppArmor security framework can occasionally interfere with Ollama’s GPU access. If you encounter permission errors, check AppArmor status with sudo aa-status and review any denial messages in journalctl.

Debian-Specific Notes

Ollama installs cleanly on Debian 11 (Bullseye) and Debian 12 (Bookworm). On minimal Debian server installations, install curl first:

sudo apt update && sudo apt install -y curl

Debian’s conservative release cycle means NVIDIA drivers in the official repos may lag behind. For GPU acceleration, use backports:

sudo apt install -t bookworm-backports nvidia-driver
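Note that bookworm-backports is not enabled by default on a stock Debian 12 install. A sketch of enabling it first, following Debian's own backports instructions (non-free and non-free-firmware are the components the NVIDIA packages live in):

```shell
# Add the bookworm-backports repository, including the non-free
# components required by the NVIDIA driver packages.
echo "deb http://deb.debian.org/debian bookworm-backports main contrib non-free non-free-firmware" |
    sudo tee /etc/apt/sources.list.d/bookworm-backports.list
sudo apt update
```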

Fedora and RHEL-Specific Notes

Fedora and RHEL-based distributions (Rocky Linux, AlmaLinux) are fully supported. If curl is missing:

sudo dnf install -y curl

Fedora uses SELinux in enforcing mode by default, which can occasionally block Ollama. If the service fails, check SELinux audit logs:

sudo ausearch -c ollama --raw | audit2why
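If the audit log does show denials, one common remedy is to generate a local policy module with audit2allow. This is a sketch, not a blanket recommendation: review the generated rules before loading them, and the module name ollama_local below is arbitrary.

```shell
# Turn the recorded denials into a local SELinux policy module,
# inspect what it would allow, then load it.
sudo ausearch -c ollama --raw | audit2allow -M ollama_local
cat ollama_local.te          # review the generated rules first
sudo semodule -i ollama_local.pp
```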

NVIDIA drivers on Fedora are best installed via RPM Fusion:

sudo dnf install -y akmod-nvidia xorg-x11-drv-nvidia-cuda

NVIDIA GPU Setup on Linux

Ollama delivers dramatically better performance with NVIDIA GPU acceleration. The key requirement is driver version 525 or later with CUDA 12 support. You do not need to install the CUDA toolkit separately — Ollama includes its own CUDA libraries.

Verify Your NVIDIA Driver

nvidia-smi

If this returns a table showing your GPU and CUDA version 12.x or higher, you’re ready. If not, install drivers via NVIDIA’s CUDA repository. The example below uses the Ubuntu 24.04 repo path; substitute the matching path for Debian, or use the RPM Fusion method above on Fedora:

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install -y cuda-drivers

After a reboot, confirm GPU detection in the Ollama logs:

journalctl -u ollama | grep -i gpu

AMD GPU Setup on Linux (ROCm)

Ollama supports AMD GPUs via ROCm, AMD’s open-source GPU compute platform. RDNA 2 generation or later (RX 6000 series and newer) is recommended. The commands below use AMD’s installer package built for Ubuntu 22.04 (“jammy”); pick the matching build from repo.radeon.com for other releases.

sudo apt update
sudo apt install -y wget gnupg
wget https://repo.radeon.com/amdgpu-install/6.1/ubuntu/jammy/amdgpu-install_6.1.60100-1_all.deb
sudo dpkg -i amdgpu-install_6.1.60100-1_all.deb
sudo amdgpu-install --usecase=rocm

Then add the ollama user to the required groups:

sudo usermod -aG render,video ollama

Reboot, then verify ROCm is working:

rocminfo | grep -i "Marketing Name"

Restart Ollama and it will detect the AMD GPU automatically:

sudo systemctl restart ollama

Exposing Ollama on Your Local Network

By default, Ollama only listens on 127.0.0.1:11434. To make it accessible from other devices on your network, set the OLLAMA_HOST environment variable in the systemd service:

sudo systemctl edit ollama

Add:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"

Then reload and restart:

sudo systemctl daemon-reload
sudo systemctl restart ollama

Test from another device:

curl http://your-server-ip:11434

You should see Ollama is running. Only expose Ollama to trusted networks — it has no built-in authentication.
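If only specific machines should reach it, restrict the port at the firewall rather than relying on the bind address alone. A sketch using ufw on Ubuntu/Debian; the subnet 192.168.1.0/24 is an example, so substitute your own LAN range:

```shell
# Allow port 11434 only from the local subnet, deny it from anywhere else.
sudo ufw allow from 192.168.1.0/24 to any port 11434 proto tcp
sudo ufw deny 11434/tcp
sudo ufw status
```

On Fedora/RHEL, the equivalent is a firewalld rich rule scoped to the same source subnet.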

Running Ollama Headless on a Server

Ollama is designed to run without a desktop environment. The systemd service runs as the ollama system user in the background with no display server required. From an SSH session, pull and run models as normal:

ollama pull llama3.2
ollama run llama3.2

For non-interactive use in scripts:

echo "Summarise the risks of SQL injection" | ollama run llama3.2
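Scripts can also bypass the CLI and talk to Ollama’s HTTP API directly. A minimal sketch: the payload targets the /api/generate endpoint, and setting "stream" to false requests a single JSON response rather than a token stream (the service must already be running):

```shell
#!/bin/sh
# Build the JSON request body for Ollama's /api/generate endpoint.
payload='{"model": "llama3.2", "prompt": "Summarise the risks of SQL injection", "stream": false}'

# Send it to the local service (uncomment once Ollama is up):
# curl -s http://localhost:11434/api/generate -d "$payload"
printf '%s\n' "$payload"
```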

Uninstalling Ollama

sudo systemctl stop ollama
sudo systemctl disable ollama
sudo rm /usr/local/bin/ollama
sudo rm /etc/systemd/system/ollama.service
sudo systemctl daemon-reload
sudo rm -rf /usr/share/ollama

Note that /usr/share/ollama is where the service stores downloaded models, so this last step is what frees the bulk of the disk space.

Optionally remove the system user:

sudo userdel ollama
sudo groupdel ollama

Troubleshooting Common Issues

Permission Errors on Startup

GPU device files should be readable by a group the ollama user belongs to. Check ownership:

ls -la /dev/nvidia*
ls -la /dev/dri/*

Add the ollama user to the appropriate group:

sudo usermod -aG render ollama
sudo systemctl restart ollama

systemd Service Not Starting

sudo systemctl status ollama
journalctl -u ollama -n 100

For port conflicts:

sudo ss -tlnp | grep 11434

GPU Not Detected

First confirm the GPU is visible to the system:

# NVIDIA:
nvidia-smi

# AMD:
rocminfo

Check Ollama’s GPU detection output:

journalctl -u ollama | grep -iE "gpu|cuda|rocm|nvidia|amd"

For NVIDIA, verify the driver module is loaded:

lsmod | grep nvidia

If the module isn’t loaded, try sudo modprobe nvidia or reboot. Re-running the Ollama install script after GPU drivers are confirmed working will re-detect the hardware and update the service configuration.
