
How to Set Up Open WebUI with Ollama (Complete Guide)

If you have been running Ollama to serve large language models locally, you already know how powerful it is — but typing commands into a terminal every time you want a conversation gets old fast. Open WebUI gives you a full ChatGPT-style browser interface that sits on top of your local Ollama installation. You get a polished chat experience, conversation history, model switching, system prompts, and multi-user support — all running entirely on your own hardware, with no data leaving your network. This guide walks you through every step: installing Open WebUI, connecting it to Ollama, and using it effectively across your devices.

What Is Open WebUI?

Open WebUI is an open-source, self-hosted web interface designed to work with locally running language model servers, with Ollama being the primary target. In practical terms it gives you:

  • A ChatGPT-style chat interface in your browser
  • Persistent conversation history stored in a local database
  • The ability to switch between any Ollama model mid-session
  • System prompt configuration per conversation or globally
  • Multi-user accounts so a whole household or small team can share one server
  • Optional OpenAI API compatibility if you also want to connect cloud models

Everything runs locally. No third-party account, no subscription, and no usage caps beyond what your hardware can handle.

System Requirements

  • Ollama installed and running — verify by visiting http://localhost:11434 in a browser
  • At least one model pulled — run ollama pull llama3.2 first
  • Docker Desktop for Method 1 (recommended)
  • Python 3.11+ for Method 2
  • 4 GB RAM minimum for the Open WebUI container itself

Docker is the recommended route — it handles all dependencies automatically, keeps the installation isolated, and makes upgrades straightforward.
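Before installing, a quick pre-flight check from a terminal confirms both prerequisites (this assumes curl and the ollama CLI are on your PATH):

```shell
# The Ollama root endpoint should respond with "Ollama is running"
curl http://localhost:11434

# List locally pulled models; if the list is empty, pull one first
ollama list
ollama pull llama3.2
```

If the curl request fails, start Ollama before continuing — Open WebUI will install fine without it, but the model dropdown will be empty.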

Method 1: Install Open WebUI with Docker (Recommended)

docker run -d \
  -p 3000:80 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

What each flag does:

  • -p 3000:80 — maps host port 3000 to the container's internal port 80, so you access the UI at http://localhost:3000
  • --add-host=host.docker.internal:host-gateway — critical on Linux; lets the container reach Ollama on the host machine
  • -v open-webui:/app/backend/data — persists chat history and settings between restarts
  • --restart always — starts automatically with Docker

Docker will pull the image (around 1–2 GB on first run). Verify it started:

docker ps --filter name=open-webui

Status should show Up. Then open http://localhost:3000 in your browser.

Method 2: Install Open WebUI with Pip

pip install open-webui
open-webui serve

The server listens on port 8080 by default. Open http://localhost:8080. To keep it running after closing the terminal, use screen, tmux, or a systemd service.
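For the systemd option, a minimal unit file might look like the sketch below. The install path and user name are assumptions — check where pip placed the open-webui executable (`which open-webui`) and substitute your own user:

```shell
# Create a systemd unit for Open WebUI (paths and User are placeholders)
sudo tee /etc/systemd/system/open-webui.service > /dev/null <<'EOF'
[Unit]
Description=Open WebUI
After=network.target

[Service]
ExecStart=/usr/local/bin/open-webui serve
Restart=always
User=youruser

[Install]
WantedBy=multi-user.target
EOF

# Register and start the service
sudo systemctl daemon-reload
sudo systemctl enable --now open-webui
```

With `Restart=always`, the server also comes back automatically after a crash or reboot, matching what `--restart always` gives you on the Docker side.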

First-Time Setup: Creating Your Admin Account

When you visit Open WebUI for the first time, the first account created automatically becomes the administrator. Enter your name, email address, and a password, then click Sign Up. You’ll be logged in immediately.

To manage other users, click your profile icon → Admin Panel. From there you can invite additional users, reset passwords, and control model access.

Connecting Open WebUI to Ollama

In most cases Open WebUI detects Ollama automatically. With Docker and the --add-host flag, the internal URL is http://host.docker.internal:11434. To verify or change it:

  1. Click your profile icon → Admin Panel
  2. Select SettingsConnections
  3. Confirm the Ollama API URL and click the refresh button — a green checkmark confirms a successful connection

If Ollama is running on a different machine on your network, replace the host portion of the URL (host.docker.internal or localhost) with that machine's local IP address, keeping port 11434.
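Note that Ollama binds to the loopback interface by default, so a remote machine will refuse connections until you tell it to listen on the network. Ollama's documented OLLAMA_HOST variable controls the bind address (the IP below is a placeholder for your server's address):

```shell
# On the machine running Ollama: listen on all interfaces, not just 127.0.0.1
OLLAMA_HOST=0.0.0.0 ollama serve

# Then in Open WebUI (Admin Panel → Settings → Connections), use that
# machine's LAN address, e.g.:
#   http://192.168.1.50:11434
```

Only do this on a trusted network — binding to 0.0.0.0 makes the Ollama API reachable by any device that can route to the machine.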

Switching Between Models

At the top of the chat window, click the model selector dropdown to see every model currently pulled in Ollama. Select one to start a new conversation. Click the + icon next to the selector to run two models simultaneously and compare their responses side by side.

To pull a new model without leaving the browser: Admin Panel → Settings → Models, then enter the model name (e.g. mistral) in the pull field.

Setting System Prompts

To set a system prompt for an individual conversation:

  1. Start a new chat
  2. Click the Controls button (sliders icon near the message input)
  3. Enter your system prompt in the System Prompt field

For reusable prompts, navigate to Workspace → Prompts to save named prompts you can apply to any new conversation with one click.

Accessing Open WebUI from Other Devices on Your Network

Find the local IP of the machine running Open WebUI:

  • Windows: ipconfig → IPv4 Address
  • macOS: ifconfig | grep "inet "
  • Linux: ip a

Then on any other device on the same network, open http://YOUR-IP:3000 (Docker) or http://YOUR-IP:8080 (pip). Each user can log in with their own account.
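If other devices time out even with the correct address, the host firewall is the usual culprit. Opening the port looks roughly like this (assuming ufw on Linux; the Windows line uses the built-in netsh tool and must run in an elevated prompt):

```shell
# Linux with ufw: allow the Open WebUI port (3000 for Docker, 8080 for pip)
sudo ufw allow 3000/tcp

# Windows equivalent (run as Administrator):
# netsh advfirewall firewall add rule name="Open WebUI" dir=in action=allow protocol=TCP localport=3000
```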

Troubleshooting Common Issues

Docker Cannot Find Ollama

  • Missing --add-host flag (Linux): Remove the existing container with docker rm -f open-webui and re-run the full command with the flag included.
  • Ollama not running: Run ollama list to verify. If that fails, start Ollama with ollama serve.

Port Conflict on 3000

Change the host-side port in the -p flag — for example -p 3001:80 — then access Open WebUI at http://localhost:3001.
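Concretely, that means removing the existing container and re-running the original Docker command with only the port flag changed:

```shell
# Remove the old container (the named volume and your data survive this)
docker rm -f open-webui

# Same command as before, but host port 3001 instead of 3000
docker run -d \
  -p 3001:80 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```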

Models Not Showing Up in the Dropdown

  1. Confirm Ollama is running: open http://localhost:11434 in a browser
  2. Confirm at least one model is pulled: run ollama list
  3. Check the connection URL in Admin Panel → Settings → Connections and click the test button
  4. Check container logs: docker logs open-webui
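The checks above can be run in one pass from a terminal (assuming curl, the ollama CLI, and docker are all on your PATH):

```shell
# 1. Is the Ollama API answering?
curl -s http://localhost:11434 || echo "Ollama is not reachable on port 11434"

# 2. Is at least one model pulled?
ollama list || echo "ollama CLI failed - is Ollama installed and running?"

# 3. Anything suspicious in the container logs?
docker logs --tail 50 open-webui
```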

Keeping Open WebUI Up to Date

To update the Docker installation to the latest version:

docker pull ghcr.io/open-webui/open-webui:main
docker rm -f open-webui

Then re-run the original docker run command. Your conversations and settings stored in the named volume are preserved. For pip:

pip install --upgrade open-webui
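Put together, the full Docker update can be scripted as a single block, reusing the exact run command from the installation step so your flags stay consistent between upgrades:

```shell
# Fetch the latest image, replace the container; the named volume keeps
# chat history and settings across the swap
docker pull ghcr.io/open-webui/open-webui:main
docker rm -f open-webui
docker run -d \
  -p 3000:80 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```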

Summary

Open WebUI transforms your local Ollama setup from a command-line tool into a proper application that anyone can use. The Docker installation is the most reliable route: one command to install, automatic restarts, and straightforward updates. Once running, the interface is immediately familiar to anyone who has used ChatGPT — but with the key difference that everything stays on your hardware.
