Open WebUI is a powerful, self-hosted web interface that gives you a clean, ChatGPT-style browser experience for your local Ollama models. Instead of typing commands into a terminal, you get a full chat interface with conversation history, document uploads, multi-user support, and much more — all running entirely on your own machine.
This guide walks you through installing Open WebUI with Ollama from scratch, covering Docker and Python installation methods, first-time configuration, and the most useful features available once you’re up and running.
What Is Open WebUI?
Open WebUI (formerly known as Ollama WebUI) is an open-source project that provides a feature-rich front end for local language models. It connects to your running Ollama instance and lets you interact with any model you’ve pulled — without ever leaving your browser.
Key highlights include:
- Browser-based chat interface with conversation history
- Support for multiple users with separate accounts
- Document upload and retrieval-augmented generation (RAG)
- Custom system prompts and prompt templates
- Model switching within the same chat interface
- API key management for connecting to external providers
If you haven’t yet installed Ollama itself, start with our introduction to Ollama before continuing.
Prerequisites
Before installing Open WebUI, make sure you have:
- Ollama installed and running — Ollama should be active and accessible on your machine. By default it listens on http://localhost:11434.
- Docker Desktop installed — the easiest way to run Open WebUI is via Docker. Download Docker Desktop from docker.com and make sure it is running.
- At least one model pulled in Ollama (e.g. ollama pull llama3.2).
If you prefer not to use Docker, there is a Python/pip method covered later in this guide. For those who prefer containers, see our guide on how to run Ollama in Docker.
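Before going further, it can save time to confirm both prerequisites are actually met. The short Python sketch below (an illustrative check, not part of Open WebUI — adjust the URL if you've changed Ollama's default port) verifies that a docker executable is on your PATH and that the Ollama API answers on port 11434:

```python
import shutil
import urllib.error
import urllib.request


def docker_available() -> bool:
    """True if a `docker` executable is on the PATH."""
    return shutil.which("docker") is not None


def ollama_reachable(base_url: str = "http://localhost:11434",
                     timeout: float = 3.0) -> bool:
    """True if the Ollama API answers at base_url (default port 11434)."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    print("Docker on PATH:", docker_available())
    print("Ollama reachable:", ollama_reachable())
```

If the second check prints False, start Ollama (or run `ollama serve`) before continuing.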
Installing Open WebUI with Docker
The official recommended installation method is a single Docker command. Open a terminal and run:
```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
Here is what each flag does:
- -d — runs the container in detached mode (in the background)
- -p 3000:8080 — maps port 3000 on your machine to port 8080 inside the container. You'll access the UI at http://localhost:3000
- --add-host=host.docker.internal:host-gateway — allows the container to reach services running on your host machine, including Ollama at localhost:11434
- -v open-webui:/app/backend/data — creates a named Docker volume to persist your data (conversations, users, uploaded documents)
- --name open-webui — gives the container a memorable name
- --restart always — restarts the container automatically if your machine reboots
Docker will pull the image (around 1–2 GB on first run) and start the container. Once it’s ready, open your browser and go to http://localhost:3000.
First-Time Setup: Creating Your Admin Account
The first person to visit the Open WebUI URL will be prompted to create an admin account. Fill in a name, email address, and password — this account will have full administrative rights over the installation.
Once logged in, you’ll land on the main chat interface. In the top-left dropdown you should see your available Ollama models listed automatically.
Connecting Open WebUI to Ollama (If Not Auto-Detected)
In most cases Open WebUI will detect Ollama automatically via the host.docker.internal hostname added during installation. If you don’t see any models, you may need to configure the connection manually.
- Click the user icon in the top-right corner and go to Admin Panel.
- Navigate to Settings → Connections.
- Under Ollama API, enter: http://host.docker.internal:11434
- Click the refresh/test button. You should see a green confirmation that the connection is successful.
- Return to the chat screen — your models should now appear.
If Ollama is running on a different machine on your network, replace host.docker.internal with that machine’s IP address, for example http://192.168.1.50:11434.
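To rule out Open WebUI entirely while debugging, you can query Ollama's model list yourself. The sketch below hits Ollama's documented /api/tags endpoint (the same kind of request Open WebUI makes to populate its model dropdown); the sample payload shows the response shape, trimmed to the fields used here:

```python
import json
import urllib.request


def model_names(tags_payload: dict) -> list:
    """Extract model names from an Ollama /api/tags response payload."""
    return [m["name"] for m in tags_payload.get("models", [])]


def list_ollama_models(base_url: str = "http://localhost:11434") -> list:
    """Fetch the names of every model the Ollama server has pulled."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        return model_names(json.load(resp))


# Example /api/tags response shape (trimmed to the relevant field):
sample = {"models": [{"name": "llama3.2:latest"}, {"name": "mistral:7b"}]}
print(model_names(sample))  # → ['llama3.2:latest', 'mistral:7b']
```

If `list_ollama_models()` succeeds from your host but Open WebUI still shows nothing, the problem is the container-to-host hostname, not Ollama itself.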
Python/Pip Install Alternative (No Docker)
If you’d rather not use Docker, Open WebUI can be installed directly with pip. You’ll need Python 3.11 or later.
```bash
pip install open-webui
open-webui serve
```
This starts Open WebUI on port 8080. Visit http://localhost:8080 in your browser. The pip method is a good choice on machines where Docker isn’t available, though the Docker method is generally more reliable for keeping the app updated and isolated.
Key Features Worth Using
Document Upload and RAG
Open WebUI supports retrieval-augmented generation, meaning you can upload PDFs, text files, and documents and ask questions about their content. Click the paperclip icon in the chat input to attach a file. The document is indexed and relevant sections are automatically included in the context when you ask questions about it.
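Conceptually, the retrieval step works like the deliberately simplified sketch below: split the document into chunks, score each chunk against the question, and prepend the best match to the prompt. (This uses naive word-overlap scoring purely for illustration; Open WebUI's real pipeline uses embeddings and a vector index.)

```python
def split_into_chunks(text: str, size: int = 40) -> list:
    """Naively split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def score(chunk: str, question: str) -> int:
    """Count how many question words also appear in the chunk."""
    chunk_words = set(chunk.lower().split())
    return sum(1 for w in question.lower().split() if w in chunk_words)


def build_prompt(document: str, question: str, top_k: int = 1) -> str:
    """Prepend the most relevant chunk(s) to the question, RAG-style."""
    chunks = split_into_chunks(document)
    best = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]
    return "Context:\n" + "\n".join(best) + f"\n\nQuestion: {question}"


doc = ("The warranty covers parts for two years. "
       "Shipping takes five days. Returns need a receipt.")
print(build_prompt(doc, "how long is the warranty"))
```

The model then answers from the injected context rather than from its training data alone, which is why RAG answers can cite details specific to your uploaded files.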
Model Management
From the admin panel you can pull new Ollama models directly from the web interface without touching the command line. Go to Admin Panel → Models and search for a model name to download it.
Multiple Users
Open WebUI supports multiple accounts with separate conversation histories. As an admin you can invite other users, assign roles (admin or user), and manage access. This makes it ideal for small teams or households where multiple people want to use local AI.
System Prompts and Personas
You can create custom system prompts under Workspace → Prompts. These let you define the behaviour and persona of the model for specific use cases — for example, a formal report writer or a casual coding assistant.
Model Parameters
In the chat sidebar you can adjust model parameters such as temperature, context window size, and system message on a per-conversation basis. This is useful when you want more creative or more deterministic responses without editing a config file.
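These same knobs exist at the API level: Ollama's /api/generate endpoint accepts an options object with fields such as temperature and num_ctx, plus a top-level system message. The sketch below builds such a request (the model name llama3.2 is just an example — use any model you've pulled); the network call is left commented out since it needs a running Ollama:

```python
import json
import urllib.request


def build_generate_request(model: str, prompt: str, temperature: float = 0.8,
                           num_ctx: int = 2048, system: str = "") -> dict:
    """Build an Ollama /api/generate payload with per-request options."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"temperature": temperature, "num_ctx": num_ctx},
    }
    if system:
        payload["system"] = system
    return payload


# Temperature 0 for deterministic output; 4096-token context window.
req = build_generate_request("llama3.2", "Summarise RAG in one sentence.",
                             temperature=0.0, num_ctx=4096)

# Uncomment to send (requires Ollama running with the model pulled):
# data = json.dumps(req).encode()
# request = urllib.request.Request("http://localhost:11434/api/generate",
#                                  data=data,
#                                  headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp)["response"])
print(req["options"])
```

Changing the sidebar sliders in Open WebUI amounts to changing these per-request options, which is why the settings apply per conversation rather than globally.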
Accessing Open WebUI From Other Devices
To access Open WebUI from other devices on your local network (a phone or another computer), find your machine’s local IP address:
- Windows: run ipconfig in a command prompt
- Mac/Linux: run ifconfig or check System Settings → Network
Then visit http://YOUR_IP:3000 from any device on the same network. If you want to expose it over the internet you should add authentication and ideally put it behind a reverse proxy with HTTPS — this is beyond the scope of this guide but is well documented in the Open WebUI documentation.
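If you'd rather script the lookup, the Python sketch below uses the common UDP "connect" trick: connecting a datagram socket sends no packets, but it makes the OS pick the outbound interface, whose address can then be read back. It falls back to loopback when no route exists:

```python
import socket


def local_ip() -> str:
    """Best-effort local LAN IP; falls back to loopback if undetermined."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # No packet is sent; this only selects the outbound interface.
        # 192.0.2.1 is a TEST-NET address and is never actually contacted.
        s.connect(("192.0.2.1", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"
    finally:
        s.close()


print(f"Open WebUI should be reachable at http://{local_ip()}:3000")
```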
Updating Open WebUI
To update to the latest version, pull the new image and recreate the container:
```bash
docker pull ghcr.io/open-webui/open-webui:main
docker stop open-webui
docker rm open-webui
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
Your data is stored in the named volume open-webui and will survive the container being removed and recreated. Conversations, user accounts, and uploaded documents are all preserved.
Troubleshooting Common Issues
Port 3000 Already in Use
If you get an error saying port 3000 is already bound, change the left side of the port mapping to a free port, for example -p 3001:8080. Then access Open WebUI at http://localhost:3001.
No Models Showing in the Interface
This usually means Open WebUI cannot reach Ollama. Check that Ollama is running (ollama list should return results), and verify the connection URL in Admin Panel → Settings → Connections. On some systems you may need to set OLLAMA_HOST=0.0.0.0 as an environment variable before starting Ollama so it binds to all interfaces rather than just localhost.
Slow Responses
Open WebUI itself adds very little overhead — slow responses are almost always due to the model size versus available hardware. Try a smaller model variant (e.g. a 3B or 7B model instead of 13B), or ensure you have enough free RAM. If you have a GPU, verify Ollama is using it by checking ollama ps while a generation is running.
Container Keeps Restarting
Check the logs with docker logs open-webui. Common causes include insufficient disk space for the data volume or a version mismatch between the image and your Docker installation.
Summary
Open WebUI transforms your local Ollama setup from a command-line tool into a fully featured chat application. The Docker install takes less than five minutes, and you immediately get conversation history, document RAG, model management, and multi-user support — all completely offline and private.
For a deeper understanding of how Ollama works under the hood, take a look at our Ollama beginner’s guide.