
How to Access Ollama Over a Network and Remotely


Why Run Ollama Over a Network?

By default, Ollama listens only on localhost (127.0.0.1) on port 11434, so connections from other machines are refused. Enabling network access lets you:

  • Run Ollama on a powerful desktop or server and use it from a laptop
  • Share a single GPU across multiple machines on your LAN
  • Access your local models from a phone or tablet via a web UI
  • Connect VS Code’s Continue extension to a remote Ollama instance
  • Run Ollama in a homelab and query it from anywhere via Tailscale or a VPN

Step 1: Bind Ollama to All Interfaces

To accept connections from other machines, set the OLLAMA_HOST environment variable before starting Ollama:

OLLAMA_HOST=0.0.0.0 ollama serve

This binds Ollama to all network interfaces on port 11434. Any machine that can reach your server’s IP address on that port can now make API requests.
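OLLAMA_HOST also accepts a host:port pair if you want a non-default port. As a quick illustration of the two forms, here is a small POSIX-shell helper (ollama_url is a name invented for this example) that expands either form into the base URL a client would call:

```shell
# ollama_url: hypothetical helper expanding an OLLAMA_HOST-style value
# into the base URL a client would use. The port defaults to 11434.
ollama_url() {
  case "$1" in
    *:*) echo "http://$1" ;;         # explicit port, e.g. 0.0.0.0:8080
    *)   echo "http://$1:11434" ;;   # bare address: default port
  esac
}

ollama_url 0.0.0.0            # http://0.0.0.0:11434
ollama_url 192.168.1.50:8080  # http://192.168.1.50:8080
```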

Make it permanent on Linux (systemd)

If Ollama runs as a systemd service, add the environment variable to its service override:

sudo systemctl edit ollama

Add the following in the editor that opens:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"

Save, then reload:

sudo systemctl daemon-reload
sudo systemctl restart ollama
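To confirm the override took effect, check what address the server is actually listening on. The helper below (check_bind, invented for this sketch) classifies `ss -tln` output; it assumes a Linux host with ss available:

```shell
# check_bind: hypothetical helper that reads `ss -tln` output on stdin
# and reports how (or whether) port 11434 is bound.
check_bind() {
  input=$(cat)
  if printf '%s\n' "$input" | grep -q '0\.0\.0\.0:11434'; then
    echo "listening on all interfaces"
  elif printf '%s\n' "$input" | grep -q '127\.0\.0\.1:11434'; then
    echo "still bound to localhost only"
  else
    echo "no listener on port 11434"
  fi
}

ss -tln | check_bind
```

If it still reports localhost only, double-check the override file and restart the service again.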

Make it permanent on Windows

Set a system environment variable:

  1. Open System Properties → Environment Variables
  2. Under System Variables, click New
  3. Variable name: OLLAMA_HOST, value: 0.0.0.0
  4. Restart the Ollama application

Make it permanent on Mac

If you run the Ollama menu-bar app, quit it, set the variable with launchctl, then relaunch the app:

launchctl setenv OLLAMA_HOST "0.0.0.0"

Alternatively, start the server from a terminal with the variable set:

OLLAMA_HOST=0.0.0.0 ollama serve

Note that launchctl setenv does not survive a reboot. For a fully permanent setup, create a launchd plist at ~/Library/LaunchAgents/ollama.plist that sets OLLAMA_HOST in its EnvironmentVariables dictionary.

Step 2: Open the Firewall

Many systems run a firewall that blocks unsolicited inbound connections, so you need to allow port 11434 explicitly.

Linux (ufw)

# Allow from your LAN only (recommended)
sudo ufw allow from 192.168.1.0/24 to any port 11434

# Or allow from everywhere (not recommended for production)
sudo ufw allow 11434/tcp

Linux (firewalld)

sudo firewall-cmd --add-port=11434/tcp --permanent
sudo firewall-cmd --reload

Windows Defender Firewall

Open Windows Defender Firewall → Advanced Settings → Inbound Rules → New Rule. Create a rule for TCP port 11434, allow the connection, and apply it to the Private network profile.

Step 3: Test the Connection

From another machine on the same network, replace 192.168.1.50 with your Ollama server’s IP address:

curl http://192.168.1.50:11434/api/tags

You should see a JSON response listing your pulled models. If you get a connection refused or timeout, check the firewall rules and that Ollama is bound to the right interface.
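When the test fails, curl's exit status narrows down the cause. Here is a small sketch (the diagnose function is invented for this example; the exit codes 7 and 28 are curl's documented ones for connection failure and timeout):

```shell
# diagnose: hypothetical helper mapping curl's exit status to a likely cause.
diagnose() {
  case "$1" in
    0)  echo "success" ;;
    7)  echo "connection refused: check OLLAMA_HOST binding and the IP" ;;
    28) echo "timed out: a firewall is likely dropping traffic on 11434" ;;
    *)  echo "curl failed with exit code $1" ;;
  esac
}

curl -s --max-time 5 http://192.168.1.50:11434/api/tags > /dev/null
diagnose $?
```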

Accessing Ollama Remotely: Tailscale

For access outside your home network (from a coffee shop, office, or phone), exposing port 11434 directly to the internet is not recommended, since Ollama has no built-in authentication. Use a VPN instead.

Tailscale is the easiest option: it creates a private encrypted mesh network across all your devices, no port forwarding required.

  1. Install Tailscale on both the Ollama server and your client machine: tailscale.com/download
  2. Run tailscale up on both devices and log in to the same Tailscale account
  3. Find the Tailscale IP of your Ollama server: tailscale ip -4
  4. Set OLLAMA_HOST=0.0.0.0 on the server (so it accepts connections on the Tailscale interface)
  5. Connect from the client using the Tailscale IP: curl http://100.x.y.z:11434/api/tags

Traffic is end-to-end encrypted and you don’t expose anything to the public internet.

Accessing Ollama Remotely: SSH Tunnel

If you have SSH access to the server, a tunnel is a simple, secure alternative that needs no extra software:

ssh -L 11434:localhost:11434 user@your-server-ip

While this tunnel is open, requests to localhost:11434 on your local machine are forwarded to the server. This requires no firewall changes and no OLLAMA_HOST change — Ollama can remain bound to localhost on the server.

Adding Basic Authentication with Nginx

If you want to expose Ollama to the internet (not recommended without auth), put it behind an Nginx reverse proxy with HTTP basic authentication:

# Create a password file (htpasswd is provided by the apache2-utils
# or httpd-tools package)
sudo htpasswd -c /etc/nginx/.htpasswd your-username

Then add a server block, for example in /etc/nginx/conf.d/ollama.conf:
server {
    listen 443 ssl;
    server_name ollama.yourdomain.com;

    ssl_certificate     /etc/ssl/certs/your-cert.pem;
    ssl_certificate_key /etc/ssl/private/your-key.pem;

    auth_basic "Ollama";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:11434;
        proxy_set_header Host $host;
    }
}

Configure Ollama to only listen on localhost (the default) and let Nginx handle the public-facing side.
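One caveat: Ollama streams generated tokens, and nginx buffers proxied responses by default, which can make streamed output arrive in a single burst at the end. A hedged addition to the location block (the timeout value is illustrative, not a recommendation from Ollama):

```nginx
location / {
    proxy_pass http://127.0.0.1:11434;
    proxy_set_header Host $host;
    proxy_buffering off;       # deliver streamed tokens immediately
    proxy_read_timeout 300s;   # allow long generations to finish
}
```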

CORS: Allowing Browser-Based Apps to Connect

If you’re building a web app that calls Ollama’s API from a browser, you may hit CORS errors. Ollama supports configuring allowed origins:

OLLAMA_ORIGINS=https://myapp.example.com ollama serve

Or to allow all origins during development (not for production):

OLLAMA_ORIGINS=* ollama serve
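OLLAMA_ORIGINS also takes a comma-separated list, so you can allow several origins at once. As a rough sketch of how such an allowlist check behaves (origin_allowed is invented here to illustrate the idea; Ollama's actual matching logic may differ):

```shell
# origin_allowed: hypothetical sketch of checking a browser's Origin
# header against a comma-separated allowlist ("*" allows everything).
# This illustrates the idea only, not Ollama's real implementation.
origin_allowed() {
  allowlist="$1"; origin="$2"
  case ",${allowlist}," in
    *",*,"*)         echo allowed ;;  # wildcard entry present
    *",${origin},"*) echo allowed ;;  # exact match in the list
    *)               echo blocked ;;
  esac
}

origin_allowed "https://myapp.example.com,https://staging.example.com" \
               "https://staging.example.com"                           # allowed
origin_allowed "https://myapp.example.com" "https://evil.example.com"  # blocked
```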

Connecting Open WebUI to a Remote Ollama

If you’re running Open WebUI as a Docker container, point it at your remote Ollama server:

docker run -d \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://192.168.1.50:11434 \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

Security Checklist

  • Never expose port 11434 directly to the public internet without authentication
  • Use Tailscale or an SSH tunnel for remote access
  • If using a reverse proxy, require HTTPS and authentication
  • Restrict firewall rules to your LAN subnet for local access
  • Ollama has no built-in rate limiting — an open instance can be abused to run inference at your cost
