Installing Ollama with Open WebUI on Linux Using Docker Compose

In this guide, we will walk through installing Ollama with Open WebUI on both Debian and Arch Linux. We will use Docker Compose rather than individual docker run commands to make the setup easier to reproduce and maintain.


Arch Linux Setup

1. Update and Install Dependencies:

Begin by updating your system and installing the required dependencies. Open your terminal and run the following commands:

sudo pacman -Syu
sudo pacman -S nvidia nvidia-utils nvidia-container-toolkit docker docker-compose
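On Arch, the Docker daemon is not started automatically, and the NVIDIA runtime must be registered with Docker before containers can see the GPU. A minimal sketch, assuming systemd and the stock linux kernel (pick nvidia-dkms instead of nvidia if you run a different kernel):

sudo systemctl enable --now docker
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

The nvidia-ctk command writes the NVIDIA runtime entry into /etc/docker/daemon.json, which the GPU reservation in the Compose file below relies on.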

2. Verify NVIDIA Installation:

Ensure that your NVIDIA drivers and utilities are properly installed:

nvidia-smi
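If nvidia-smi reports your GPU on the host, it is worth confirming that containers can see it too. A quick check using the stock ubuntu image (the container toolkit injects nvidia-smi into the container at runtime):

docker run --rm --gpus all ubuntu nvidia-smi

If this prints the same GPU table as on the host, Docker's GPU passthrough is working.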

3. Install Ollama:

To install Ollama, use the following script from Ollama’s official site:

curl -fsSL https://ollama.com/install.sh | sh
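Note that the install script also sets up a native ollama systemd service listening on port 11434, which will clash with the containerized Ollama defined later in this guide. If you only want the CLI, verify it and then stop the service:

ollama --version
sudo systemctl disable --now ollama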

4. Install Additional Dependencies (Optional):

You might need additional dependencies, so run:

sudo pacman -S <package-name>

Debian Setup

1. Update and Install Dependencies:

Update your system and install the required packages. Note that Debian's NVIDIA driver packages live in the non-free repository components, so make sure those are enabled in your APT sources:

sudo apt update && sudo apt upgrade -y
sudo apt install -y docker.io docker-compose nvidia-driver
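The nvidia-container-toolkit package is not in Debian's own repositories, so add NVIDIA's apt repository first. A sketch following NVIDIA's documented setup, then registering the runtime with Docker:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update && sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker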

2. Verify NVIDIA Installation:

Check if your NVIDIA drivers are working correctly by running:

nvidia-smi

3. Install Ollama:

Download and install Ollama using the following script:

curl -fsSL https://ollama.com/install.sh | sh
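As on Arch, the script starts a native ollama service on port 11434; stop it before bringing up the containerized Ollama so the port mapping does not conflict:

sudo systemctl disable --now ollama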

4. Install Additional Dependencies (Optional):

You might need additional packages depending on your setup:

sudo apt install <package-name>

Docker Compose Setup

Now that the dependencies are set up on both Debian and Arch Linux, we will configure Docker Compose for running Open WebUI and Ollama.

Create a docker-compose.yml file with the following content:

services:
  open-webui:
    container_name: open-webui
    image: ghcr.io/open-webui/open-webui:latest
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data
    ports:
      - "3000:8080"
    depends_on:
      - ollama
    extra_hosts:
      - "host.docker.internal:host-gateway"
    networks:
      - ollama-net
    restart: unless-stopped

  ollama:
    container_name: ollama
    image: ollama/ollama:latest
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [gpu]
              count: all
    volumes:
      - ollama:/root/.ollama
    ports:
      - "11434:11434"
    networks:
      - ollama-net
    restart: unless-stopped

  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_POLL_INTERVAL=86400
    networks:
      - ollama-net
    restart: unless-stopped

volumes:
  ollama:
  open-webui:

networks:
  ollama-net:
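Before starting anything, you can ask Compose to validate the file. With the Compose v2 plugin:

docker compose config --quiet

No output and a zero exit code mean the YAML parses and the service definitions are consistent.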

Explanation of the docker-compose.yml:

  • open-webui: This is the main service running the Open WebUI.
    • It uses the ghcr.io/open-webui/open-webui:latest image.
    • Port 3000 on the host is mapped to 8080 inside the container, making the web UI reachable at http://localhost:3000.
    • The OLLAMA_BASE_URL environment variable points the UI at the ollama service over the shared ollama-net network.
    • The open-webui volume is mounted at /app/backend/data so chats and settings persist across container restarts.
  • ollama: This service runs the Ollama API server that the UI talks to.
    • The NVIDIA_VISIBLE_DEVICES environment variable and the deploy device reservation give the container access to the GPU.
    • Models and state are stored in the ollama volume mounted at /root/.ollama.
  • watchtower: This service monitors the Docker containers and automatically updates them when a new version of an image is available.
    • The WATCHTOWER_CLEANUP variable ensures old images are cleaned up.
    • The WATCHTOWER_POLL_INTERVAL defines how often Watchtower checks for updates (86400 seconds = 1 day).
  • Volumes: Data for Ollama and Open WebUI is persisted using the Docker volumes ollama and open-webui.

Running the Docker Compose Setup

Once you’ve created the docker-compose.yml file, navigate to its directory and run:

docker-compose up -d

This will start the Open WebUI, Ollama, and Watchtower services in detached mode.
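If you use the Compose v2 plugin rather than the standalone binary, the equivalent command is docker compose up -d. Once the containers are running, a few sanity checks are worthwhile (llama3.2 below is only an example; any model from the Ollama library works):

docker compose ps
docker exec -it ollama nvidia-smi
curl http://localhost:11434/api/tags
docker exec -it ollama ollama pull llama3.2

After pulling a model, the web interface at http://localhost:3000 should list it and be ready to chat.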


Conclusion

By following these steps, you will have Ollama and Open WebUI running under Docker Compose on either Arch Linux or Debian, with Watchtower keeping the images up to date and NVIDIA GPU acceleration available to your models.

This post is licensed under CC BY 4.0 by the author.