Ollama, Docker, OpenWebUI = Private AI
This guide will help you set up a fully self-contained Ollama environment using Docker and OpenWebUI. You’ll get a private AI assistant running locally on your computer that can be optionally accessed from other devices on your network.
What You’ll Get
- Ollama: An easy way to run open-source AI models locally
- A small but capable AI model in the 7B–8B parameter range (Llama 3 8B by default)
- OpenWebUI: A user-friendly interface for interacting with your AI
- Everything running privately on your computer
- Optional access from other devices on your network
Prerequisites
- A computer with Windows, macOS, or Linux
- Administrative access to install software
- At least 8GB of RAM (16GB recommended)
- At least 20GB of free disk space
Always review code before running it on your computer.
Step 1: Install Docker
Our setup script will check if Docker is installed and help you install it if needed.
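If you'd like to check manually first, these two commands print version information and fail if Docker or Docker Compose is missing:

docker --version
docker compose version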
Step 2: Download the Setup Script
Create a new file named setup-ollama.sh (for macOS/Linux) or setup-ollama.bat (for Windows) and copy the code below:
For macOS/Linux Users:
#!/bin/bash
# Define colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color
# Welcome message
clear
echo -e "${GREEN}================================================${NC}"
echo -e "${GREEN} Ollama Docker Setup Assistant${NC}"
echo -e "${GREEN}================================================${NC}"
echo ""
echo -e "This script will help you set up a self-contained Ollama environment with OpenWebUI."
echo ""
# Check if Docker is installed
echo -e "${YELLOW}Checking if Docker is installed...${NC}"
if command -v docker &> /dev/null; then
    echo -e "${GREEN}Docker is installed!${NC}"
else
    echo -e "${RED}Docker is not installed.${NC}"
    echo "You need to install Docker first."
    case "$(uname -s)" in
        Darwin)
            echo "For macOS, visit: https://docs.docker.com/desktop/install/mac/"
            echo "Would you like to open this URL? (y/n)"
            read open_url
            if [[ "$open_url" == "y" ]]; then
                open "https://docs.docker.com/desktop/install/mac/"
            fi
            ;;
        Linux)
            echo "For Linux, you can install Docker using:"
            echo "curl -fsSL https://get.docker.com -o get-docker.sh"
            echo "sudo sh get-docker.sh"
            echo ""
            echo "Would you like to try installing Docker now? (y/n)"
            read install_docker
            if [[ "$install_docker" == "y" ]]; then
                curl -fsSL https://get.docker.com -o get-docker.sh
                sudo sh get-docker.sh
                echo "Docker installation attempted. Please restart this script after Docker is installed."
                exit 0
            fi
            ;;
    esac
    echo "Please install Docker and run this script again."
    exit 1
fi
# Check if Docker Compose is installed
echo -e "${YELLOW}Checking if Docker Compose is installed...${NC}"
if docker compose version &> /dev/null; then
    echo -e "${GREEN}Docker Compose is installed!${NC}"
else
    echo -e "${RED}Docker Compose is not installed.${NC}"
    echo "Please make sure you have Docker Compose installed."
    exit 1
fi
# Create directory structure
echo -e "${YELLOW}Setting up directory structure...${NC}"
# Ask for base directory
echo "Where would you like to store your Ollama files?"
echo "Press Enter to use the current directory or specify a path:"
read base_dir
if [ -z "$base_dir" ]; then
    base_dir="$(pwd)/ollama-setup"
fi
# Create directory if it doesn't exist
mkdir -p "$base_dir"
echo -e "${GREEN}Using directory: $base_dir${NC}"
# Create subdirectories
mkdir -p "$base_dir/ollama-data"
mkdir -p "$base_dir/openwebui-data"
# Create docker-compose.yml file
echo -e "${YELLOW}Creating Docker Compose configuration...${NC}"
# Ask for model choice
echo "Which model would you like to use? (Choose a number)"
echo "1. Llama 3 8B (Recommended, balanced performance and quality)"
echo "2. Gemma 7B (Google's lightweight model)"
echo "3. Phi-3 Mini (Microsoft's efficient model)"
echo "4. Custom (specify model name)"
read model_choice
case $model_choice in
    1)
        model_name="llama3:8b"
        ;;
    2)
        model_name="gemma:7b"
        ;;
    3)
        model_name="phi3:mini"
        ;;
    4)
        echo "Enter the model name (e.g., mistral:7b, orca-mini:3b):"
        read custom_model
        model_name=$custom_model
        ;;
    *)
        model_name="llama3:8b"
        ;;
esac
# Ask if they want network access
echo "Would you like to make your Ollama setup accessible from other devices on your network? (y/n)"
read network_access
if [[ "$network_access" == "y" ]]; then
openwebui_port="3000:8080"
ollama_port="11434:11434"
else
openwebui_port="127.0.0.1:3000:8080"
ollama_port="127.0.0.1:11434:11434"
fi
# Create docker-compose.yml
cat > "$base_dir/docker-compose.yml" << EOF
version: '3'
services:
ollama:
image: ollama/ollama:latest
container_name: ollama
volumes:
- ./ollama-data:/root/.ollama
ports:
- $ollama_port
restart: unless-stopped
openwebui:
image: ghcr.io/open-webui/open-webui:main
container_name: openwebui
volumes:
- ./openwebui-data:/app/backend/data
ports:
- $openwebui_port
environment:
- OLLAMA_API_BASE_URL=http://ollama:11434/api
depends_on:
- ollama
restart: unless-stopped
EOF
echo -e "${GREEN}Configuration file created.${NC}"
# Create run script
cat > "$base_dir/start-ollama.sh" << EOF
#!/bin/bash
cd "$(dirname "$0")"
docker compose up -d
echo "Downloading and setting up $model_name model. This may take a few minutes..."
sleep 10
docker exec -it ollama ollama pull $model_name
echo "Setup complete! Your Ollama environment is now running."
echo "You can access the web interface at: http://localhost:3000"
EOF
chmod +x "$base_dir/start-ollama.sh"
# Create stop script
cat > "$base_dir/stop-ollama.sh" << EOF
#!/bin/bash
cd "$(dirname "$0")"
docker compose down
echo "Ollama environment has been stopped."
EOF
chmod +x "$base_dir/stop-ollama.sh"
echo -e "${GREEN}Setup complete!${NC}"
echo ""
echo -e "To start your Ollama environment, run: ${YELLOW}$base_dir/start-ollama.sh${NC}"
echo -e "To stop your Ollama environment, run: ${YELLOW}$base_dir/stop-ollama.sh${NC}"
echo ""
echo -e "After starting, you can access the web interface at: ${YELLOW}http://localhost:3000${NC}"
echo ""
echo -e "${GREEN}Would you like to start the Ollama environment now? (y/n)${NC}"
read start_now
if [[ "$start_now" == "y" ]]; then
"$base_dir/start-ollama.sh"
fi
echo -e "${GREEN}================================================${NC}"
echo -e "${GREEN} Ollama Setup Complete!${NC}"
echo -e "${GREEN}================================================${NC}"
For Windows Users:
@echo off
setlocal EnableDelayedExpansion
:: Welcome message
cls
echo ================================================
echo Ollama Docker Setup Assistant
echo ================================================
echo.
echo This script will help you set up a self-contained Ollama environment with OpenWebUI.
echo.
:: Check if Docker is installed
echo Checking if Docker is installed...
docker --version > nul 2>&1
if %ERRORLEVEL% neq 0 (
    echo Docker is not installed.
    echo You need to install Docker Desktop first.
    echo.
    echo Please visit: https://docs.docker.com/desktop/install/windows-install/
    echo.
    echo Please install Docker Desktop and run this script again.
    pause
    exit /b 1
) else (
    echo Docker is installed!
)
:: Create directory structure
echo Setting up directory structure...
:: Ask for base directory
set "base_dir=%CD%ollama-setup"
echo Where would you like to store your Ollama files?
echo Press Enter to use [%base_dir%] or specify a path:
set /p "custom_dir="
if not "%custom_dir%"=="" (
set "base_dir=%custom_dir%"
)
:: Create directory if it doesn't exist
if not exist "%base_dir%" (
mkdir "%base_dir%"
)
echo Using directory: %base_dir%
:: Create subdirectories
if not exist "%base_dir%ollama-data" mkdir "%base_dir%ollama-data"
if not exist "%base_dir%openwebui-data" mkdir "%base_dir%openwebui-data"
:: Create docker-compose.yml file
echo Creating Docker Compose configuration...
:: Ask for model choice
echo Which model would you like to use? (Choose a number)
echo 1. Llama 3 8B (Recommended, balanced performance and quality)
echo 2. Gemma 7B (Google's lightweight model)
echo 3. Phi-3 Mini (Microsoft's efficient model)
echo 4. Custom (specify model name)
set /p model_choice=
if "%model_choice%"=="1" (
set "model_name=llama3:8b"
) else if "%model_choice%"=="2" (
set "model_name=gemma:7b"
) else if "%model_choice%"=="3" (
set "model_name=phi3:mini"
) else if "%model_choice%"=="4" (
echo Enter the model name (e.g., mistral:7b, orca-mini:3b):
set /p model_name=
) else (
set "model_name=llama3:8b"
)
:: Ask if they want network access
echo Would you like to make your Ollama setup accessible from other devices on your network? (y/n)
set /p network_access=
if /i "%network_access%"=="y" (
set "openwebui_port=3000:8080"
set "ollama_port=11434:11434"
) else (
set "openwebui_port=127.0.0.1:3000:8080"
set "ollama_port=127.0.0.1:11434:11434"
)
:: Create docker-compose.yml
(
    echo version: '3'
    echo.
    echo services:
    echo   ollama:
    echo     image: ollama/ollama:latest
    echo     container_name: ollama
    echo     volumes:
    echo       - ./ollama-data:/root/.ollama
    echo     ports:
    echo       - "!ollama_port!"
    echo     restart: unless-stopped
    echo.
    echo   openwebui:
    echo     image: ghcr.io/open-webui/open-webui:main
    echo     container_name: openwebui
    echo     volumes:
    echo       - ./openwebui-data:/app/backend/data
    echo     ports:
    echo       - "!openwebui_port!"
    echo     environment:
    echo       - OLLAMA_BASE_URL=http://ollama:11434
    echo     depends_on:
    echo       - ollama
    echo     restart: unless-stopped
) > "%base_dir%\docker-compose.yml"
echo Configuration file created.
:: Create start script
:: Note: ^> keeps the redirect inside the generated file instead of swallowing the echo
(
    echo @echo off
    echo cd /d "%%~dp0"
    echo docker compose up -d
    echo echo Downloading and setting up !model_name! model. This may take a few minutes...
    echo timeout /t 10 /nobreak ^> nul
    echo docker exec -it ollama ollama pull !model_name!
    echo echo Setup complete. Your Ollama environment is now running.
    echo echo You can access the web interface at: http://localhost:3000
    echo pause
) > "%base_dir%\start-ollama.bat"
:: Create stop script
(
    echo @echo off
    echo cd /d "%%~dp0"
    echo docker compose down
    echo echo Ollama environment has been stopped.
    echo pause
) > "%base_dir%\stop-ollama.bat"
echo Setup complete!
echo.
echo To start your Ollama environment, run: %base_dir%\start-ollama.bat
echo To stop your Ollama environment, run: %base_dir%\stop-ollama.bat
echo.
echo After starting, you can access the web interface at: http://localhost:3000
echo.
echo Would you like to start the Ollama environment now? (y/n)
set /p start_now=
if /i "%start_now%"=="y" (
call "%base_dir%start-ollama.bat"
)
echo ================================================
echo Ollama Setup Complete!
echo ================================================
pause
Step 3: Run the Setup Script
For macOS/Linux:
- Open Terminal
- Navigate to the directory where you saved the script
- Make the script executable:
chmod +x setup-ollama.sh
- Run the script:
./setup-ollama.sh
For Windows:
- Open Command Prompt or PowerShell as Administrator
- Navigate to the directory where you saved the script
- Run the script:
setup-ollama.bat
(in PowerShell, run .\setup-ollama.bat instead)
Step 4: Follow the Prompts
The script will:
- Check if Docker is installed
- Help you create a directory structure
- Let you choose which AI model to use
- Configure network access settings (see the tip after this list)
- Set up everything automatically
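If you enable network access, other devices on your network reach the interface at your machine's LAN IP rather than localhost. A quick way to find that IP (the macOS interface name en0 below is an assumption; yours may differ):

hostname -I               # Linux: prints your local IP address(es)
ipconfig getifaddr en0    # macOS: prints the IP of interface en0

On Windows, run ipconfig and look for the IPv4 Address line. Other devices can then browse to http://YOUR-IP:3000.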
Step 5: Access Your AI Assistant
After the setup completes:
- The script will have started your Ollama environment
- Open your web browser and go to: http://localhost:3000
- You’ll see the OpenWebUI interface
- Select your model and start chatting!
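To confirm the Ollama API itself is up (independently of the web UI), you can query it directly; this assumes the default port mapping from the setup script:

curl http://localhost:11434               # should respond with "Ollama is running"
curl http://localhost:11434/api/version   # returns the server version as JSON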
Common Issues and Solutions
“Cannot connect to the Docker daemon”
- Make sure Docker is running
- On Linux, you might need to run the script with sudo
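On a typical systemd-based Linux system, this sequence usually resolves it (the usermod step is optional):

sudo systemctl start docker        # start the Docker daemon
sudo usermod -aG docker $USER      # lets your user run Docker without sudo
newgrp docker                      # applies the new group in the current shell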
“Error: No such container: ollama”
- The container might not have started properly
- Try running docker compose up -d in the directory containing your docker-compose.yml
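To diagnose a missing or crashed container, list everything and check the container's logs:

docker ps -a            # lists containers, including stopped ones
docker logs ollama      # shows the ollama container's output and any errors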
Model download is taking too long
- The initial model download can take several minutes depending on your internet speed
- A 7B model is typically 4-5GB in size
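To check which models have finished downloading (and their sizes), ask Ollama inside the container; re-running a pull is safe and picks up where it left off:

docker exec -it ollama ollama list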
Managing Your Setup
Starting Your Environment
Run the start-ollama.sh (or .bat) script whenever you want to start your AI assistant.
Stopping Your Environment
Run the stop-ollama.sh (or .bat) script to shut down your AI assistant when not in use.
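To check at a glance whether the environment is running, run this from your ollama-setup directory:

docker compose ps    # shows the ollama and openwebui containers and their status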
Changing Models
To change models, update the model name in the start script and run it again, or manage models from OpenWebUI's settings if you prefer. A command-line example follows.
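For example, switching from the default model to Mistral might look like this (the model names here are just illustrative):

docker exec -it ollama ollama pull mistral:7b    # download the new model
docker exec -it ollama ollama rm llama3:8b       # optional: delete the old one to free disk space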
Optional: Adding More Models
If you want to add additional models:
- Start your Ollama environment
- Open a terminal/command prompt
- Run:
docker exec -it ollama ollama pull [model-name]
Replace [model-name] with models like:
- llama3:8b – Meta’s Llama 3 (8B parameters)
- phi3:mini – Microsoft’s Phi-3 Mini
- gemma:7b – Google’s Gemma
- mistral:7b – Mistral AI’s model
- neural-chat:7b – A conversational model
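Once pulled, a model appears in the OpenWebUI model picker automatically. You can also chat with it straight from the terminal, bypassing the web UI entirely:

docker exec -it ollama ollama run mistral:7b

Type /bye to exit the interactive session.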
Need Help?
If you run into any issues with this setup, feel free to reach out!