How to run DeepSeek AI locally with Ollama and ChatBox
This guide walks you through setting up the Ollama Docker container and ChatBox to run the DeepSeek AI model. It covers the prerequisites, installation steps, and configuration needed to get everything up and running.
Requirements
My hardware and system requirements:
- A computer with Docker Desktop installed.
- At least 16 GB of system RAM.
- A dedicated graphics card (I'm using an NVIDIA GTX 1080).
Software Requirements:
- Docker Desktop (ensure it's installed and running).
- ChatBox application (downloadable from ChatBoxAI).
- WSL2 with Docker integration enabled (if you're on Windows).
Note: You may need to install and configure the NVIDIA Container Toolkit to get GPU acceleration working under WSL2.
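One quick way to sanity-check GPU passthrough before going further (assuming the NVIDIA Container Toolkit is already installed) is to run a throwaway CUDA container and see whether it can see your card:

```shell
# Prints a table with your GPU's name, driver, and memory if
# passthrough works; errors out otherwise. The CUDA image tag is
# just an example -- any recent tag will do.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```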

Background Information
Ollama is a platform that lets you run powerful AI models locally, in this case inside a Docker container. In this guide, we'll set up the DeepSeek AI model, specifically deepseek-r1:8b, a smaller, reasoning-focused variant of DeepSeek-R1. ChatBox is a user-friendly interface that lets you interact with these models seamlessly.
Combining these tools allows you to run AI-powered interactions efficiently on your local machine without relying on cloud-based solutions.
Which One to Choose?
- If your focus is on running a private, efficient chatbot or conversational AI locally, Ollama is your go-to choice.
- If you're looking for a model capable of deep querying, advanced AI analysis, or enterprise-grade performance, DeepSeek is a better fit.
Comparison Table:

| Feature | Ollama | DeepSeek |
|---|---|---|
| Primary Focus | Chat and conversational AI | Data analysis and insight-seeking |
| Ease of Use | User-friendly, quick setup | More technical, advanced users |
| Hardware Needs | Low to moderate | Moderate to high |
| Ideal For | Personal assistants, chatbots | Research, deep querying |
| Local Processing | Optimized for local setups | May require additional resources |
In this guide, I’ll be using the Ollama provider with the DeepSeek model. This setup was experimental but incredibly easy to implement.
Step-by-Step Setup Guide
Step 1: Install Ollama via Docker
Run the following Docker command to install Ollama:
docker run -d --name ollama -p 11434:11434 ollama/ollama
This command:
- Creates a Docker container named `ollama`.
- Maps port `11434` for communication with the Ollama API.
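The basic command above runs CPU-only and loses downloaded models when the container is removed. If you want GPU acceleration and persistent model storage, the Ollama image supports a variant like this (the volume name `ollama` is arbitrary, and `--gpus=all` assumes the NVIDIA Container Toolkit is set up):

```shell
# --gpus=all        : pass the GPU through (requires NVIDIA Container Toolkit)
# -v ollama:...     : persist pulled models in a named Docker volume
# -p 11434:11434    : expose the Ollama API on the host
docker run -d --gpus=all -v ollama:/root/.ollama \
  -p 11434:11434 --name ollama ollama/ollama
```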

Step 2: Download and start the DeepSeek Model
Once the Ollama container is running, download the DeepSeek model using the command below:
docker exec -it ollama ollama pull deepseek-r1:8b
This will fetch the DeepSeek AI model, version `r1:8b`, from the Ollama model repository.

Step 3: Validate the Setup
To confirm that Ollama is running and the model is downloaded, use:
docker exec -it ollama ollama list

You should see output listing the `deepseek-r1:8b` model. If it's listed, everything is set up correctly.
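Besides `ollama list`, you can also validate over HTTP: Ollama serves a REST API on port 11434, and GET /api/tags returns the installed models as JSON. A small Python sketch (the helper names here are my own, not part of any library):

```python
import json
import urllib.request

def model_installed(tags_json: str, name: str) -> bool:
    """Check whether a model name appears in an /api/tags response body."""
    data = json.loads(tags_json)
    return any(m.get("name") == name for m in data.get("models", []))

def check_live(name: str, host: str = "http://localhost:11434") -> bool:
    """Ask a running Ollama instance whether the model is installed."""
    with urllib.request.urlopen(f"{host}/api/tags") as resp:
        return model_installed(resp.read().decode(), name)
```

With the container from Step 1 running, `check_live("deepseek-r1:8b")` should return True once the pull from Step 2 has finished.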
Step 4: Download and Install ChatBox
- Visit the official ChatBoxAI website and download the application.
- Install the downloaded file on your computer.
- Once installed, launch ChatBox. You'll be greeted with a welcome window.
- Select the Local Model option.
Step 5: Select the DeepSeek Model
In ChatBox, navigate to the model selection interface in Settings and select the `deepseek-r1:8b` model from the list.
You're done. You can validate that the container is up by opening Docker Desktop and checking its logs.

Final Steps
Once the model is running and ChatBox is configured:
- You’re ready to interact with the DeepSeek AI model through ChatBox.
- Test the integration by asking the model questions or running tasks specific to your needs.
For example, I asked it to create a PowerShell script, even though it's not ideally suited for that task. I was just curious to see how the bot would approach it—and it was pretty cool!
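ChatBox isn't the only way in: once the container is up, you can script against the same API ChatBox uses. Here's a minimal sketch of a non-streaming call to Ollama's /api/generate endpoint (the `generate` and `build_payload` helpers and their defaults are my own naming, not an official client):

```python
import json
import urllib.request

def build_payload(prompt: str, model: str = "deepseek-r1:8b") -> dict:
    """Assemble the request body for a single non-streaming generation."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "deepseek-r1:8b",
             host: str = "http://localhost:11434") -> str:
    """POST the prompt to a running Ollama instance and return its reply."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

For instance, `generate("Write a PowerShell one-liner that lists running services")` sends the same kind of request ChatBox makes behind the scenes.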

Found this article useful? Why not buy Phi a coffee to show your appreciation?