✅ Easiest: Install a Ready-Made Web Interface

1. Open WebUI (formerly Ollama WebUI) – most popular, simple setup

Works with any Ollama model, including ministral-3:3b.

Install

docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

(The container listens on port 8080 internally, so the mapping is 3000:8080; --add-host lets the container reach Ollama running on the host.)

Then open in browser:

➡️ http://localhost:3000
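
If the page doesn't load, check that the container actually started (standard Docker commands, nothing Open WebUI-specific):

docker ps --filter name=open-webui
docker logs -f open-webui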

Ollama must be running:

ollama serve &
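
To double-check the server is up, list the locally pulled models (this is a standard Ollama endpoint):

curl http://localhost:11434/api/tags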

Use model:

ollama pull ministral:3b

(or the exact tag you want, e.g. ministral-3:3b)

Open WebUI will automatically detect it.
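
If the Dockerized UI doesn't find your host's Ollama automatically, you can point it there explicitly; OLLAMA_BASE_URL is Open WebUI's documented setting for this:

# add to the docker run command above:
-e OLLAMA_BASE_URL=http://host.docker.internal:11434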


2. Open WebUI via pip (no Docker)

If you don’t want Docker:

Install

pip install open-webui
open-webui serve

Browse to:

➡️ http://127.0.0.1:8080
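
If the install conflicts with other Python packages, a virtual environment keeps it isolated (standard Python tooling; Open WebUI targets a recent Python such as 3.11):

python3 -m venv webui-env
source webui-env/bin/activate
pip install open-webui
open-webui serve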


3. Continue.dev (AI coding assistant)

Continue.dev is an extension for VS Code and JetBrains rather than a standalone web UI; install it from your editor's extension marketplace. It supports Ollama as a backend for coding workflows.
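
A sketch of pointing Continue at Ollama using its classic ~/.continue/config.json format (newer releases use a YAML config instead; the title is illustrative):

{
  "models": [
    {
      "title": "Ministral 3B (Ollama)",
      "provider": "ollama",
      "model": "ministral:3b"
    }
  ]
}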


🎨 More advanced UIs (specialized)

4. AnythingLLM

A team-oriented interface with memory & agents.

Install:

docker run -d -p 3001:3001 \
  --name anythingllm \
  -v anythingllm-storage:/app/server/storage \
  -e STORAGE_DIR=/app/server/storage \
  mintplexlabs/anythingllm

In AnythingLLM's settings, choose Ollama as the LLM provider and pick your ministral model.
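
On Linux, a container can't reach the host's Ollama via localhost. A common workaround (plain Docker networking, nothing AnythingLLM-specific) is adding a host alias, then using it as the Ollama base URL in the settings:

# add to the docker run command above:
--add-host=host.docker.internal:host-gateway
# then set the Ollama base URL to http://host.docker.internal:11434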


5. LM Studio (desktop app UI for local models)

A Mac/Windows desktop GUI. LM Studio downloads and manages its own GGUF models rather than importing Ollama's, so you run it alongside Ollama as a parallel stack rather than on top of it.

Download: https://lmstudio.ai (does not require Docker)
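
LM Studio can also run an OpenAI-compatible local server (default port 1234), so a curl-style workflow works once you've loaded a model; the model name below is a placeholder for whatever you loaded:

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-loaded-model",
    "messages": [{"role": "user", "content": "Hello"}]
  }'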


🛠️ Create Your Own Web Interface

If you want a custom UI:

Minimal HTML/JS Chat UI

Uses the Ollama HTTP API at http://localhost:11434/api/chat.
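
You can exercise the endpoint from the command line first; with "stream": false, Ollama returns a single JSON object instead of a stream:

curl http://localhost:11434/api/chat -d '{
  "model": "ministral:3b",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}'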

Create a file index.html:

<!DOCTYPE html>
<html>
<body>
<textarea id="input"></textarea>
<button onclick="send()">Send</button>
<pre id="output"></pre>

<script>
async function send() {
  const msg = document.getElementById('input').value;
  const out = document.getElementById('output');

  const response = await fetch('http://localhost:11434/api/chat', {
    method: "POST",
    headers: {"Content-Type": "application/json"},
    body: JSON.stringify({
      model: "ministral:3b",   // use whatever tag you pulled
      messages: [{role: "user", content: msg}]
    })
  });

  // /api/chat streams newline-delimited JSON objects; extract the
  // message content from each chunk instead of printing raw JSON.
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  let text = "";

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    const lines = buffer.split("\n");
    buffer = lines.pop();  // keep any partial line for the next read
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line);
      if (chunk.message && chunk.message.content) {
        text += chunk.message.content;
        out.textContent = text;
      }
    }
  }
}
</script>
</body>
</html>

Open in your browser.
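
If the browser blocks the request, that's usually CORS: Ollama only accepts localhost origins by default, and a page opened from file:// has no such origin. Two common workarounds (OLLAMA_ORIGINS is a documented Ollama environment variable):

# option 1: serve the page over HTTP instead of file://
python3 -m http.server 8000   # then open http://localhost:8000/index.html

# option 2: allow all origins (development only)
OLLAMA_ORIGINS="*" ollama serve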


💡 Which option should you pick?

| Goal | Best Option |
|------|-------------|
| Simple chat UI | Open WebUI |
| Full-featured RAG, embeddings, agents | AnythingLLM |
| No Docker | Open WebUI (pip) |
| Want to code your own | HTML/JS example above |
| Coding assistant | Continue.dev or LM Studio |