Local LLM Setup

Run AI comment generation entirely on your own machine — no API keys, no costs, complete privacy.


Supported Services

| Service | Default Endpoint | Difficulty | Notes |
|---|---|---|---|
| Ollama | http://localhost:11434/api/chat | Easy | Recommended |
| LM Studio | http://localhost:1234/v1/chat/completions | Easy | GUI app |
| oobabooga | http://localhost:5000/api/chat | Medium | Most features |
| GPT4All | http://localhost:4891/v1/chat/completions | Easy | Simplest |

Ollama (Recommended)

Install

  curl -fsSL https://ollama.ai/install.sh | sh

Pull a Model and Start

ollama pull mistral        # Good balance of speed and quality

ollama serve
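Once the server is running, it helps to know the JSON shapes involved. Below is a minimal sketch of the request Ollama's /api/chat endpoint accepts and the non-streaming response it returns; the field names follow Ollama's API documentation, but verify them against your installed version.

```python
import json

def build_chat_request(model, prompt):
    """Build a non-streaming request body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for a single JSON object instead of a stream
    }

def extract_reply(response_body):
    """Pull the assistant's text out of a non-streaming /api/chat response."""
    return response_body["message"]["content"]

payload = build_chat_request("mistral", "Write a short LinkedIn comment.")
print(json.dumps(payload, indent=2))
```

You can send the same payload by hand (e.g. with curl) to confirm the server answers before configuring the extension.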

Recommended models for LinkedIn comments:

| Model | Size | Speed | Quality |
|---|---|---|---|
| mistral | 4 GB | Fast | ⭐⭐⭐⭐⭐ |
| neural-chat | 4 GB | Fast | ⭐⭐⭐⭐ |
| llama2 | 4–7 GB | Medium | ⭐⭐⭐⭐ |
| dolphin-2.6 | 2 GB | Very fast | ⭐⭐⭐ |

Fix CORS (Required for Chrome Extensions)

Ollama blocks browser-origin requests by default. Restart with:

macOS / Linux

OLLAMA_ORIGINS=* ollama serve

Windows PowerShell

$env:OLLAMA_ORIGINS="*"; ollama serve

Windows CMD

set OLLAMA_ORIGINS=*

ollama serve
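On Linux, the install script typically registers Ollama as a systemd service, so exporting OLLAMA_ORIGINS in your shell may not reach the running server. In that case, set the variable on the service itself — a sketch based on Ollama's documented systemd setup:

```shell
# Open an override file for the Ollama service:
sudo systemctl edit ollama.service

# In the editor, add:
#   [Service]
#   Environment="OLLAMA_ORIGINS=*"

# Then reload and restart so the service picks up the variable:
sudo systemctl daemon-reload
sudo systemctl restart ollama
```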

Configure in Commently

  1. Settings → Use Local LLM
  2. Endpoint: http://localhost:11434/api/chat
  3. Click 🔄 Fetch Models → select your model
  4. 💾 Save Settings

LM Studio

  1. Download from lmstudio.ai
  2. Open LM Studio → browse and download a model (e.g. mistral-7b-instruct)
  3. Go to Local Server tab → select model → Start Server
  4. Configure in Commently:
     - Endpoint: http://localhost:1234/v1/chat/completions
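LM Studio's local server speaks the OpenAI chat-completions format (GPT4All's API server uses the same shape). A minimal sketch of that request/response contract, assuming the standard OpenAI field names:

```python
def build_openai_chat_request(model, prompt):
    """Request body for an OpenAI-compatible /v1/chat/completions server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,  # typical default; tune to taste
    }

def extract_reply(response_body):
    """Assistant text from an OpenAI-style chat completion response."""
    return response_body["choices"][0]["message"]["content"]
```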

oobabooga (Text Generation WebUI)

git clone https://github.com/oobabooga/text-generation-webui

cd text-generation-webui

pip install -r requirements.txt

python server.py --api --listen

Configure in Commently:

- Endpoint: http://localhost:5000/api/chat

GPT4All

  1. Download from nomic.ai/gpt4all
  2. Install, open the app, download a model
  3. Enable the API server in GPT4All settings
  4. Configure in Commently:
     - Endpoint: http://localhost:4891/v1/chat/completions

Troubleshooting

"403 Forbidden"

Ollama is blocking the request. See the CORS fix above.

"Failed to fetch" / "Connection refused"

Your LLM service isn't running. Start it and try again.
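Before digging further, confirm that anything is listening on the configured port at all. A small, hypothetical helper (not part of Commently) that checks this:

```python
import socket

def llm_service_running(host="localhost", port=11434, timeout=2.0):
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False for your endpoint's port (11434 for Ollama, 1234 for LM Studio, and so on), start the service before troubleshooting anything else.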

Comments are slow

Switch to a smaller or faster model (dolphin-2.6 is the quickest in the table above) and check the hardware requirements below.

Poor comment quality

Switch to a larger or higher-rated model — mistral is a good default, and 13B models give the best results if you have the RAM.


Hardware Requirements

| Setup | RAM | Model |
|---|---|---|
| Minimum | 8 GB | Dolphin 3B or Mistral 7B (Q4) |
| Recommended | 16 GB | Mistral 7B or Neural-Chat 7B |
| Best quality | 32 GB+ | Llama2-13B or Hermes-13B |