AI Provider Setup

Configure optional AI features for enhanced functionality.

AI is Optional

Neural Commander's core features work without any AI provider. The daemon, project audits, session persistence, and desktop dashboard all function out of the box.

AI features like nc chat, smart suggestions, and natural language queries require either local Ollama or a cloud API key.

Provider Options

Provider              Type   Cost                  Privacy
Ollama (Recommended)  Local  Free (your hardware)  Maximum - data stays local
OpenAI                Cloud  Pay-per-use           Data sent to OpenAI
Anthropic             Cloud  Pay-per-use           Data sent to Anthropic
Azure OpenAI          Cloud  Pay-per-use           Enterprise compliance
Option A: Ollama (Recommended)

Ollama runs AI models locally on your machine: it's free, it's private, and NC auto-detects it.

1. Install Ollama

# macOS / Linux
curl -fsSL https://ollama.ai/install.sh | sh
# Windows: Download from https://ollama.ai
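
Once the installer finishes, a quick check confirms the binary is on your PATH (the exact version string will vary):

```shell
# Confirm Ollama installed correctly; prints the installed version
ollama --version
```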

2. Pull a Model

# Recommended starter model
ollama pull llama3.2
# For code-focused work
ollama pull codellama:7b
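
A one-off prompt is an easy smoke test that the model downloaded correctly and responds (assumes you pulled llama3.2 in the previous step):

```shell
# Run a single prompt without entering the interactive shell
ollama run llama3.2 "Reply with the single word: ready"
```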

3. NC Auto-Detects

No configuration needed! NC automatically checks for Ollama on startup.

$ nc status
Neural Commander v0.99
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Daemon: Running
AI Provider: Ollama (auto-detected)
Models Available: 2
• llama3.2:3b (2.0 GB)
• codellama:7b (3.8 GB)

Recommended Models

Model             Size    Best For
gemma2:2b         1.6 GB  Quick responses, low resources
llama3.2:3b       2.0 GB  Balanced performance
codellama:7b      3.8 GB  Code-focused tasks
llama3.1:8b       4.7 GB  Best quality for most tasks
qwen2.5-coder:7b  4.7 GB  Excellent for coding
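
To see which of these you already have locally (names and sizes should line up with the table above):

```shell
# List locally installed models with their sizes
ollama list

# Free disk space by removing a model you no longer use, e.g.:
#   ollama rm codellama:7b
```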
Option B: Cloud AI (BYOK)

Bring your own API keys from OpenAI, Anthropic, or other providers. You pay the provider directly - we don't charge for AI access.

OpenAI

nc config set ai.provider openai
nc config set ai.api_key sk-your-key-here

Anthropic (Claude)

nc config set ai.provider anthropic
nc config set ai.api_key sk-ant-your-key-here
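
Pasting a raw key on the command line leaves it in your shell history. A safer pattern (assuming you export the key as ANTHROPIC_API_KEY in your shell profile) is:

```shell
# Keep the key out of shell history by reading it from the environment
nc config set ai.provider anthropic
nc config set ai.api_key "$ANTHROPIC_API_KEY"
```

The same pattern works for OpenAI with an OPENAI_API_KEY variable.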

BYOK Model: We don't mark up API costs. You pay your provider directly at their standard rates.

What Works Without AI?

Works Without AI

  • NC Daemon (monitoring)
  • Session Persistence
  • Project Health Audits
  • NC Desktop Dashboard
  • Documentation Scanner
  • Requirements YAML
  • Git Integration
  • Active Alerts

Requires AI Provider

  • nc chat (interactive AI)
  • Pattern Extraction
  • Smart Suggestions
  • Natural Language Queries
  • AI-Enhanced Context
  • Code Analysis

Troubleshooting

Ollama not detected?

Make sure Ollama is running:

ollama serve
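
Ollama listens on localhost:11434 by default, so a quick HTTP probe (assuming the default host and port) tells you whether the server is actually up:

```shell
# Probe Ollama's HTTP API; /api/tags lists installed models when the server is up
if curl -s --max-time 2 http://localhost:11434/api/tags > /dev/null; then
  echo "Ollama is reachable"
else
  echo "Ollama is not running - start it with: ollama serve"
fi
```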

Model too slow?

Try a smaller model like gemma2:2b (1.6 GB) or ensure your machine has enough RAM (8GB+ recommended for 7B models).

Cloud API errors?

Verify your API key is correct and has sufficient credits. Check provider status pages for any outages.
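
You can rule NC out of the picture by calling the provider directly. For OpenAI, listing models is a cheap authentication check (assumes your key is exported as OPENAI_API_KEY):

```shell
# A valid key returns a JSON model list; a bad key returns an authentication error
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | head -c 300
```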

FAQ

Which provider do you recommend?

For privacy, Ollama. For maximum capability, Anthropic Claude or OpenAI GPT-4. Many users start with Ollama and add cloud for complex tasks.

Can I switch providers later?

Yes, anytime. Just update your configuration with nc config set ai.provider [provider].

Do you charge for AI usage?

No. We use a BYOK (Bring Your Own Key) model. Ollama is free (uses your hardware). Cloud providers bill you directly. We don't mark up AI costs.

What if I don't configure any AI?

NC works fine without AI. Core features like daemon monitoring, session persistence, project audits, and the desktop dashboard all function normally.
