Power Users: Run AIs on Your Own Machine
Local models let your GPU do the work instead of a cloud API. More privacy, lower cost.
What is a Local LLM?
A local LLM is an AI model you run on your own computer instead of renting from a provider like OpenAI or Anthropic.
Cloud models:
- Requires API key
- Metered usage (pay per token)
- Internet required
- Data sent to provider
- Best reasoning power

Local models:
- No API key needed
- Free after install
- Works offline
- 100% private
- Limited by your hardware
What You Need to Run Local Models
You don't need a supercomputer. Most modern gaming PCs or workstations can run useful local models.
🎮 Hardware
- A decent GPU (NVIDIA recommended)
- At least 16 GB RAM (32 GB+ preferred)
- 10–50 GB disk space for models
⚙️ Software (Pick One)
- Ollama (recommended)
- LM Studio
- GPT4All
- llama.cpp
Easier Than You Think
Installing Ollama is literally: Download → Install → Done. Then run `ollama run mistral` and you're chatting with a local AI.
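Under the hood, Ollama also serves a local REST API on port 11434 (its default), which is what lets other tools, including Hive, talk to your model. Here's a minimal sketch in Python, assuming Ollama is running and the mistral model has been pulled; the prompt is just an example:

```python
import requests  # pip install requests

# Ollama listens on localhost:11434 by default and exposes a small REST API.
# /api/generate takes a model name and a prompt, and returns the completion.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",  # any model you've pulled with `ollama run` / `ollama pull`
        "prompt": "Explain what a local LLM is in one sentence.",  # example prompt
        "stream": False,     # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the model's reply text
```

Nothing here leaves your machine: the only network traffic is to localhost.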
What Can Local Models Do Today?
Today's local models already match the hosted models of a generation or two ago on everyday tasks, and they improve every month.
Note: For deep reasoning, huge contexts, and cutting-edge capabilities, Hive still uses cloud models when needed. Local models handle the everyday workload.
How Hive Uses Local Models
Hive treats a local model as just another agent in the swarm: it tries your local model first (free) and falls back to the cloud only when necessary.
You get the best of both worlds: privacy and speed locally, power and scale in the cloud.
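The precise routing rules are Hive's own, but the local-first pattern itself is easy to sketch. In this hypothetical Python illustration, ask_local wraps the Ollama API from the earlier example and ask_cloud stands in for a hosted-provider call; the names and the fallback rule are assumptions for illustration, not Hive's actual code.

```python
import requests

def ask_local(prompt: str, model: str = "mistral") -> str:
    """Ask a local Ollama model (hypothetical helper, wraps the API shown earlier)."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def ask_cloud(prompt: str) -> str:
    """Placeholder for a hosted-provider call (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError("wire up your cloud provider here")

def ask(prompt: str) -> str:
    """Local-first routing: try the free local model, fall back to the cloud."""
    try:
        return ask_local(prompt)
    except requests.RequestException:
        # Ollama not running, model not pulled, request timed out, ...
        return ask_cloud(prompt)
```

Because the local attempt is free, trying it first costs nothing even when it fails and the request falls through to the cloud.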
Is This For Me?
You don't need local models. The default cloud-powered experience is already excellent. Hive handles everything for you.
If you write code daily, care about privacy, or want to minimize costs, local models are a game changer. Hive can help you set them up.
Popular Local Models
These models are free, open-source, and work with Ollama and other engines. Popular picks include the Mistral, Llama, Gemma, Phi, and Qwen families.
Hardware Advisor
Tell Hive about your machine. We'll estimate what local models it can run.
Even a machine with no dedicated GPU can run tiny models on the CPU. That's slower, but fine for testing and light tasks; consider a GPU upgrade for better performance.
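The estimate behind a tool like this rests on a simple rule of thumb: a 4-bit quantized model takes very roughly 0.6 GB of memory per billion parameters, plus some overhead for the runtime and context window. Here's a sketch of that arithmetic in Python; the constants and the model list are illustrative assumptions, not Hive's actual advisor logic.

```python
# Rule-of-thumb arithmetic, not Hive's actual advisor: a 4-bit quantized
# model needs very roughly 0.6 GB of memory per billion parameters, plus
# ~1.5 GB of overhead for the runtime and context window.
GB_PER_BILLION_PARAMS = 0.6
OVERHEAD_GB = 1.5

# Illustrative open models and their parameter counts (billions).
MODELS = {
    "Phi (3.8B)": 3.8,
    "Mistral (7B)": 7,
    "Llama (8B)": 8,
    "Gemma (27B)": 27,
    "Llama (70B)": 70,
}

def models_that_fit(memory_gb: float) -> list[str]:
    """List models whose estimated footprint fits in memory_gb.

    Use your GPU's VRAM if you have one; otherwise system RAM (CPU-only, slower).
    """
    return [
        name
        for name, billions in MODELS.items()
        if billions * GB_PER_BILLION_PARAMS + OVERHEAD_GB <= memory_gb
    ]

print(models_that_fit(8))   # 8 GB GPU: the small and mid-size models
print(models_that_fit(24))  # 24 GB GPU: everything up to ~27B
```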
Quick Start
1. Download Ollama → 2. Install → 3. Run `ollama run mistral`