About AI Lab at Home
Practical guides for running AI on your own hardware — written by someone who actually does this daily. RTX 3090, Ollama, local LLMs, home servers. No cloud required.
What We Cover
- Local LLM Setup — Ollama, model selection, GPU configuration
- Hardware Builds — Home AI server builds, GPU comparisons, power efficiency
- Benchmarks — Real performance data: tokens/sec, VRAM usage, quality scores
- Security — API authentication, Caddy reverse proxy, exposing Ollama safely
- Dev Tools — Integrating local LLMs into apps and workflows
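As a taste of the dev-tools side: Ollama exposes a small HTTP API on its default port 11434, so a local model can be queried with nothing but the Python standard library. A minimal sketch — the model name `llama3` is an assumption here; substitute whatever `ollama list` shows on your machine:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint


def build_payload(prompt: str, model: str = "llama3") -> dict:
    # stream=False asks Ollama for one JSON object instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}


def generate(prompt: str, model: str = "llama3") -> str:
    # POST to /api/generate; the reply's "response" field holds the completion
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Against a running Ollama instance, `generate("Why run LLMs locally?")` returns the model's completion as a plain string — no API key, no cloud round trip.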
Why Local AI?
Privacy, zero per-token cost, no rate limits, and the freedom to run any model you want. The tradeoffs are real too: upfront hardware cost, VRAM ceilings, and local models that still trail the frontier APIs on some tasks. This blog is honest about both sides.