Local AI Operating System
The AI that doesn't guess.
Deterministic. Evidence-grounded. Yours.

Esoteric runs entirely on your machine. Deterministic code handles routing, memory, and orchestration. The LLM rephrases facts — nothing more.

Get started
01 — What is Esoteric
— The problem

Every other AI makes the LLM the brain.

Every decision — routing, memory, tool selection, orchestration — flows through the model. It guesses its way through every task.

[Diagram: the LLM at the center, with MEMORY, ROUTING, TOOLS, DECISIONS, CONTEXT, and PLANNING all flowing through it]
— The inversion

Esoteric makes the LLM the last thing that runs.

INPUT → ROUTER (deterministic) → WORLD STATE (deterministic) → SPECIALIST (deterministic) → JUDGE (deterministic) → LLM (voice only) → OUTPUT

Deterministic code owns every decision. The model rephrases pre-verified facts — nothing more.
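The pipeline shape described above can be sketched in a few lines. This is an illustrative sketch only: the stage names, interfaces, and world-state format here are assumptions, not Esoteric's actual API. The point it demonstrates is the inversion itself: every decision is made by plain deterministic code, and the model is invoked only at the final step, on facts that are already verified.

```python
# Illustrative sketch only -- stage names and interfaces are assumptions,
# not Esoteric's actual API.

def route(user_input: str) -> str:
    """Deterministic ROUTER: plain keyword rules, no model involved."""
    return "weather" if "weather" in user_input.lower() else "general"

def specialist(intent: str, state: dict) -> list[str]:
    """Deterministic SPECIALIST: exactly one job, produce candidate facts."""
    if intent == "weather":
        return [f"Temperature is {state['temp_c']} degrees."]
    return ["No specialist matched; returning stored state only."]

def judge(facts: list[str], state: dict) -> list[str]:
    """Deterministic JUDGE: keep only claims backed by world state."""
    return [f for f in facts if any(str(v) in f for v in state.values())]

def llm_rephrase(verified_facts: list[str]) -> str:
    """The only model step, stubbed here as a plain join. Whatever the real
    model does, it receives pre-verified facts and adds no new claims."""
    return " ".join(verified_facts)

def respond(user_input: str, world_state: dict) -> str:
    intent = route(user_input)               # deterministic ROUTER
    facts = specialist(intent, world_state)  # deterministic SPECIALIST
    verified = judge(facts, world_state)     # deterministic JUDGE
    return llm_rephrase(verified)            # voice-only LLM

print(respond("What's the weather?", {"temp_c": 21}))
```

Note the design consequence: swap `llm_rephrase` for any model you like and the answer's factual content does not change, because no fact originates inside the model.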

— The results
0%
of interactions handled without touching a model
vs cloud: 0%
LLM call per response, at the rephrasing stage only
vs cloud: 4–12×
specialist models, each with exactly one job
0 API calls, ever

"The question is what happens when AI stops assuming
the LLM must be the brain."

VERIFY
EVERYTHING.
GUESS NOTHING.

The evidence judge checks every claim before it reaches you. Unverified assertions are stripped, flagged, or discarded — never delivered as confident fact.
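One way such a judge could work, sketched below under stated assumptions (the evidence store, matching rule, and `judge_claim` helper are hypothetical, not Esoteric's implementation): each claim must match a record in the evidence store, and anything unmatched is flagged instead of being delivered as confident fact.

```python
# Hypothetical evidence-judge sketch -- the data model and matching rule
# are assumptions for illustration, not Esoteric's implementation.

EVIDENCE = {
    "battery_level": "87%",
    "last_backup": "2024-06-01",
}

def judge_claim(claim: str) -> str:
    """Pass a claim only if it is backed by the evidence store;
    otherwise flag it so it is never stated as fact."""
    if any(value in claim for value in EVIDENCE.values()):
        return claim                  # verified: deliver as-is
    return f"[unverified] {claim}"    # flagged, never confident fact

print(judge_claim("Battery is at 87%."))         # backed by evidence
print(judge_claim("Your disk will fail soon."))  # no evidence: flagged
```

A real judge would need claim extraction and fuzzier matching than substring checks, but the contract is the same: verification happens in deterministic code, before anything reaches the user.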

Read the architecture