Enki — Limited private preview spots remaining.
Get Early Access →
PRIVATE PREVIEW — V0.1.0

ASYNC
MULTI
AGENT_

Orchestrate high-frequency agent swarms with zero-cost abstractions. Built on Rust + Tokio for maximum throughput. Deploy anywhere_

[SYSTEM_CAPABILITIES]

ENGINEERED
FOR SCALE

Six core primitives that make Enki the most capable async agent framework available.

01
Zero-cost async concurrency on Rust + Tokio
The Enki runtime is built from scratch on Tokio's async executor. Every agent is a lightweight coroutine — no threads, no blocking, no overhead you didn't ask for.
02
Multi-LLM Routing
Route tasks to any model — GPT-4o, Claude, Gemini, Llama — based on cost, capability, or latency.
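The routing decision is easy to picture. Below is a minimal sketch of cost- vs latency-based selection in plain Python; the model names and numbers are made up for illustration, and this is not Enki's actual router API:

```python
# Sketch of cost/latency-based model routing.
# Prices and latencies are illustrative, not real quotes.
MODELS = {
    "gpt-4o":    {"cost_per_1k": 0.005,  "latency_ms": 900},
    "claude":    {"cost_per_1k": 0.003,  "latency_ms": 1100},
    "llama-70b": {"cost_per_1k": 0.0006, "latency_ms": 1600},
}

def route(objective: str) -> str:
    """Pick the model that minimizes the chosen objective."""
    key = "cost_per_1k" if objective == "cost" else "latency_ms"
    return min(MODELS, key=lambda m: MODELS[m][key])

print(route("cost"))     # cheapest model in the table wins
print(route("latency"))  # fastest model in the table wins
```

A real router would also fold in capability constraints (context window, tool support) before minimizing on cost or latency.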
03
Secure Tool Calling
Sandboxed environments let agents run code, query databases, and call APIs without expanding your attack surface.
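The gating idea can be shown in a few lines; a real sandbox adds process isolation and resource limits on top of a check like this hypothetical allowlist:

```python
# Illustrative allowlist gate for tool calls; Enki's actual sandbox
# layers process isolation on top of a policy check like this one.
ALLOWED_TOOLS = {"web_search", "code_exec"}

def call_tool(name: str, payload: str) -> str:
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    # In a real runtime, dispatch happens inside the sandbox here.
    return f"{name} ok: {payload}"

print(call_tool("web_search", "tokio docs"))
```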
04
DAG Task Pipelines
Define dependency graphs between agents. Enki executes independent branches simultaneously, collapsing the wall-clock time of complex pipelines.
# Agents run concurrently by default
team = Team([
    Agent(role='research'),
    Agent(role='write'),
    Agent(role='review'),
])
result = await team.run(
    goal='Ship the report',
    parallel=True,
)
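To make the wave-by-wave scheduling concrete, here is a toy DAG executor in plain asyncio. It only illustrates the idea that independent branches complete in the same wave; it is not Enki's engine:

```python
import asyncio

# Toy DAG executor: each wave runs every task whose dependencies
# are already done. Illustration only, not Enki's scheduler.
GRAPH = {
    "research_a": [],
    "research_b": [],
    "write":      ["research_a", "research_b"],  # waits on both branches
}

async def run_dag(graph):
    done, waves = set(), []
    while len(done) < len(graph):
        ready = [n for n in graph
                 if n not in done and all(d in done for d in graph[n])]
        # Stand-in for launching the ready agents concurrently.
        await asyncio.gather(*(asyncio.sleep(0) for _ in ready))
        done.update(ready)
        waves.append(sorted(ready))
    return waves

print(asyncio.run(run_dag(GRAPH)))
# [['research_a', 'research_b'], ['write']]
```

The two research branches land in the first wave; the write step only starts once both finish.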
05
Granular Cost Tracking
Per-agent, per-task, per-session token telemetry. Understand exactly where your compute budget goes.
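Conceptually the telemetry is a stream of usage events you can slice by any dimension. A small sketch with illustrative field names (not Enki's actual event schema):

```python
from collections import defaultdict

# Sketch of per-agent token accounting over a stream of usage events.
# Field names are illustrative, not Enki's real schema.
events = [
    {"agent": "researcher", "task": "t1", "tokens": 1200},
    {"agent": "writer",     "task": "t1", "tokens": 800},
    {"agent": "researcher", "task": "t2", "tokens": 300},
]

def tokens_by_agent(events):
    totals = defaultdict(int)
    for e in events:
        totals[e["agent"]] += e["tokens"]
    return dict(totals)

print(tokens_by_agent(events))  # {'researcher': 1500, 'writer': 800}
```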
06
Pluggable Memory
Swap agent context backends — Redis, Qdrant, Pinecone, or local filesystem — with a single config change.
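The pattern behind a swappable backend is a single interface with interchangeable implementations. A minimal sketch, with hypothetical class names rather than Enki's real adapters:

```python
# Sketch of a swappable memory backend behind one interface.
# Class and backend names are illustrative, not Enki's adapters.
class DictMemory:
    """In-process stand-in for Redis/Qdrant/etc."""
    def __init__(self):
        self._store = {}
    def put(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)

def make_memory(backend: str):
    # A real config would map "redis"/"qdrant"/"pinecone" to adapters.
    backends = {"local": DictMemory}
    return backends[backend]()

mem = make_memory("local")
mem.put("ctx", "agent scratchpad")
print(mem.get("ctx"))
```

Because agents only ever see the `put`/`get` interface, switching backends is a config change rather than a code change.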
[HOW_IT_WORKS]

FOUR STEPS
TO DEPLOY

01
Install the runtime
One command gets you the full Enki binary — Rust native or via Python bindings.
pip install enki-py
02
Define your agents
Each agent gets a role, a model, and optional tools. Agents are pure async functions.
Agent(role="researcher", model="gpt-4o")
03
Compose a team
Group agents into Teams with parallel or DAG execution. Dependency resolution is automatic.
Team([a, b, c], parallel=True)
04
Await the result
One await call. Stream tokens live, collect full output, or export structured traces.
await team.run(goal="...")
Scale without limits
Enki's Tokio runtime handles thousands of concurrent agents on a single machine — no cluster needed.
parallel=True → 50,000 agents/sec
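The lightweight-coroutine claim is easy to sanity-check in miniature with Python's own event loop (asyncio stands in for Tokio here; the numbers are illustrative, not a benchmark of Enki):

```python
import asyncio
import time

# Spawn many no-op "agents" as coroutines. Per-task startup and
# memory cost is tiny compared with one OS thread per agent.
async def agent(i: int) -> int:
    await asyncio.sleep(0)  # stand-in for real agent work
    return i

async def main(n: int = 10_000) -> float:
    t0 = time.perf_counter()
    await asyncio.gather(*(agent(i) for i in range(n)))
    return time.perf_counter() - t0

elapsed = asyncio.run(main())
print(f"10,000 coroutines completed in {elapsed:.3f}s")
```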
Legacy Frameworks
  • Thread-based concurrency — bottlenecks at scale
  • Python GIL limits true parallelism
  • High memory overhead per agent
  • Complex distributed deployment
  • No built-in cost observability
VS
Enki
  • Async-first Tokio runtime — zero blocking
  • Rust core bypasses GIL entirely
  • Agents are lightweight coroutines — <1KB each
  • Single binary — ship anywhere in seconds
  • Per-token telemetry built into the runtime
[QUICK_START]

RUNNING IN
60 SECONDS

No boilerplate, no config files, no ceremony. Enki is designed for the first 10 minutes to feel effortless.

01

Install

One command via pip or cargo. Python bindings ship with the PyO3 native extension.

02

Create your first agent

Pass a role, a model, and optional tools. Await the result.

03

Compose a team

Add parallel=True to fan-out across multiple agents simultaneously.

terminal
# Python (recommended)
pip install enki-py

# Rust native
cargo add enki

# Verify
enki --version
> enki 0.1.0 (tokio 1.35)
agent.py
from enki import Agent

# Role + model + optional tools
agent = Agent(
    model='gpt-4o',
    role='You are a senior researcher.',
    tools=['web_search', 'code_exec'],
)

result = await agent.run(
    'Summarize the latest AI safety papers'
)
print(result.output)
team.py
from enki import Agent, Team

team = Team([
    Agent(role='researcher', model='gpt-4o'),
    Agent(role='writer',     model='claude-3-5'),
    Agent(role='critic',     model='gemini-2'),
])

# parallel=True fans out all agents at once
result = await team.run(
    goal='Write a deep-dive on RLHF',
    parallel=True,
)
[FAQ]

COMMON
QUESTIONS

What's the current status of Enki?
Enki is in private preview (v0.1.0). Core APIs are stable and we're actively running it in production. Public beta is targeted for Q2 2026. Join the waitlist for early access.

Which models does Enki support?
Enki is fully model-agnostic. Out of the box: OpenAI, Anthropic, Google Gemini, Mistral, and any OpenAI-compatible endpoint, including local Ollama models. You can mix models freely within a single team.

Why Rust? Can I use Enki from Python?
The core runtime is Rust for zero-cost async and minimal memory. We ship first-class Python bindings via PyO3, so you get Rust performance with Python ergonomics. A TypeScript SDK is on the roadmap.

Is Enki open-source? What will it cost?
The core framework is and will remain open-source (MIT). A managed cloud offering (Enki Cloud) with usage-based pricing is planned for teams who prefer not to self-host. Details will be announced at public beta.

How do I get help or contribute?
Join our Discord for real-time help, or open a GitHub issue. We actively review PRs, especially new memory backends, LLM adapters, and observability integrations.
[DEPLOY]

DEPLOY
INTELLI
GENCE.

Enki is in active development. Join the waitlist for private preview access. We'll transmit your invite when a slot opens.

Enki is currently in active development.
Sign up to get early access to the public beta_
NO SPAM. ONLY MISSION CRITICAL UPDATES.