BEAM-NATIVE AGENT ECOSYSTEM

From LLM calls to
autonomous agents

7 composable packages. One unified stack.
Run 10,000+ agents on a single BEAM node.

10,000+
agents/node
~200MB
RAM @ 5k agents
<1ms
message latency
7
packages
PACKAGE ECOSYSTEM 4 layers • composable by design
jido_coder app

AI coding agent with file operations, git integration, and test execution

jido_ai ai

LLM-powered agents with token/cost tracking, tool calling, and streaming. Combines jido + req_llm + llmdb.

jido_behaviortree ai

Behavior tree execution for complex agent decision-making. Composable nodes, conditions, and actions.

jido core

BEAM-native agent framework. OTP supervision, isolated processes, 10k+ agents per node.

jido_action core

Schema-based action validation. Required fields, defaults, type constraints.
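The idea behind schema-based validation can be sketched in plain Elixir. This is illustrative only, not jido_action's actual API; the schema shape and field names here are assumptions:

```elixir
# Hand-rolled sketch of the pattern jido_action provides:
# required fields, defaults, and type constraints.
defmodule SchemaCheck do
  @schema [
    topic: [type: :string, required: true],
    depth: [type: :integer, default: 1]
  ]

  def validate(params) do
    Enum.reduce_while(@schema, {:ok, %{}}, fn {key, rules}, {:ok, acc} ->
      case fetch(params, key, rules) do
        {:ok, value}    -> {:cont, {:ok, Map.put(acc, key, value)}}
        {:error, _} = e -> {:halt, e}
      end
    end)
  end

  defp fetch(params, key, rules) do
    case Map.fetch(params, key) do
      {:ok, value} ->
        if correct_type?(value, rules[:type]),
          do: {:ok, value},
          else: {:error, {:invalid_type, key}}

      :error ->
        if rules[:required],
          do: {:error, {:missing, key}},
          else: {:ok, rules[:default]}
    end
  end

  defp correct_type?(v, :string), do: is_binary(v)
  defp correct_type?(v, :integer), do: is_integer(v)
end
```

Missing optional fields fall back to their defaults, while a missing required field or a type mismatch halts validation with a tagged error.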

jido_signal core

Pub/sub signaling between agents. Decoupled coordination via message-passing.
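The decoupled-coordination pattern can be shown with plain Elixir's built-in Registry. This illustrates the pub/sub idea only; it is not jido_signal's actual API:

```elixir
# A duplicate-key Registry acts as a topic bus.
{:ok, _} = Registry.start_link(keys: :duplicate, name: SignalBus)

# A subscriber registers interest in a topic key...
{:ok, _} = Registry.register(SignalBus, :task_done, nil)

# ...and a publisher broadcasts without knowing who is listening.
Registry.dispatch(SignalBus, :task_done, fn entries ->
  for {pid, _meta} <- entries, do: send(pid, {:signal, :task_done, "result"})
end)

# Here the subscriber is this same process, so the signal
# lands in our own mailbox.
result =
  receive do
    {:signal, :task_done, payload} -> payload
  after
    1_000 -> :timeout
  end
```

Neither side holds a reference to the other; agents coordinate purely through the topic key.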

req_llm foundation

HTTP client for LLM APIs. Built on Req with retries, rate limiting, and streaming support.

llmdb foundation

Model registry and metadata. Token limits, pricing, capabilities for all major providers.

# dependency flow: packages compose bottom-up
                ┌─────────────┐
                │ jido_coder  │ ← AI coding workflows
                └──────┬──────┘
                ┌──────┴──────┐
                │   jido_ai   │ ← LLM-powered agents
                └──────┬──────┘
       ┌───────────────┼───────────────┐
       │               │               │
┌──────┴──────┐ ┌──────┴──────┐ ┌──────┴──────┐
│    jido     │ │ jido_action │ │ jido_signal │
└──────┬──────┘ └─────────────┘ └─────────────┘
       ├────────────────┐
       │                │
┌──────┴──────┐ ┌───────┴───────┐
│   req_llm   │ │     llmdb     │
└─────────────┘ └───────────────┘
CHOOSE YOUR STACK
mix.exs
# Full stack: AI coding agents
def deps do
  [
    {:jido_coder, "~> 0.1.0"}
    # includes jido_ai, jido, req_llm, llmdb
  ]
end
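A lighter option, for agents that do not need LLM calls, is to pull in only the core runtime. The version constraint below is an assumption; check the packages on Hex for current releases:

```elixir
# Core only: agent runtime without LLM integration
def deps do
  [
    {:jido, "~> 0.1.0"}  # version assumed
    # add {:jido_action, "~> 0.1.0"} and {:jido_signal, "~> 0.1.0"} as needed
  ]
end
```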
WHY BEAM-NATIVE?
Isolated Processes

Each agent runs in its own BEAM process with isolated state. No shared memory, no locks.

OTP Supervision

When agents crash, supervisors restart them in milliseconds. No external orchestrator needed.

Native Concurrency

Preemptive scheduler handles 10k+ agents per node. True parallelism on multi-core.
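The three claims above can be demonstrated with nothing but the Elixir standard library; this minimal, self-contained sketch kills a supervised process and watches the supervisor bring it back:

```elixir
defmodule Crashy do
  use GenServer

  # Registered under the module name so we can look it up after a restart.
  def start_link(_), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)
  def init(_), do: {:ok, nil}
end

# One supervised child; each child is its own isolated BEAM process.
{:ok, _sup} = Supervisor.start_link([{Crashy, []}], strategy: :one_for_one)

old_pid = Process.whereis(Crashy)
Process.exit(old_pid, :kill)  # simulate a crash
Process.sleep(100)            # give the supervisor a beat to restart it

new_pid = Process.whereis(Crashy)
```

After the kill, `new_pid` is a live process distinct from `old_pid`: the supervisor restarted the agent with no external orchestrator involved.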

QUICK START up and running in under 2 minutes
lib/my_app/research_agent.ex
"syntax-keyword">defmodule ResearchAgent "syntax-keyword">do
"syntax-keyword">use JidoAI.Agent
"syntax-keyword">def init(args) "syntax-keyword">do
{:ok, %{
model: {:openai, "gpt-4"},
budget: 10_000,
topic: args[:topic]
}}
"syntax-keyword">end
"syntax-keyword">def handle_action(:research, state) "syntax-keyword">do
"syntax-keyword">case JidoAI.chat(state, prompt) "syntax-keyword">do
{:ok, response, new_state} ->
{:ok, %{new_state | findings: response}}
{:error, :budget_exceeded} ->
{:error, :out_of_tokens}
"syntax-keyword">end
"syntax-keyword">end
"syntax-keyword">end
# Start 1,000 supervised research agents
"syntax-keyword">for topic <- topics "syntax-keyword">do
JidoAI.start_agent(ResearchAgent, topic: topic)
"syntax-keyword">end

Ready to build?

Start with the getting started guide or explore production examples.