NanoAgent is a micro‑framework (≈ 1 kLOC) for running LLM‑powered agents in pure TypeScript, with zero runtime dependencies outside of Bun. All you need is your favorite chat model: OpenAI, or a local engine like Ollama.
## Why another agent runtime?
The Model Context Protocol (MCP) is an opportunity to de‑clutter agent frameworks: most features can live outside the runtime as tools, retrieval sources, and other resources in a standard JSON envelope, and that context can then be handed to any model. NanoAgent focuses on one job, the control loop, and leaves RAG, vector search, databases, and cloud calls to MCP‑compatible tools. The result is a tiny, transparent core you can audit in an afternoon.
Note that this project implements a few extensions over the current MCP and tool‑calling specifications.
- `AgentState` is immutable; nothing mutates in place.
- `stepAgent` drives exactly one model call → tool call → state update.
- Built‑in halt states: `await_user`, `tool_error`, `done`, `stopped`.
- `Sequence` objects for wizard‑style flows.

```ts
import {
  type AgentContext,
  type AgentState,
  type ChatMemory,
  ChatModel,
  Llama32,
  SystemMessage,
  ToolRegistry,
  UserMessage,
  content,
  lastMessageIncludes,
  loopAgent,
  tool,
} from "@hbbio/nanoagent";

// 1) a trivial tool
const echo = tool(
  "echo",
  "Echo user input back in uppercase",
  {
    type: "object",
    properties: { txt: { type: "string" } },
    required: ["txt"],
  },
  async ({ txt }: { txt: string }) => content(txt.toUpperCase()),
);

// 2) agent context
const ctx: AgentContext<ChatMemory> = {
  registry: new ToolRegistry({ echo }),
  isFinal: lastMessageIncludes("HELLO"),
};

// 3) initial state
const init: AgentState<ChatMemory> = {
  model: new ChatModel(Llama32),
  messages: [
    SystemMessage(
      "You must call the `echo` tool once. Reply very concisely and NEVER ASK any further question to the user!",
    ),
    UserMessage(
      "Call the tool with the parameter `hello` and tell me what is the response",
    ),
  ],
};

// 4) run and display the whole conversation
const done = await loopAgent(ctx, init);
console.log(done.messages);
```
Run it with Bun:
```sh
bun run examples/echo.ts
```
| Concept | What it holds |
|---|---|
| `AgentState` | Immutable snapshot: model driver, messages, memory, halt |
| `AgentContext` | Pure hooks: goal test, tool registry, controller, etc. |
| `stepAgent` | One transition; may call the model and at most one tool |
| `loopAgent` | While‑loop around `stepAgent` until a halt condition |
| `Sequence` | Wrapper that chains multi‑stage flows |
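To drive a single transition yourself, call `stepAgent` directly. A minimal sketch, reusing `ctx` and `init` from the quick start above, and assuming `stepAgent` takes the same `(ctx, state)` pair as `loopAgent` and that the halt state is readable on the returned snapshot as `halt`:

```ts
import { stepAgent } from "@hbbio/nanoagent";

// One transition: at most one model call and at most one tool call.
const next = await stepAgent(ctx, init);

// The new snapshot carries the halt state, if any:
// "await_user", "tool_error", "done", or "stopped".
console.log(next.halt);
```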
Memory is plain JSON. Tools may patch it by returning `{ memPatch: (state) => newState }`.
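For instance, a tool could keep a counter in memory. A minimal sketch, assuming the handler's return value can carry a `memPatch` alongside the usual content (the `count` field and the state shape spread here are illustrative, not part of the API):

```ts
import { content, tool } from "@hbbio/nanoagent";

// Hypothetical tool: increments a `count` field in agent memory.
const counter = tool(
  "counter",
  "Increment a counter stored in memory",
  { type: "object", properties: {}, required: [] },
  async () => ({
    ...content("counter incremented"),
    // Pure patch: returns a fresh state, nothing mutates in place.
    memPatch: (state: any) => ({
      ...state,
      memory: { ...state.memory, count: (state.memory?.count ?? 0) + 1 },
    }),
  }),
);
```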
```ts
const seq1 = new Sequence(ctxStage1, state1, { maxSteps: 8 });
const { final, history } = await runWorkflow(seq1);
```
Each stage may produce a fresh context and state; user input handling can be preserved across stages.
NanoAgent ships a tiny MCP server helper (`serveMCP`) and an MCP client (`MCPClient`). Your tools can therefore live outside the agent process, behind an HTTP endpoint, yet feel local.
```ts
import { ToolRegistry, serveMCP, tool, content } from "@hbbio/nanoagent";

const tools = {
  echo: tool(
    "echo",
    "Echo input back",
    {
      type: "object",
      properties: { text: { type: "string" } },
      required: ["text"],
    },
    async ({ text }) => content(`Echo: ${text}`),
  ),
};

serveMCP(new ToolRegistry(tools), 3123); // → http://localhost:3123/v1/…
```
```ts
import { MCPClient, ToolRegistry } from "@hbbio/nanoagent";

const mcp = new MCPClient("http://localhost:3123");
const echoT = await mcp.registeredTool("echo");

const ctx = {
  registry: new ToolRegistry({ echo: echoT }),
  /* … other AgentContext props … */
};
```
`MCPClient` keeps a 5‑minute cache of the `/v1/tools` list and offers:
| Method | Purpose |
|---|---|
| `listTools()` | Discover server capabilities |
| `tool(name)` | Fetch a single tool’s JSON‑Schema |
| `callTool(name, input, memory?)` | Plain HTTP tool call |
| `registeredTool(name)` | Wrap a remote tool so agents can call it seamlessly |
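For a quick smoke test against the server above, these methods can also be used directly (a sketch; the exact result shape depends on what the remote tool returns):

```ts
import { MCPClient } from "@hbbio/nanoagent";

const mcp = new MCPClient("http://localhost:3123");

// Discover what the server exposes, then call a tool over plain HTTP.
console.log(await mcp.listTools());
const result = await mcp.callTool("echo", { text: "hi" });
console.log(result);
```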
```sh
bun add @hbbio/nanoagent
# or: npm i @hbbio/nanoagent, pnpm add @hbbio/nanoagent, yarn add @hbbio/nanoagent
```
The package is published as ES2020 modules, with type definitions included.
To use OpenAI models, set your API key:

```sh
export CHATGPT_KEY=...
```
And then create instances with:
```ts
import { ChatModel, ChatGPT4o } from "@hbbio/nanoagent";

const model = new ChatModel(ChatGPT4o);
```
or one of the predefined model names. Call any present or future model using `chatgpt("name")`.
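For example, assuming `chatgpt` is exported alongside the model presets (the model name here is just an illustration):

```ts
import { ChatModel, chatgpt } from "@hbbio/nanoagent";

// Address any OpenAI chat model by name, including future ones.
const model = new ChatModel(chatgpt("gpt-4o-mini"));
```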
By default, the Ollama host is `http://localhost:11434`, but you can optionally define another host:

```sh
export OLLAMA_HOST=...
```
Then run any model, such as:
```ts
import { ChatModel, MistralSmall } from "@hbbio/nanoagent";

const model = new ChatModel(MistralSmall);
```
Pass `{ debug: true }` to `stepAgent`, `loopAgent`, or `Sequence`. You will see:
```
STEP id=- msgs=3 last=assistant halted=-
💬 { role: "assistant", … }
💾 memory keys []
```
Provide your own logger via `options.logger`.
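A minimal sketch, assuming the logger is a function that receives each formatted log line (check the exported types for the exact signature):

```ts
const lines: string[] = [];

const final = await loopAgent(ctx, init, {
  debug: true,
  logger: (line: string) => lines.push(line), // assumed signature
});
```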
Contributions are welcome! Make sure that all tests pass and that coverage includes your new code. Coding style guidelines will be clarified later; in the meantime, try to respect the current project style. No dependency bump proposals, please.
Written by Henri Binsztok and released under the MIT license.