Introduction
agent-sdk-rs is a minimal Rust SDK for tool-using LLM agents.
Design goals:
- explicit agent loop (`query`, `query_stream`)
- provider swap without loop rewrite (`ChatModel` trait)
- JSON-schema tools + dependency injection
- explicit completion support (`ToolOutcome::Done`)
- hard safety bounds (`max_iterations`)
Current provider adapters:
- Anthropic (`AnthropicModel`)
- Google Gemini (`GoogleModel`)
- xAI Grok (`GrokModel`)
Core modules:
- `agent`: run loop, events, builder
- `llm`: provider interface + adapters
- `tools`: tool specs, argument validation, DI map
- `error`: runtime + provider + schema errors
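The DI map in `tools` can be pictured as a type-keyed store: each dependency is registered under its concrete type and handlers look it up by that type. A minimal std-only sketch of the idea (the `DepMap` name and its methods are illustrative, not the crate's actual API):

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// Hypothetical sketch of a type-keyed dependency map.
// Tool handlers look dependencies up by their concrete type.
#[derive(Default)]
struct DepMap {
    items: HashMap<TypeId, Box<dyn Any>>,
}

impl DepMap {
    // Store one value per concrete type; inserting the same type
    // again replaces the first value (mirroring an "override").
    fn insert<T: 'static>(&mut self, value: T) {
        self.items.insert(TypeId::of::<T>(), Box::new(value));
    }

    // Borrow the stored value for type T, if any.
    fn get<T: 'static>(&self) -> Option<&T> {
        self.items
            .get(&TypeId::of::<T>())
            .and_then(|boxed| boxed.downcast_ref::<T>())
    }
}
```

In the SDK itself, tool handlers receive a `deps` argument with a similar `get::<T>()` shape, as the Dependency Override example shows.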
Evidence in repo:
- stop semantics + loop guards: `src/agent/tests.rs`
- tool schema checks: `src/tools/mod.rs`
- provider adapters: `src/llm/`
Quickstart
Install
```toml
[dependencies]
agent-sdk-rs = "0.1"
```
Basic Query
```rust
use agent_sdk_rs::{Agent, AnthropicModel};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let model = AnthropicModel::from_env("claude-sonnet-4-5")?;
    let mut agent = Agent::builder().model(model).build()?;
    let answer = agent.query("Summarize this repo in one line").await?;
    println!("{answer}");
    Ok(())
}
```
Streaming Query
```rust
use agent_sdk_rs::{Agent, AgentEvent, GoogleModel};
use futures_util::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let model = GoogleModel::from_env("gemini-2.5-flash")?;
    let mut agent = Agent::builder().model(model).build()?;
    let stream = agent.query_stream("Solve this step by step");
    futures_util::pin_mut!(stream);
    while let Some(event) = stream.next().await {
        match event? {
            AgentEvent::ToolCall { tool, .. } => println!("tool: {tool}"),
            AgentEvent::FinalResponse { content } => println!("final: {content}"),
            _ => {}
        }
    }
    Ok(())
}
```
Environment Variables
- Anthropic: `ANTHROPIC_API_KEY`
- Google: `GOOGLE_API_KEY` or `GEMINI_API_KEY`
- xAI: `XAI_API_KEY` or `GROK_API_KEY`
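Since Google and xAI each accept either of two variables, a small pre-flight check can produce a clearer startup error than a failed provider call later. A hypothetical std-only helper (not part of the crate; `from_env` does its own lookup internally):

```rust
// Return the value of the first set variable from `keys`,
// or an error naming all the keys that were checked.
// Hypothetical helper for early, readable configuration errors.
fn first_env(keys: &[&str]) -> Result<String, String> {
    keys.iter()
        .find_map(|k| std::env::var(k).ok())
        .ok_or_else(|| format!("none of {keys:?} is set"))
}
```

For example, calling `first_env(&["GOOGLE_API_KEY", "GEMINI_API_KEY"])?` before constructing `GoogleModel` surfaces a missing key at startup.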
Usage Patterns
This section is tuned for fast copy/paste and flags common agent runtime pitfalls.
Pattern: Explicit Completion for Autonomous Loops
Use `require_done_tool(true)` when agents should not stop just because the model emits plain text.
```rust
use agent_sdk_rs::{Agent, AnthropicModel};
use agent_sdk_rs::tools::claude_code::all_tools;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let model = AnthropicModel::from_env("claude-sonnet-4-5")?;
    let mut agent = Agent::builder()
        .model(model)
        .tools(all_tools())
        .require_done_tool(true)
        .max_iterations(64)
        .build()?;
    let _ = agent.query("Inspect repository and return risks").await?;
    Ok(())
}
```
Pattern: Keep Tool Inputs Strict
Use `additionalProperties: false` and `required` fields in each tool schema.
Why:
- prevents silent typo args
- clearer model contract
- safer retries
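For reference, a strict schema for a single-argument tool might look like this (the `path` property is just an illustrative name):

```json
{
  "type": "object",
  "properties": {
    "path": { "type": "string" }
  },
  "required": ["path"],
  "additionalProperties": false
}
```

With `additionalProperties: false`, a misspelled argument such as `paht` is rejected at validation time instead of being silently ignored.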
Common Pitfalls
| Pitfall | Symptom | Fix |
|---|---|---|
| No explicit stop tool for autonomous runs | early/ambiguous completion | add a `done` tool + `require_done_tool(true)` |
| Loose tool schema | tool receives malformed args | tighten JSON schema + `required` keys |
| No iteration cap | infinite tool loops | set `max_iterations` |
| Mixed provider-specific assumptions | adapter swap breaks behavior | stay inside `ChatModel` + shared `Model*` types |
Evidence
- `done` + max-iteration behavior: `src/agent/tests.rs`
- argument validation logic: `src/tools/mod.rs`
Comparison
| Capability | agent-sdk-rs | Abstraction-heavy frameworks |
|---|---|---|
| Loop visibility | explicit event stream | often hidden inside planners |
| Tooling | JSON-schema tool contracts | framework-specific wrappers |
| Completion control | optional explicit done path | often implicit stop logic |
| Provider swap | `ChatModel` adapter boundary | frequently runtime/provider coupled |
| Failure controls | retries + backoff + max iterations | varies per stack |
This crate stays intentionally small:
- easier to embed in existing binaries
- easier to reason about runtime behavior
- easier to test with deterministic mock providers
Examples
Local Scripted Loop
Source of truth from repository:
```rust
use std::collections::VecDeque;
use std::error::Error;
use std::sync::Mutex;
use agent_sdk_rs::{
Agent, AgentEvent, ChatModel, ModelCompletion, ModelMessage, ModelToolCall, ModelToolChoice,
ModelToolDefinition, ProviderError, ToolError, ToolOutcome, ToolSpec,
};
use async_trait::async_trait;
use futures_util::StreamExt;
use serde_json::json;
#[derive(Default)]
struct ScriptedModel {
responses: Mutex<VecDeque<Result<ModelCompletion, ProviderError>>>,
}
impl ScriptedModel {
fn new(responses: Vec<Result<ModelCompletion, ProviderError>>) -> Self {
Self {
responses: Mutex::new(VecDeque::from(responses)),
}
}
}
#[async_trait]
impl ChatModel for ScriptedModel {
async fn invoke(
&self,
_messages: &[ModelMessage],
_tools: &[ModelToolDefinition],
_tool_choice: ModelToolChoice,
) -> Result<ModelCompletion, ProviderError> {
let mut guard = self.responses.lock().expect("lock poisoned");
guard.pop_front().unwrap_or_else(|| {
Err(ProviderError::Response(
"scripted model exhausted responses".to_string(),
))
})
}
}
fn add_tool() -> ToolSpec {
ToolSpec::new("add", "add two numbers")
.with_schema(json!({
"type": "object",
"properties": {
"a": {"type": "integer"},
"b": {"type": "integer"}
},
"required": ["a", "b"],
"additionalProperties": false
}))
.expect("valid schema")
.with_handler(|args, _deps| async move {
let a = args
.get("a")
.and_then(|v| v.as_i64())
.ok_or_else(|| ToolError::Execution("a missing".to_string()))?;
let b = args
.get("b")
.and_then(|v| v.as_i64())
.ok_or_else(|| ToolError::Execution("b missing".to_string()))?;
Ok(ToolOutcome::Text((a + b).to_string()))
})
}
fn done_tool() -> ToolSpec {
ToolSpec::new("done", "complete and return")
.with_schema(json!({
"type": "object",
"properties": {
"message": {"type": "string"}
},
"required": ["message"],
"additionalProperties": false
}))
.expect("valid schema")
.with_handler(|args, _deps| async move {
let message = args
.get("message")
.and_then(|v| v.as_str())
.ok_or_else(|| ToolError::Execution("message missing".to_string()))?;
Ok(ToolOutcome::Done(message.to_string()))
})
}
fn build_agent(responses: Vec<Result<ModelCompletion, ProviderError>>) -> Agent {
Agent::builder()
.model(ScriptedModel::new(responses))
.tool(add_tool())
.tool(done_tool())
.build()
.expect("agent builds")
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let mut agent = build_agent(vec![
Ok(ModelCompletion {
text: Some("Working on it".to_string()),
thinking: Some("Need arithmetic".to_string()),
tool_calls: vec![ModelToolCall {
id: "call_1".to_string(),
name: "add".to_string(),
arguments: json!({"a": 2, "b": 3}),
}],
usage: None,
}),
Ok(ModelCompletion {
text: None,
thinking: None,
tool_calls: vec![ModelToolCall {
id: "call_2".to_string(),
name: "done".to_string(),
arguments: json!({"message": "2 + 3 = 5"}),
}],
usage: None,
}),
]);
let final_response = agent.query("What is 2 + 3?").await?;
println!("query final: {final_response}");
let mut streaming_agent = build_agent(vec![
Ok(ModelCompletion {
text: Some("Streaming run".to_string()),
thinking: Some("Will call add and done".to_string()),
tool_calls: vec![ModelToolCall {
id: "call_3".to_string(),
name: "add".to_string(),
arguments: json!({"a": 10, "b": 7}),
}],
usage: None,
}),
Ok(ModelCompletion {
text: None,
thinking: None,
tool_calls: vec![ModelToolCall {
id: "call_4".to_string(),
name: "done".to_string(),
arguments: json!({"message": "10 + 7 = 17"}),
}],
usage: None,
}),
]);
let stream = streaming_agent.query_stream("What is 10 + 7?");
futures_util::pin_mut!(stream);
while let Some(event) = stream.next().await {
match event? {
AgentEvent::MessageStart { message_id, role } => {
println!("message start [{message_id}] {role:?}")
}
AgentEvent::MessageComplete {
message_id,
content,
} => println!("message complete [{message_id}]: {content}"),
AgentEvent::HiddenUserMessage { content } => println!("hidden: {content}"),
AgentEvent::StepStart {
step_id,
title,
step_number,
} => println!("step start [{step_id}] #{step_number} {title}"),
AgentEvent::StepComplete {
step_id,
status,
duration_ms,
} => println!("step complete [{step_id}] {status:?} ({duration_ms} ms)"),
AgentEvent::Thinking { content } => println!("thinking: {content}"),
AgentEvent::Text { content } => println!("text: {content}"),
AgentEvent::ToolCall {
tool,
args_json,
tool_call_id,
} => println!("tool call [{tool_call_id}] {tool}: {args_json}"),
AgentEvent::ToolResult {
tool,
result_text,
tool_call_id,
is_error,
} => println!("tool result [{tool_call_id}] {tool}: {result_text} (error={is_error})"),
AgentEvent::FinalResponse { content } => println!("stream final: {content}"),
}
}
Ok(())
}
```
Run:

```shell
cargo run --example local_loop
```
Dependency Override
Source of truth from repository:
```rust
use std::collections::VecDeque;
use std::error::Error;
use std::sync::Mutex;
use agent_sdk_rs::{
Agent, ChatModel, ModelCompletion, ModelMessage, ModelToolCall, ModelToolChoice,
ModelToolDefinition, ProviderError, ToolOutcome, ToolSpec,
};
use async_trait::async_trait;
use serde_json::json;
#[derive(Default)]
struct ScriptedModel {
responses: Mutex<VecDeque<Result<ModelCompletion, ProviderError>>>,
}
impl ScriptedModel {
fn new(responses: Vec<Result<ModelCompletion, ProviderError>>) -> Self {
Self {
responses: Mutex::new(VecDeque::from(responses)),
}
}
}
#[async_trait]
impl ChatModel for ScriptedModel {
async fn invoke(
&self,
_messages: &[ModelMessage],
_tools: &[ModelToolDefinition],
_tool_choice: ModelToolChoice,
) -> Result<ModelCompletion, ProviderError> {
let mut guard = self.responses.lock().expect("lock poisoned");
guard.pop_front().unwrap_or_else(|| {
Err(ProviderError::Response(
"scripted model exhausted responses".to_string(),
))
})
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
let read_dep_tool = ToolSpec::new("read_dep", "read injected value")
.with_schema(json!({
"type": "object",
"properties": {},
"required": [],
"additionalProperties": false
}))?
.with_handler(|_args, deps| {
let value = deps.get::<u32>().map(|v| *v).unwrap_or_default();
async move { Ok(ToolOutcome::Text(value.to_string())) }
});
let done_tool = ToolSpec::new("done", "finish")
.with_schema(json!({
"type": "object",
"properties": {
"message": {"type": "string"}
},
"required": ["message"],
"additionalProperties": false
}))?
.with_handler(|args, _deps| async move {
let message = args
.get("message")
.and_then(|v| v.as_str())
.unwrap_or("done");
Ok(ToolOutcome::Done(message.to_string()))
});
let model = ScriptedModel::new(vec![
Ok(ModelCompletion {
text: None,
thinking: None,
tool_calls: vec![ModelToolCall {
id: "call_1".to_string(),
name: "read_dep".to_string(),
arguments: json!({}),
}],
usage: None,
}),
Ok(ModelCompletion {
text: None,
thinking: None,
tool_calls: vec![ModelToolCall {
id: "call_2".to_string(),
name: "done".to_string(),
arguments: json!({"message": "dependency override applied"}),
}],
usage: None,
}),
]);
let mut agent = Agent::builder()
.model(model)
.tool(read_dep_tool)
.tool(done_tool)
.dependency(1_u32)
.dependency_override(9_u32)
.build()?;
let response = agent.query("use dependency").await?;
println!("final: {response}");
Ok(())
}
```
Run:

```shell
cargo run --example di_override
```
API Reference
Primary API docs: docs.rs/agent-sdk-rs
When to use:
- method signatures
- trait/type details
- feature-flag specific items