# Workspace Architecture
This page maps the workspace crates (excluding the CLI and Serve crates) and how they compose, so contributors can keep the boundaries between them clear.
## Crate Layout

- `autoagents`: Facade crate that re-exports the public surface from `autoagents-core`, `autoagents-llm`, and the derive macros; holds only feature wiring and logging initialization.
- `autoagents-core`: Agent engine (agent config, executors, memory, protocol/events, vector store traits, runtime abstractions). Uses `ractor` only on non-WASM targets; compiled out for WASM via `cfg`.
- `autoagents-llm`: Provider-agnostic LLM traits plus concrete backend implementations (OpenAI, Anthropic, Ollama, etc.) and the `LLMBuilder` to configure them. Purely networking + request/response normalization, no agent logic.
- `autoagents-derive`: Proc macros for `#[agent]`, `#[tool]`, and derive helpers (`AgentOutput`, `ToolInput`, `AgentHooks`) that generate glue code while keeping downstream code ergonomic.
- `autoagents-toolkit`: Shared, reusable tools and MCP helpers. Feature-gated (`filesystem`, `search`, `mcp`) so downstream crates only pull what they need.
- `autoagents-qdrant`: Vector store implementation backed by Qdrant. Implements the `VectorStoreIndex` trait from `autoagents-core` and depends on an embedding provider via `SharedEmbeddingProvider`.
- Inference crates (optional): `autoagents-onnx`, `autoagents-burn`, and `autoagents-mistral-rs` provide local/runtime-specific inference backends. They plug into the LLM traits but are isolated to keep the core light.
- `autoagents-test-utils`: Shared fixtures and helpers for integration tests across crates.
- `examples/*`: Runnable end-to-end examples that demonstrate wiring agents, executors, and providers; each example is its own crate to keep dependencies scoped.
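As an illustrative sketch of how a downstream application consumes this layout, a manifest might depend on the facade and opt into only the extension crates it needs. The version numbers here are placeholders and the exact feature/dependency spellings are assumptions, not copied from the real manifests:

```toml
# Hypothetical downstream Cargo.toml fragment. Versions and exact
# feature names are illustrative assumptions.
[dependencies]
# Facade: re-exports autoagents-core, autoagents-llm, and the derive macros.
autoagents = "0.x"
# Pull only the toolkit features this app actually uses.
autoagents-toolkit = { version = "0.x", features = ["filesystem"] }
# Only needed if the app performs vector store operations.
autoagents-qdrant = "0.x"
```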
## Layering and Dependencies

- Top-level dependency direction is `autoagents` → (`autoagents-core`, `autoagents-llm`, `autoagents-derive`).
- `autoagents-core` depends on `autoagents-llm` for message/LLM types but keeps provider-specific details out of the core execution logic.
- `autoagents-toolkit` and `autoagents-qdrant` depend on the core traits and optionally on `autoagents-llm` providers for embeddings.
- Inference crates implement the LLM traits so they can be swapped with remote providers without changing agent code.
- Examples pull only the crates they exercise (e.g., `autoagents-qdrant` for vector store examples), which keeps build times predictable and dependencies modular.
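The swappability described above comes from agent code depending on traits rather than concrete backends. A minimal self-contained sketch of the pattern — the trait and type names below are simplified stand-ins, not the actual `autoagents-llm` API, which is async and far richer:

```rust
// Simplified stand-in for the provider trait boundary; illustrative only.
trait LlmProvider {
    fn complete(&self, prompt: &str) -> String;
}

// A "remote" backend (think: an HTTP client for a hosted model) ...
struct RemoteBackend;
impl LlmProvider for RemoteBackend {
    fn complete(&self, prompt: &str) -> String {
        format!("remote answer to: {prompt}")
    }
}

// ... and a "local" inference backend (think: ONNX/Burn runtime)
// implement the same trait.
struct LocalBackend;
impl LlmProvider for LocalBackend {
    fn complete(&self, prompt: &str) -> String {
        format!("local answer to: {prompt}")
    }
}

// Agent code depends only on the trait, never on a concrete provider,
// so backends can be swapped without changing this function.
fn run_agent(llm: &dyn LlmProvider, task: &str) -> String {
    llm.complete(task)
}

fn main() {
    println!("{}", run_agent(&RemoteBackend, "summarize"));
    println!("{}", run_agent(&LocalBackend, "summarize"));
}
```

This is the same mechanism that lets the inference crates stand in for hosted providers: both sides meet at the trait, and nothing above it knows the difference.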
## Agent/Runtime Flow (non-Serve/CLI)

- Agent definition (usually via `#[agent]` from `autoagents-derive`) describes tools, output type, and hooks.
- Executors in `autoagents-core` (Basic, ReAct, direct or actor-backed) drive the conversation loop, calling into:
  - Memory providers (sliding window, etc.) from `autoagents-core`.
  - Tools (from `autoagents-toolkit` or custom example-local tools).
  - LLM providers implementing the `LLMProvider` traits (`autoagents-llm` backends or local inference crates).
- Optional vector store operations go through `VectorStoreIndex` (e.g., `autoagents-qdrant`).
- On non-WASM targets, the actor runtime (`ractor`) manages multi-agent orchestration; on WASM targets those pieces are `cfg`-gated out.
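The loop the executor drives can be sketched in miniature. Everything below is a toy illustration under assumed names (`Llm`, `Tool`, `SlidingWindowMemory`, the `tool:` dispatch convention), not the real `autoagents-core` executor API:

```rust
// Illustrative stand-ins for the collaborators an executor calls into.
trait Llm {
    fn complete(&self, context: &[String]) -> String;
}

trait Tool {
    fn name(&self) -> &str;
    fn call(&self, input: &str) -> String;
}

// A sliding-window memory: keeps only the most recent messages.
struct SlidingWindowMemory {
    window: usize,
    messages: Vec<String>,
}

impl SlidingWindowMemory {
    fn push(&mut self, msg: String) {
        self.messages.push(msg);
        if self.messages.len() > self.window {
            self.messages.remove(0);
        }
    }
}

// One iteration of a ReAct-style loop: read memory, ask the LLM,
// dispatch to a tool if the reply requests one, record the result.
fn step(llm: &dyn Llm, tools: &[&dyn Tool], memory: &mut SlidingWindowMemory) -> String {
    let reply = llm.complete(&memory.messages);
    memory.push(format!("assistant: {reply}"));
    // Toy convention: a reply of "tool:<name>:<input>" triggers a tool call.
    if let Some(rest) = reply.strip_prefix("tool:") {
        if let Some((name, input)) = rest.split_once(':') {
            if let Some(tool) = tools.iter().find(|t| t.name() == name) {
                let out = tool.call(input);
                memory.push(format!("tool: {out}"));
                return out;
            }
        }
    }
    reply
}

// Dummy implementations to exercise the loop.
struct EchoLlm;
impl Llm for EchoLlm {
    fn complete(&self, _ctx: &[String]) -> String {
        "tool:upper:hello".to_string()
    }
}

struct UpperTool;
impl Tool for UpperTool {
    fn name(&self) -> &str { "upper" }
    fn call(&self, input: &str) -> String { input.to_uppercase() }
}

fn main() {
    let mut memory = SlidingWindowMemory { window: 4, messages: vec![] };
    let out = step(&EchoLlm, &[&UpperTool], &mut memory);
    println!("{out}");
}
```

The real executors layer hooks, typed outputs, and (on non-WASM targets) actor-based orchestration on top of this basic shape, but the division of labor between memory, tools, and the LLM trait is the same.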
## Modularity Guidelines

- Keep provider concerns inside `autoagents-llm` (or the inference crates); avoid leaking HTTP/provider structs into `autoagents-core`.
- Add reusable tools to `autoagents-toolkit`; example-specific tools should stay within their example crate.
- Prefer feature flags on extension crates (`autoagents-toolkit`, `autoagents-llm`, inference crates) so downstream users can opt in without pulling heavy dependencies.
- When adding new storage or provider integrations, implement the existing traits (`VectorStoreIndex`, `EmbeddingProvider`, `LLMProvider`) to preserve swappability and testability.
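To make the last guideline concrete, here is a minimal sketch of a new storage backend plugging in behind a trait. The trait shown is a simplified stand-in for `VectorStoreIndex` (the real one in `autoagents-core` is async and more detailed), and the in-memory store stands in for something like the Qdrant-backed implementation:

```rust
use std::collections::HashMap;

// Simplified stand-in for the core vector-store trait; new backends
// follow this shape, whatever engine sits behind them.
trait VectorStore {
    fn upsert(&mut self, id: String, embedding: Vec<f32>);
    fn nearest(&self, query: &[f32]) -> Option<String>;
}

// A toy in-memory backend: enough to satisfy the trait and be tested
// without any external service.
struct InMemoryStore {
    entries: HashMap<String, Vec<f32>>,
}

impl VectorStore for InMemoryStore {
    fn upsert(&mut self, id: String, embedding: Vec<f32>) {
        self.entries.insert(id, embedding);
    }

    // Brute-force similarity search by dot product, for illustration only.
    fn nearest(&self, query: &[f32]) -> Option<String> {
        self.entries
            .iter()
            .max_by(|(_, a), (_, b)| {
                let da: f32 = a.iter().zip(query).map(|(x, y)| x * y).sum();
                let db: f32 = b.iter().zip(query).map(|(x, y)| x * y).sum();
                da.partial_cmp(&db).unwrap()
            })
            .map(|(id, _)| id.clone())
    }
}

fn main() {
    let mut store = InMemoryStore { entries: HashMap::new() };
    store.upsert("doc-a".into(), vec![1.0, 0.0]);
    store.upsert("doc-b".into(), vec![0.0, 1.0]);
    println!("{:?}", store.nearest(&[0.9, 0.1]));
}
```

Because callers only see the trait, a backend like this is directly swappable with `autoagents-qdrant` in tests, which is exactly the testability the guideline is after.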