Introduction
AutoAgents is a cutting-edge multi-agent framework built in Rust that enables the creation of intelligent, autonomous agents powered by Large Language Models (LLMs) and the Ractor actor framework. Designed for performance, safety, and scalability, AutoAgents provides a robust foundation for building complex AI systems that can reason, act, and collaborate. With AutoAgents you can create cloud-native agents, edge-native agents, and hybrid deployments. The framework is also extensible enough that other ML models can be composed into complex pipelines via the actor framework.
What is AutoAgents?
AutoAgents is a comprehensive framework that allows developers to create AI agents that can:
- Reason: Use advanced reasoning patterns like ReAct (Reasoning and Acting) to break down complex problems
- Act: Execute tools and interact with external systems to accomplish tasks
- Remember: Maintain context and conversation history through flexible memory systems
- Collaborate: Work together in multi-agent environments (coming soon)
✨ Key Features
🤖 Agent Execution
- Multiple Executors: ReAct (Reasoning + Acting) and Basic executors with streaming support
- Structured Outputs: Type-safe JSON schema validation and custom output types
- Memory Systems: Configurable memory backends (sliding window, persistent storage)
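To illustrate the sliding-window idea, here is a conceptual sketch in plain Rust (this is only an illustration of the concept, not AutoAgents' actual `SlidingWindowMemory` implementation):

```rust
use std::collections::VecDeque;

/// Conceptual sketch: keep only the most recent `capacity` messages,
/// discarding the oldest as new ones arrive.
struct SlidingWindow {
    capacity: usize,
    messages: VecDeque<String>,
}

impl SlidingWindow {
    fn new(capacity: usize) -> Self {
        Self { capacity, messages: VecDeque::new() }
    }

    fn push(&mut self, msg: impl Into<String>) {
        if self.messages.len() == self.capacity {
            self.messages.pop_front(); // evict the oldest message
        }
        self.messages.push_back(msg.into());
    }
}

fn main() {
    let mut memory = SlidingWindow::new(2);
    memory.push("first");
    memory.push("second");
    memory.push("third"); // "first" is evicted
    println!("{:?}", memory.messages); // ["second", "third"]
}
```

The real memory backends plug into the agent via `AgentBuilder` (shown in the Getting Started example below), so the eviction policy stays decoupled from agent logic.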
🔧 Tool Integration
- Built-in Tools: File operations, web scraping, API calls
- Custom Tools: Easy integration with derive macros
- WASM Runtime: Sandboxed tool execution with cross-platform compatibility
🏗️ Flexible Architecture
- Provider Agnostic: Support for OpenAI, Anthropic, Ollama, and local models
- Multi-Platform: Native Rust, WASM for browsers, and server deployments
- Multi-Agent: Type-safe pub/sub communication and agent orchestration
🌐 Deployment Options
- Native: High-performance server and desktop applications
- Browser: Run agents directly in web browsers via WebAssembly
- Edge: Local inference with ONNX models
Getting Started
Ready to build your first agent? Here's a simple example:
```rust
use autoagents::core::agent::memory::SlidingWindowMemory;
use autoagents::core::agent::prebuilt::executor::{ReActAgent, ReActAgentOutput};
use autoagents::core::agent::task::Task;
use autoagents::core::agent::{AgentBuilder, AgentDeriveT, AgentOutputT, DirectAgent};
use autoagents::core::error::Error;
use autoagents::core::tool::{ToolCallError, ToolInputT, ToolRuntime, ToolT};
use autoagents::llm::LLMProvider;
use autoagents::llm::backends::openai::OpenAI;
use autoagents::llm::builder::LLMBuilder;
use autoagents_derive::{agent, tool, AgentHooks, AgentOutput, ToolInput};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::sync::Arc;

#[derive(Serialize, Deserialize, ToolInput, Debug)]
pub struct AdditionArgs {
    #[input(description = "Left operand for addition")]
    left: i64,
    #[input(description = "Right operand for addition")]
    right: i64,
}

#[tool(
    name = "Addition",
    description = "Use this tool to add two numbers",
    input = AdditionArgs,
)]
struct Addition {}

impl ToolRuntime for Addition {
    fn execute(&self, args: Value) -> Result<Value, ToolCallError> {
        println!("execute tool: {:?}", args);
        let typed_args: AdditionArgs = serde_json::from_value(args)?;
        let result = typed_args.left + typed_args.right;
        Ok(result.into())
    }
}

/// Math agent output with value and explanation
#[derive(Debug, Serialize, Deserialize, AgentOutput)]
pub struct MathAgentOutput {
    #[output(description = "The addition result")]
    value: i64,
    #[output(description = "Explanation of the logic")]
    explanation: String,
    #[output(description = "If the user asks a non-math question, use this to answer it.")]
    generic: Option<String>,
}

#[agent(
    name = "math_agent",
    description = "You are a Math agent",
    tools = [Addition],
    output = MathAgentOutput,
)]
#[derive(Default, Clone, AgentHooks)]
pub struct MathAgent {}

impl From<ReActAgentOutput> for MathAgentOutput {
    fn from(output: ReActAgentOutput) -> Self {
        let resp = output.response;
        if output.done && !resp.trim().is_empty() {
            // Try to parse as structured JSON first
            if let Ok(value) = serde_json::from_str::<MathAgentOutput>(&resp) {
                return value;
            }
        }
        // For streaming chunks or unparseable content, create a default response
        MathAgentOutput {
            value: 0,
            explanation: resp,
            generic: None,
        }
    }
}

pub async fn simple_agent(llm: Arc<dyn LLMProvider>) -> Result<(), Error> {
    let sliding_window_memory = Box::new(SlidingWindowMemory::new(10));

    let agent_handle = AgentBuilder::<_, DirectAgent>::new(ReActAgent::new(MathAgent {}))
        .llm(llm)
        .memory(sliding_window_memory)
        .build()
        .await?;

    println!("Running simple_agent with direct run method");
    let result = agent_handle.agent.run(Task::new("What is 1 + 1?")).await?;
    println!("Result: {:?}", result);
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Check if the API key is set
    let api_key = std::env::var("OPENAI_API_KEY").unwrap_or("".into());

    // Initialize and configure the LLM client
    let llm: Arc<OpenAI> = LLMBuilder::<OpenAI>::new()
        .api_key(api_key)  // Set the API key
        .model("gpt-4o")   // Use the GPT-4o model
        .max_tokens(512)   // Limit response length
        .temperature(0.2)  // Control response randomness (0.0-1.0)
        .build()
        .expect("Failed to build LLM");

    simple_agent(llm).await?;
    Ok(())
}
```
Community and Support
AutoAgents is developed by the Liquidos AI team and maintained by a growing community of contributors.
- 📖 Documentation: Comprehensive guides and API reference
- 💬 Discord: Join our community at discord.gg/Ghau8xYn
- 🐛 Issues: Report bugs and request features on GitHub
- 🤝 Contributing: We welcome contributions of all kinds
What's Next?
This documentation will guide you through:
- Installation and Setup: Get AutoAgents running in your environment
- Core Concepts: Understand the fundamental building blocks
- Building Agents: Create your first intelligent agents
- Advanced Features: Explore powerful capabilities
- Real-world Examples: Learn from practical implementations
Let's start building intelligent agents together! 🚀
Installation
This comprehensive guide will help you install AutoAgents and set up your development environment for both using the library and contributing to the project.
Using AutoAgents in Your Project
Prerequisites
Before using AutoAgents, ensure you have:
- Rust 1.70 or later - Install using rustup
- Cargo package manager (comes with Rust)
Verify your installation:
```bash
rustc --version
cargo --version
```
Adding AutoAgents to Your Project
Add AutoAgents to your `Cargo.toml`:
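A minimal dependency section might look like the following. The version numbers here are placeholders, not the current release; check crates.io or the repository for the latest versions:

```toml
[dependencies]
# Placeholder versions; check crates.io for the current release
autoagents = "0.1"
autoagents-derive = "0.1"

# Supporting crates used by the examples in this documentation
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
```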
Environment Variables
Set up your API keys:
```bash
# For OpenAI
export OPENAI_API_KEY="your-openai-api-key"

# For Anthropic
export ANTHROPIC_API_KEY="your-anthropic-api-key"

# For other providers, see the provider-specific documentation
```
Development Setup
If you want to contribute to AutoAgents or build from source, follow these additional steps:
Additional Prerequisites
- LeftHook - Git hooks manager for code quality
- Cargo Tarpaulin - Test coverage tool (optional)
Installing LeftHook
LeftHook is essential for maintaining code quality and is required for development.
macOS (using Homebrew):
```bash
brew install lefthook
```
Linux (Ubuntu/Debian):
```bash
# using npm
npm install -g lefthook
```
Clone and Setup Repository
```bash
# Clone the repository
git clone https://github.com/liquidos-ai/AutoAgents.git
cd AutoAgents

# Install Git hooks using lefthook
lefthook install

# Build the project
cargo build --release

# Run tests to verify setup
cargo test --all-features
```
Installing Additional Development Tools
```bash
# For test coverage (optional)
cargo install cargo-tarpaulin

# For security auditing (recommended)
cargo install cargo-audit
```
No extra tooling is needed for documentation generation: `cargo doc` ships with Cargo itself.
System Dependencies
macOS
```bash
# Install Xcode command line tools (if not already installed)
xcode-select --install

# Install additional dependencies via Homebrew
brew install pkg-config openssl
```
Linux (Ubuntu/Debian)
```bash
sudo apt update
sudo apt install -y \
  build-essential \
  pkg-config \
  libssl-dev \
  curl \
  git
```
Windows
Install the following:
- Visual Studio Build Tools or Visual Studio Community with C++ build tools
- Git for Windows
- Windows Subsystem for Linux (WSL) - recommended for better compatibility
Verification
After installation, verify everything is working:
```bash
# Check Rust installation
cargo --version
rustc --version

# Check lefthook installation (for development)
lefthook --version

# Build AutoAgents
cd AutoAgents
cargo build --all-features

# Run tests
cargo test --all-features

# Check git hooks are installed (for development)
lefthook run pre-commit
```
Git Hooks (Development)
The project uses LeftHook to manage Git hooks that ensure code quality:
Pre-commit Hooks
- Formatting: `cargo fmt --check` ensures consistent code formatting
- Linting: `cargo clippy --all-features --all-targets -- -D warnings` catches common mistakes
- Testing: `cargo test --all-features` runs the test suite
- Type Checking: `cargo check --all-features --all-targets` validates compilation
Pre-push Hooks
- Full Testing: `cargo test --all-features --release` runs the comprehensive test suite
- Documentation: `cargo doc --all-features --no-deps` ensures docs build correctly
Running Tests with Coverage
```bash
# Install tarpaulin if not already installed
cargo install cargo-tarpaulin

# Run tests with coverage
cargo tarpaulin --all-features --out html
```
Getting Help
If you encounter issues:
- Check our GitHub Issues
- Join our Discord Community
Next Steps
After successful installation:
- Explore Examples: Check out the examples directory
- API Documentation: Browse the API Documentation
- Contributing: See the Contributing Guidelines
For the latest version information, check GitHub.
Quick Start
Your First Agent
In this guide, we'll walk through creating your first AutoAgents agent step by step, explaining each concept in detail. By the end, you'll have a solid understanding of how agents work and how to build them effectively.
Understanding Agents
An agent in AutoAgents is an autonomous entity that can:
- Think: Process information and make decisions
- Act: Execute tools and functions
- Remember: Maintain context across interactions
- Communicate: Provide structured responses
Agent Anatomy
Every AutoAgents agent consists of:
- Agent Struct: The core definition of your agent
- Tools: Functions the agent can call
- Executor: The reasoning pattern (ReAct, Chain-of-Thought, etc.)
- Memory: Context management system
- LLM Provider: The language model backend
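To make the executor's role concrete, here is a heavily simplified, self-contained sketch of a ReAct-style loop. All names here are hypothetical stand-ins (the reasoning step would really be an LLM call, and AutoAgents' actual executors are far more capable); the sketch only shows the alternation between reasoning and acting:

```rust
// Hypothetical sketch of a ReAct-style loop: the executor alternates
// between "reasoning" (deciding what to do next) and "acting" (calling
// a tool), feeding each observation back into the next reasoning step.

enum Step {
    CallTool { left: i64, right: i64 }, // act: invoke the addition tool
    Finish(i64),                        // done: produce the final answer
}

// Stand-in for the LLM's reasoning: decide the next step from the state.
fn reason(observation: Option<i64>) -> Step {
    match observation {
        None => Step::CallTool { left: 1, right: 1 }, // no result yet: use the tool
        Some(result) => Step::Finish(result),         // result observed: answer
    }
}

// Stand-in for a tool execution.
fn addition_tool(left: i64, right: i64) -> i64 {
    left + right
}

fn react_loop() -> i64 {
    let mut observation = None;
    loop {
        match reason(observation) {
            Step::CallTool { left, right } => {
                observation = Some(addition_tool(left, right));
            }
            Step::Finish(answer) => return answer,
        }
    }
}

fn main() {
    println!("answer: {}", react_loop()); // answer: 2
}
```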
Step-by-Step Guide
Architecture Overview
AutoAgents is built with a modular, extensible architecture that prioritizes performance, safety, and developer experience. This document provides a comprehensive overview of the framework's design and core components.
High-Level Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│                      AutoAgents Framework                       │
├─────────────────────────────────────────────────────────────────┤
│                        Application Layer                        │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │  Custom Agents  │  │  Custom Tools   │  │  Custom Memory  │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘  │
├─────────────────────────────────────────────────────────────────┤
│                         Framework Core                          │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │   Environment   │  │  Agent System   │  │   Tool System   │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘  │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │  Memory System  │  │    Executors    │  │    Protocols    │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘  │
├─────────────────────────────────────────────────────────────────┤
│                          LLM Providers                          │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │     OpenAI      │  │    Anthropic    │  │     Ollama      │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘  │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │     Google      │  │      Groq       │  │       xAI       │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘  │
├─────────────────────────────────────────────────────────────────┤
│                        Foundation Layer                         │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │  Derive Macros  │  │  Async Runtime  │  │  Serialization  │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘  │
└─────────────────────────────────────────────────────────────────┘
```
Agents
Agents are the core building blocks of the AutoAgents framework. They represent autonomous entities that can understand tasks, reason about solutions, execute actions through tools, and provide intelligent responses. This document explores the agent system in detail.
What is an Agent?
An agent in AutoAgents is a software entity that exhibits the following characteristics:
- Autonomy: Can operate independently without constant human intervention
- Reactivity: Responds to changes in their environment
- Proactivity: Can take initiative to achieve goals
- Social Ability: Can interact with other agents and systems
- Intelligence: Uses reasoning patterns to solve problems
Agent Lifecycle
Tools
Memory Systems
Executors
Environment
LLM Providers Overview
AutoAgents supports a wide range of Large Language Model (LLM) providers, allowing you to choose the best fit for your specific use case. This document provides an overview of the supported providers and how to use them.
Supported Providers
AutoAgents currently supports the following LLM providers:
| Provider | Status |
|---|---|
| LiquidEdge (ONNX) | ✅ |
| OpenAI | ✅ |
| Anthropic | ✅ |
| Ollama | ✅ |
| DeepSeek | ✅ |
| xAI | ✅ |
| Phind | ✅ |
| Groq | ✅ |
| Google | ✅ |
| Azure OpenAI | ✅ |
Common Interface
All LLM providers implement the `LLMProvider` trait, providing a consistent interface:
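The value of such a common trait is that agent code can be written against `Arc<dyn LLMProvider>` (as in the Getting Started example) and work with any backend. The sketch below illustrates the idea with a hypothetical, simplified trait and mock providers; it is not the real `LLMProvider` trait from the crate:

```rust
use std::sync::Arc;

// Illustrative sketch only: the trait and method names below are
// hypothetical stand-ins, not autoagents' actual `LLMProvider` trait.
trait LlmProviderSketch {
    fn complete(&self, prompt: &str) -> String;
}

struct MockOpenAi;
struct MockOllama;

impl LlmProviderSketch for MockOpenAi {
    fn complete(&self, prompt: &str) -> String {
        format!("[openai] {prompt}")
    }
}

impl LlmProviderSketch for MockOllama {
    fn complete(&self, prompt: &str) -> String {
        format!("[ollama] {prompt}")
    }
}

// Code written against the trait object works with any backend,
// which is the point of a common provider interface.
fn run(llm: Arc<dyn LlmProviderSketch>) -> String {
    llm.complete("hello")
}

fn main() {
    println!("{}", run(Arc::new(MockOpenAi))); // [openai] hello
    println!("{}", run(Arc::new(MockOllama))); // [ollama] hello
}
```

Swapping providers then becomes a configuration change rather than a code change.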