Introduction
Welcome to AutoAgents, a cutting-edge multi-agent framework built in Rust that enables the creation of intelligent, autonomous agents powered by Large Language Models (LLMs). Designed with performance, safety, and scalability at its core, AutoAgents provides a robust foundation for building complex AI systems that can reason, act, and collaborate.
What is AutoAgents?
AutoAgents is a comprehensive framework that allows developers to create AI agents that can:
- Reason: Use advanced reasoning patterns like ReAct (Reasoning and Acting) to break down complex problems
- Act: Execute tools and interact with external systems to accomplish tasks
- Remember: Maintain context and conversation history through flexible memory systems
- Collaborate: Work together in multi-agent environments (coming soon)
Why Choose AutoAgents?
🚀 Performance First
Built in Rust, AutoAgents delivers exceptional performance with:
- Zero-cost abstractions
- Memory safety without garbage collection
- Async/await support for high concurrency
- Efficient tool execution and memory management
🔧 Extensible Architecture
- Modular Design: Plugin-based architecture for easy customization
- Provider Agnostic: Support for multiple LLM providers (OpenAI, Anthropic, Ollama, and more)
- Custom Tools: Easy integration of external tools and services
- Flexible Memory: Configurable memory backends for different use cases
🎯 Developer Experience
- Declarative Macros: Define agents with simple, readable syntax
- Type Safety: Compile-time guarantees with structured outputs
- Rich Tooling: Comprehensive error handling and debugging support
- Extensive Documentation: Clear guides and examples for every feature
🌐 Multi-Provider Support
AutoAgents supports a wide range of LLM providers out of the box:
| Provider | Status | Features |
|---|---|---|
| OpenAI | ✅ | GPT-4, GPT-3.5, Function Calling |
| Anthropic | ✅ | Claude 3, Tool Use |
| Ollama | ✅ | Local Models, Custom Models |
| Google | ✅ | Gemini Pro, Gemini Flash |
| Groq | ✅ | Fast Inference |
| DeepSeek | ✅ | Code-focused Models |
| xAI | ✅ | Grok Models |
| Phind | ✅ | Developer-focused |
| Azure OpenAI | ✅ | Enterprise Integration |
Key Features
ReAct Framework
AutoAgents implements the ReAct (Reasoning and Acting) pattern, allowing agents to:
- Reason about problems step-by-step
- Act by calling tools and functions
- Observe the results and adjust their approach
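Conceptually, each turn feeds tool results back into the next reasoning step. The sketch below is an illustrative, self-contained toy, not AutoAgents' actual executor; `Step`, `llm_reason`, and `call_tool` are hypothetical stand-ins:

```rust
// Toy ReAct loop: reason, act via a tool, observe, repeat until done.
struct Step {
    tool_call: Option<String>, // Some(call) => the model wants to act
    answer: String,            // final answer when tool_call is None
}

fn llm_reason(context: &str) -> Step {
    // Stub standing in for an LLM call.
    if context.contains("Observation") {
        Step { tool_call: None, answer: "4".into() }
    } else {
        Step { tool_call: Some("Addition(2, 2)".into()), answer: String::new() }
    }
}

fn call_tool(call: &str) -> String {
    // Stub standing in for real tool dispatch.
    format!("\nObservation: {call} = 4")
}

fn react_loop(task: &str) -> String {
    let mut context = task.to_string();
    loop {
        let step = llm_reason(&context); // Reason
        match step.tool_call {
            Some(call) => context.push_str(&call_tool(&call)), // Act + Observe
            None => return step.answer, // Done
        }
    }
}
```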
Structured Outputs
Define type-safe outputs for your agents using JSON Schema:
```rust
#[derive(Debug, Serialize, Deserialize, AgentOutput)]
pub struct WeatherOutput {
    #[output(description = "Temperature in Celsius")]
    temperature: f64,
    #[output(description = "Weather description")]
    description: String,
}
```
Tool Integration
Build powerful agents by connecting them to external tools:
- File system operations
- Web scraping and API calls
- Database interactions
- Custom business logic
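For instance, a custom tool follows the same `#[tool]` macro pattern as the Getting Started example below. The `GreetArgs` tool here is hypothetical, and it assumes string-returning tools are supported:

```rust
use autoagents::llm::{ToolCallError, ToolInputT, ToolT};
use autoagents_derive::{tool, ToolInput};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, ToolInput, Debug)]
pub struct GreetArgs {
    #[input(description = "Name of the person to greet")]
    name: String,
}

// Hypothetical tool: wire custom business logic into an agent this way.
#[tool(
    name = "Greet",
    description = "Use this tool to greet a user by name",
    input = GreetArgs,
)]
fn greet(args: GreetArgs) -> Result<String, ToolCallError> {
    Ok(format!("Hello, {}!", args.name))
}
```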
Memory Systems
Choose from various memory backends:
- Sliding Window: Keep recent conversation history
- Persistent: Long-term memory storage
- Custom: Implement your own memory strategy
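For example, the sliding-window backend used in the Getting Started example below is constructed with a fixed message capacity:

```rust
use autoagents::core::memory::SlidingWindowMemory;

// Keep only the 10 most recent messages in the agent's context.
let memory = Box::new(SlidingWindowMemory::new(10));
```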
Use Cases
AutoAgents is perfect for building:
🤖 AI Assistants
- Customer support chatbots
- Personal productivity assistants
- Domain-specific expert systems
🛠️ Development Tools
- Code generation and review agents
- Automated testing assistants
- Documentation generators
📊 Data Processing
- Document analysis and summarization
- Data extraction and transformation
- Report generation
🔗 Integration Agents
- API orchestration
- Workflow automation
- System monitoring and alerting
Getting Started
Ready to build your first agent? Here's a simple example:
```rust
use autoagents::core::agent::prebuilt::react::{ReActAgentOutput, ReActExecutor};
use autoagents::core::agent::{AgentBuilder, AgentDeriveT, AgentOutputT};
use autoagents::core::environment::Environment;
use autoagents::core::error::Error;
use autoagents::core::memory::SlidingWindowMemory;
use autoagents::core::protocol::{Event, TaskResult};
use autoagents::core::runtime::{Runtime, SingleThreadedRuntime};
use autoagents::init_logging;
use autoagents::llm::{ToolCallError, ToolInputT, ToolT};
use autoagents::{llm::backends::openai::OpenAI, llm::builder::LLMBuilder};
use autoagents_derive::{agent, tool, AgentOutput, ToolInput};
use colored::*;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use std::sync::Arc;
use tokio_stream::{wrappers::ReceiverStream, StreamExt};

#[derive(Serialize, Deserialize, ToolInput, Debug)]
pub struct AdditionArgs {
    #[input(description = "Left Operand for addition")]
    left: i64,
    #[input(description = "Right Operand for addition")]
    right: i64,
}

#[tool(
    name = "Addition",
    description = "Use this tool to Add two numbers",
    input = AdditionArgs,
)]
fn add(args: AdditionArgs) -> Result<i64, ToolCallError> {
    Ok(args.left + args.right)
}

/// Math agent output with Value and Explanation
#[derive(Debug, Serialize, Deserialize, AgentOutput)]
pub struct MathAgentOutput {
    #[output(description = "The addition result")]
    value: i64,
    #[output(description = "Explanation of the logic")]
    explanation: String,
}

#[agent(
    name = "math_agent",
    description = "You are a Math agent",
    tools = [Addition],
    output = MathAgentOutput
)]
pub struct MathAgent {}

impl ReActExecutor for MathAgent {}

#[tokio::main]
async fn main() -> Result<(), Error> {
    init_logging();

    // Check if API key is set
    let api_key = std::env::var("OPENAI_API_KEY").unwrap_or("".into());

    // Initialize and configure the LLM client
    let llm: Arc<OpenAI> = LLMBuilder::<OpenAI>::new()
        .api_key(api_key) // Set the API key
        .model("gpt-4o") // Use the GPT-4o model
        .max_tokens(512) // Limit response length
        .temperature(0.2) // Control response randomness (0.0-1.0)
        .stream(false) // Disable streaming responses
        .build()
        .expect("Failed to build LLM");

    let sliding_window_memory = Box::new(SlidingWindowMemory::new(10));

    let agent = MathAgent {};

    let runtime = SingleThreadedRuntime::new(None);

    let _ = AgentBuilder::new(agent)
        .with_llm(llm)
        .runtime(runtime.clone())
        .subscribe_topic("test")
        .with_memory(sliding_window_memory)
        .build()
        .await?;

    // Create environment and set up event handling
    let mut environment = Environment::new(None);
    let _ = environment.register_runtime(runtime.clone()).await;

    let receiver = environment.take_event_receiver(None).await;
    handle_events(receiver);

    runtime
        .publish_message("What is 2 + 2?".into(), "test".into())
        .await
        .unwrap();

    runtime
        .publish_message("What did I ask before?".into(), "test".into())
        .await
        .unwrap();

    let _ = environment.run().await;

    Ok(())
}

fn handle_events(event_stream: Option<ReceiverStream<Event>>) {
    if let Some(mut event_stream) = event_stream {
        tokio::spawn(async move {
            while let Some(event) = event_stream.next().await {
                match event {
                    Event::TaskComplete { result, .. } => match result {
                        TaskResult::Value(val) => {
                            let agent_out: ReActAgentOutput =
                                serde_json::from_value(val).unwrap();
                            let math_out: MathAgentOutput =
                                serde_json::from_str(&agent_out.response).unwrap();
                            println!(
                                "{}",
                                format!(
                                    "Math Value: {}, Explanation: {}",
                                    math_out.value, math_out.explanation
                                )
                                .green()
                            );
                        }
                        _ => {}
                    },
                    _ => {}
                }
            }
        });
    }
}
```
Community and Support
AutoAgents is developed by the Liquidos AI team and maintained by a growing community of contributors.
- 📖 Documentation: Comprehensive guides and API reference
- 💬 Discord: Join our community at discord.gg/Ghau8xYn
- 🐛 Issues: Report bugs and request features on GitHub
- 🤝 Contributing: We welcome contributions of all kinds
What's Next?
This documentation will guide you through:
- Installation and Setup: Get AutoAgents running in your environment
- Core Concepts: Understand the fundamental building blocks
- Building Agents: Create your first intelligent agents
- Advanced Features: Explore powerful capabilities
- Real-world Examples: Learn from practical implementations
Let's start building intelligent agents together! 🚀
Installation
This comprehensive guide will help you install AutoAgents and set up your development environment for both using the library and contributing to the project.
Using AutoAgents in Your Project
Prerequisites
Before using AutoAgents, ensure you have:
- Rust 1.70 or later - Install using rustup
- Cargo package manager (comes with Rust)
Verify your installation:
rustc --version
cargo --version
Adding AutoAgents to Your Project
Add AutoAgents to your Cargo.toml:
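The snippet below is a sketch; the version numbers are placeholders, so check crates.io for the latest releases:

```toml
[dependencies]
# Placeholder versions; check crates.io for the latest releases.
autoagents = "0.2"
autoagents-derive = "0.2"
tokio = { version = "1", features = ["full"] }
```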
Environment Variables
Set up your API keys:
# For OpenAI
export OPENAI_API_KEY="your-openai-api-key"
# For Anthropic
export ANTHROPIC_API_KEY="your-anthropic-api-key"
# For other providers, see the provider-specific documentation
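AutoAgents reads these keys from the environment at runtime; for example, the Getting Started program fetches the OpenAI key like this:

```rust
// Falls back to an empty string if the variable is unset.
let api_key = std::env::var("OPENAI_API_KEY").unwrap_or("".into());
```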
Development Setup
If you want to contribute to AutoAgents or build from source, follow these additional steps:
Additional Prerequisites
- LeftHook - Git hooks manager for code quality
- Cargo Tarpaulin - Test coverage tool (optional)
Installing LeftHook
LeftHook is essential for maintaining code quality and is required for development.
macOS (using Homebrew):
brew install lefthook
Linux (Ubuntu/Debian):
# using npm
npm install -g lefthook
Clone and Setup Repository
# Clone the repository
git clone https://github.com/liquidos-ai/AutoAgents.git
cd AutoAgents
# Install Git hooks using lefthook
lefthook install
# Build the project
cargo build --release
# Run tests to verify setup
cargo test --all-features
Installing Additional Development Tools
# For test coverage (optional)
cargo install cargo-tarpaulin
# Documentation generation (cargo doc) is built into Cargo; no separate install needed
# For security auditing (recommended)
cargo install cargo-audit
System Dependencies
macOS
# Install Xcode command line tools (if not already installed)
xcode-select --install
# Install additional dependencies via Homebrew
brew install pkg-config openssl
Linux (Ubuntu/Debian)
sudo apt update
sudo apt install -y \
build-essential \
pkg-config \
libssl-dev \
curl \
git
Windows
Install the following:
- Visual Studio Build Tools or Visual Studio Community with C++ build tools
- Git for Windows
- Windows Subsystem for Linux (WSL) - recommended for better compatibility
Verification
After installation, verify everything is working:
# Check Rust installation
cargo --version
rustc --version
# Check lefthook installation (for development)
lefthook --version
# Build AutoAgents
cd AutoAgents
cargo build --all-features
# Run tests
cargo test --all-features
# Check git hooks are installed (for development)
lefthook run pre-commit
Git Hooks (Development)
The project uses LeftHook to manage Git hooks that ensure code quality:
Pre-commit Hooks
- Formatting: `cargo fmt --check` ensures consistent code formatting
- Linting: `cargo clippy --all-features --all-targets -- -D warnings` catches common mistakes
- Testing: `cargo test --all-features` runs the test suite
- Type Checking: `cargo check --all-features --all-targets` validates compilation
Pre-push Hooks
- Full Testing: `cargo test --all-features --release` runs the comprehensive test suite
- Documentation: `cargo doc --all-features --no-deps` ensures docs build correctly
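For reference, LeftHook declares hooks in a lefthook.yml at the repository root. The repository's actual configuration may differ; a minimal sketch of the pre-commit commands above would look like:

```yaml
# Hypothetical lefthook.yml sketch; see the repository for the real configuration.
pre-commit:
  commands:
    fmt:
      run: cargo fmt --check
    clippy:
      run: cargo clippy --all-features --all-targets -- -D warnings
    test:
      run: cargo test --all-features
    check:
      run: cargo check --all-features --all-targets
```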
Running Tests with Coverage
# Install tarpaulin if not already installed
cargo install cargo-tarpaulin
# Run tests with coverage
cargo tarpaulin --all-features --out html
Getting Help
If you encounter issues:
- Check our GitHub Issues
- Join our Discord Community
Next Steps
After successful installation:
- Explore Examples: Check out the examples directory
- API Documentation: Browse the API Documentation
- Contributing: See the Contributing Guidelines
For the latest version information, check GitHub.
Quick Start
Your First Agent
In this guide, we'll walk through creating your first AutoAgents agent step by step, explaining each concept in detail. By the end, you'll have a solid understanding of how agents work and how to build them effectively.
Understanding Agents
An agent in AutoAgents is an autonomous entity that can:
- Think: Process information and make decisions
- Act: Execute tools and functions
- Remember: Maintain context across interactions
- Communicate: Provide structured responses
Agent Anatomy
Every AutoAgents agent consists of:
- Agent Struct: The core definition of your agent
- Tools: Functions the agent can call
- Executor: The reasoning pattern (ReAct, Chain-of-Thought, etc.)
- Memory: Context management system
- LLM Provider: The language model backend
Step-by-Step Guide
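The full runnable program lives in the Introduction's Getting Started section; condensed, the steps look like this (the sketch below reuses those exact pieces and omits imports and LLM/runtime setup):

```rust
// 1. Define the agent, its tools, and its structured output type.
#[agent(
    name = "math_agent",
    description = "You are a Math agent",
    tools = [Addition],
    output = MathAgentOutput
)]
pub struct MathAgent {}

impl ReActExecutor for MathAgent {}

// 2. Wire the agent to an LLM, a runtime, and a memory backend.
let _agent = AgentBuilder::new(MathAgent {})
    .with_llm(llm)
    .runtime(runtime.clone())
    .subscribe_topic("test")
    .with_memory(Box::new(SlidingWindowMemory::new(10)))
    .build()
    .await?;

// 3. Publish a task to the subscribed topic and let the runtime execute it.
runtime
    .publish_message("What is 2 + 2?".into(), "test".into())
    .await
    .unwrap();
```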
Architecture Overview
AutoAgents is built with a modular, extensible architecture that prioritizes performance, safety, and developer experience. This document provides a comprehensive overview of the framework's design and core components.
High-Level Architecture
┌─────────────────────────────────────────────────────────────────┐
│ AutoAgents Framework │
├─────────────────────────────────────────────────────────────────┤
│ Application Layer │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Custom Agents │ │ Custom Tools │ │ Custom Memory │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
├─────────────────────────────────────────────────────────────────┤
│ Framework Core │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Environment │ │ Agent System │ │ Tool System │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Memory System │ │ Executors │ │ Protocols │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
├─────────────────────────────────────────────────────────────────┤
│ LLM Providers │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ OpenAI │ │ Anthropic │ │ Ollama │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Google │ │ Groq │ │ xAI │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
├─────────────────────────────────────────────────────────────────┤
│ Foundation Layer │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Derive Macros │ │ Async Runtime │ │ Serialization │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
Agents
Agents are the core building blocks of the AutoAgents framework. They represent autonomous entities that can understand tasks, reason about solutions, execute actions through tools, and provide intelligent responses. This document explores the agent system in detail.
What is an Agent?
An agent in AutoAgents is a software entity that exhibits the following characteristics:
- Autonomy: Can operate independently without constant human intervention
- Reactivity: Responds to changes in its environment
- Proactivity: Can take initiative to achieve goals
- Social Ability: Can interact with other agents and systems
- Intelligence: Uses reasoning patterns to solve problems
Agent Lifecycle
Creation → Registration → Task Assignment → Execution → Response → Cleanup
1. Creation Phase
```rust
#[derive(Serialize, Deserialize, ToolInput, Debug)]
pub struct AdditionArgs {
    #[input(description = "Left Operand for addition")]
    left: i64,
    #[input(description = "Right Operand for addition")]
    right: i64,
}

#[tool(
    name = "Addition",
    description = "Use this tool to Add two numbers",
    input = AdditionArgs,
)]
fn add(args: AdditionArgs) -> Result<i64, ToolCallError> {
    Ok(args.left + args.right)
}

/// Math agent output with Value and Explanation
#[derive(Debug, Serialize, Deserialize, AgentOutput)]
pub struct MathAgentOutput {
    #[output(description = "The addition result")]
    value: i64,
    #[output(description = "Explanation of the logic")]
    explanation: String,
}

#[agent(
    name = "math_agent",
    description = "You are a Math agent",
    tools = [Addition],
    executor = ReActExecutor,
    output = MathAgentOutput
)]
pub struct MathAgent {}

let sliding_window_memory = Box::new(SlidingWindowMemory::new(10));

let agent = MathAgent {};

let runtime = SingleThreadedRuntime::new(None);

let agent = AgentBuilder::new(agent)
    .with_llm(llm)
    .runtime(runtime.clone())
    .subscribe_topic("test")
    .with_memory(sliding_window_memory)
    .build()
    .await?;
```
2. Registration Phase
```rust
// Create environment and set up event handling
let mut environment = Environment::new(None);
let _ = environment.register_runtime(runtime.clone()).await;
```
3. Task Assignment Phase
```rust
runtime
    .publish_message("What is 2 + 2?".into(), "test".into())
    .await
    .unwrap();
```
4. Execution Phase
```rust
let _ = environment.run().await;
```
Tools
Memory Systems
Executors
Environment
LLM Providers Overview
AutoAgents supports a wide range of Large Language Model (LLM) providers, allowing you to choose the best fit for your specific use case. This document provides an overview of the supported providers and how to use them.
Supported Providers
AutoAgents currently supports the following LLM providers:
| Provider | Status | Models | Features |
|---|---|---|---|
| OpenAI | ✅ | GPT-4, GPT-3.5-turbo | Function calling, streaming, vision |
| Anthropic | ✅ | Claude 3.5 Sonnet, Claude 3 Opus/Haiku | Tool use, long context |
| Ollama | ✅ | Llama 3, Mistral, CodeLlama | Local inference, custom models |
| Google | ✅ | Gemini Pro, Gemini Flash | Multimodal, fast inference |
| Groq | ✅ | Llama 3, Mixtral | Ultra-fast inference |
| DeepSeek | ✅ | DeepSeek Coder, DeepSeek Chat | Code-specialized models |
| xAI | ✅ | Grok-1, Grok-2 | Real-time information |
| Phind | ✅ | CodeLlama variants | Developer-focused |
| Azure OpenAI | ✅ | GPT-4, GPT-3.5 | Enterprise features |
Common Interface
All LLM providers implement the `LLMProvider` trait, providing a consistent interface:
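The exact trait definition is not reproduced here, but in practice the shared builder gives every backend the same setup flow. This snippet reuses the OpenAI configuration from the Getting Started example; swapping in another backend type is analogous:

```rust
use autoagents::llm::{backends::openai::OpenAI, builder::LLMBuilder};
use std::sync::Arc;

// Same builder pattern regardless of which provider backend is chosen.
let llm: Arc<OpenAI> = LLMBuilder::<OpenAI>::new()
    .api_key(std::env::var("OPENAI_API_KEY").unwrap_or("".into()))
    .model("gpt-4o")
    .temperature(0.2)
    .build()
    .expect("Failed to build LLM");
```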