# Quickstart
This guide shows you how to get up and running with ADK-Rust. You'll create your first AI agent in under 10 minutes.
## Prerequisites
Before you start, make sure you have:
- Rust 1.85.0 or later (`rustup update stable`)
- A Google API key for Gemini
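You can check your toolchain from a terminal (standard Rust toolchain commands, nothing ADK-specific):

```bash
# Print the active Rust version; this guide needs 1.85.0 or newer
rustc --version

# Update the stable toolchain if it is older
rustup update stable
```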
## Step 1: Create a New Project
Create a new Rust project:
```bash
cargo new my_agent
cd my_agent
```
Your project structure will look like this:
```text
my_agent/
├── Cargo.toml
├── src/
│   └── main.rs
└── .env          # You'll create this for your API key
```
## Step 2: Add Dependencies
Update your `Cargo.toml` with the required dependencies:
```toml
[package]
name = "my_agent"
version = "0.1.0"
edition = "2024"

[dependencies]
adk-rust = "0.2.0"
tokio = { version = "1.40", features = ["full"] }
dotenvy = "0.15"
```
Build the project to fetch and compile the dependencies:

```bash
cargo build
```
## Step 3: Set Up Your API Key
This project uses the Gemini API, which requires an API key. If you don't have one, create a key in Google AI Studio.
Create a `.env` file in your project root:
Linux / macOS:

```bash
echo 'GOOGLE_API_KEY=your-api-key-here' > .env
```

Windows (PowerShell):

```powershell
Set-Content -Path .env -Value 'GOOGLE_API_KEY=your-api-key-here'
```

(`Set-Content` writes a plain-text file; in Windows PowerShell 5.1, `>` redirection produces UTF-16 output that `dotenvy` can't parse.)
**Security Tip:** Add `.env` to your `.gitignore` to avoid committing your API key.
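One way to do that from the project root:

```bash
# Keep the API key file out of version control
echo '.env' >> .gitignore
```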
## Step 4: Write Your Agent
Replace the contents of `src/main.rs` with:
```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;

#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    // Load environment variables from .env file
    dotenvy::dotenv().ok();

    // Get API key from environment
    let api_key = std::env::var("GOOGLE_API_KEY")
        .expect("GOOGLE_API_KEY environment variable not set");

    // Create the Gemini model
    let model = GeminiModel::new(&api_key, "gemini-2.5-flash")?;

    // Build your agent
    let agent = LlmAgentBuilder::new("my_assistant")
        .description("A helpful AI assistant")
        .instruction("You are a friendly and helpful assistant. Answer questions clearly and concisely.")
        .model(Arc::new(model))
        .build()?;

    // Run the agent with the CLI launcher
    Launcher::new(Arc::new(agent)).run().await?;

    Ok(())
}
```
## Step 5: Run Your Agent
Start your agent in interactive console mode:
```bash
cargo run
```
You'll see a prompt where you can chat with your agent:
```text
🤖 Agent ready! Type your questions (or 'exit' to quit).

You: Hello! What can you help me with?

Assistant: Hello! I'm a helpful AI assistant. I can help you with:
- Answering questions on various topics
- Explaining concepts
- Providing information and suggestions
- Having a friendly conversation

What would you like to know?

You: exit
👋 Goodbye!
```
## Step 6: Add a Tool
Let's enhance your agent with the Google Search tool to give it access to real-time information:
```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;

#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();

    let api_key = std::env::var("GOOGLE_API_KEY")
        .expect("GOOGLE_API_KEY environment variable not set");

    let model = GeminiModel::new(&api_key, "gemini-2.5-flash")?;

    // Build agent with Google Search tool
    let agent = LlmAgentBuilder::new("search_assistant")
        .description("An assistant that can search the web")
        .instruction("You are a helpful assistant. Use the search tool to find current information when needed.")
        .model(Arc::new(model))
        .tool(Arc::new(GoogleSearchTool::new())) // Add search capability
        .build()?;

    Launcher::new(Arc::new(agent)).run().await?;

    Ok(())
}
```
Start your agent again in interactive console mode:
```bash
cargo run
```
Now you can prompt your agent to search the web:
```text
You: What's the weather like in Tokyo today?

Assistant: Let me search for that information...
[Using GoogleSearchTool]

Based on current information, Tokyo is experiencing...
```
## Running as a Web Server
For a web-based interface, run with the `serve` command:
```bash
cargo run -- serve
```
This starts the server on the default port 8080. Access it at http://localhost:8080.
To specify a custom port:
```bash
cargo run -- serve --port 3000
```
This starts the server on port 3000. Access it at http://localhost:3000.
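To confirm the server is up, you can request it from another terminal (plain `curl`; the exact response depends on the web interface):

```bash
# Should return an HTTP response from the web interface if the server is running
curl -i http://localhost:3000/
```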
## Understanding the Code
Let's break down what each part does:
### Imports
```rust
use adk_rust::prelude::*; // GeminiModel, LlmAgentBuilder, Arc, etc.
use adk_rust::Launcher;   // CLI launcher for console/server modes
use std::sync::Arc;       // Thread-safe reference-counting pointer
```
- `prelude::*` imports all commonly used types: `GeminiModel`, `LlmAgentBuilder`, `Arc`, error types, and more
- `Launcher` provides the CLI interface for running agents
- `Arc` (Atomic Reference Counted) enables safe sharing of the model and agent across async tasks
### Model Creation
```rust
let model = GeminiModel::new(&api_key, "gemini-2.5-flash")?;
```

Creates a Gemini model instance that implements the `Llm` trait. The model:
- Handles authentication with your API key
- Manages streaming responses from the LLM
- Supports function calling for tools
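Because the agent takes its model as an `Arc`, one model instance can back several agents. A minimal sketch (inside the same `main` as Step 4; the agent names are illustrative, and it assumes `.model()` accepts any `Arc` whose contents implement `Llm`):

```rust
// Cloning an Arc bumps a reference count; the model itself is not copied.
let model = Arc::new(GeminiModel::new(&api_key, "gemini-2.5-flash")?);

let researcher = LlmAgentBuilder::new("researcher")
    .instruction("Dig up relevant facts.")
    .model(model.clone())
    .build()?;

let summarizer = LlmAgentBuilder::new("summarizer")
    .instruction("Condense findings into a short answer.")
    .model(model)
    .build()?;
```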
### Agent Building
```rust
let agent = LlmAgentBuilder::new("my_assistant")
    .description("A helpful AI assistant")
    .instruction("You are a friendly assistant...")
    .model(Arc::new(model))
    .build()?;
```
The builder pattern configures your agent:
| Method | Purpose |
|---|---|
| `new("name")` | Sets the agent's unique identifier (used in logs and multi-agent systems) |
| `description()` | Brief description shown in agent cards and the A2A protocol |
| `instruction()` | System prompt; defines the agent's personality and behavior |
| `model(Arc::new(...))` | Wraps the model in `Arc` for thread-safe sharing |
| `tool(Arc::new(...))` | (Optional) Adds tools/functions the agent can call |
| `build()` | Validates the configuration and creates the agent instance |
### Launcher
```rust
Launcher::new(Arc::new(agent)).run().await?;
```

The `Launcher` handles the runtime:
- Console mode (default): Interactive chat in your terminal
- Server mode (`cargo run -- serve`): REST API with a web interface
- Manages session state, streaming responses, and graceful shutdown
## Using Other Models
ADK-Rust supports multiple LLM providers. Enable them via feature flags in your `Cargo.toml`:
```toml
[dependencies]
adk-rust = { version = "0.2.0", features = ["openai", "anthropic", "deepseek", "groq", "ollama"] }
```
Set the appropriate API key for your provider:
```bash
# OpenAI
export OPENAI_API_KEY="your-api-key"

# Anthropic
export ANTHROPIC_API_KEY="your-api-key"

# DeepSeek
export DEEPSEEK_API_KEY="your-api-key"

# Groq
export GROQ_API_KEY="your-api-key"

# Ollama (no key needed, just run: ollama serve)
```
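If you'd rather pick the provider at runtime than hard-code one, you can branch on an environment variable. A minimal sketch, assuming the feature flags above are enabled, that `Llm` is object-safe, and that the builder accepts an `Arc<dyn Llm>` (the `MODEL_PROVIDER` variable and `pick_model` helper are illustrative, not part of ADK-Rust):

```rust
use adk_rust::prelude::*;
use std::sync::Arc;

// Hypothetical helper: choose a model client from MODEL_PROVIDER.
fn pick_model() -> std::result::Result<Arc<dyn Llm>, Box<dyn std::error::Error>> {
    let provider = std::env::var("MODEL_PROVIDER").unwrap_or_else(|_| "gemini".into());
    Ok(match provider.as_str() {
        "openai" => Arc::new(OpenAIClient::new(OpenAIConfig::new(
            std::env::var("OPENAI_API_KEY")?,
            "gpt-4o",
        ))?),
        "anthropic" => Arc::new(AnthropicClient::new(AnthropicConfig::new(
            std::env::var("ANTHROPIC_API_KEY")?,
            "claude-sonnet-4-20250514",
        ))?),
        // Default to Gemini, matching the rest of this guide
        _ => Arc::new(GeminiModel::new(
            &std::env::var("GOOGLE_API_KEY")?,
            "gemini-2.5-flash",
        )?),
    })
}
```

The returned `Arc<dyn Llm>` can then be passed straight to `LlmAgentBuilder::model()`.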
### OpenAI
```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;

#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();

    let api_key = std::env::var("OPENAI_API_KEY")?;
    let model = OpenAIClient::new(OpenAIConfig::new(api_key, "gpt-4o"))?;

    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Launcher::new(Arc::new(agent)).run().await?;
    Ok(())
}
```
### Anthropic
```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;

#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();

    let api_key = std::env::var("ANTHROPIC_API_KEY")?;
    let model = AnthropicClient::new(AnthropicConfig::new(api_key, "claude-sonnet-4-20250514"))?;

    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Launcher::new(Arc::new(agent)).run().await?;
    Ok(())
}
```
### DeepSeek
```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;

#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();

    let api_key = std::env::var("DEEPSEEK_API_KEY")?;

    // Standard chat model
    let model = DeepSeekClient::chat(api_key)?;
    // Or use the reasoner for chain-of-thought reasoning:
    // let model = DeepSeekClient::reasoner(api_key)?;

    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Launcher::new(Arc::new(agent)).run().await?;
    Ok(())
}
```
### Groq (Ultra-Fast Inference)
```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;

#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();

    let api_key = std::env::var("GROQ_API_KEY")?;
    let model = GroqClient::new(GroqConfig::llama70b(api_key))?;

    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Launcher::new(Arc::new(agent)).run().await?;
    Ok(())
}
```
### Ollama (Local Models)
```rust
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;

#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();

    // Requires: ollama serve && ollama pull llama3.2
    let model = OllamaModel::new(OllamaConfig::new("llama3.2"))?;

    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Launcher::new(Arc::new(agent)).run().await?;
    Ok(())
}
```
### Supported Models
| Provider | Model Examples | Feature Flag |
|---|---|---|
| Gemini | gemini-3-pro-preview, gemini-3-flash-preview, gemini-2.5-flash, gemini-2.5-pro, gemini-2.0-flash | (default) |
| OpenAI | gpt-5.2, gpt-5.2-mini, gpt-5-mini, gpt-5-nano, gpt-4.1, gpt-4.1-mini, o3-mini, gpt-4o, gpt-4o-mini | openai |
| Anthropic | claude-sonnet-4-5, claude-haiku-4-5, claude-opus-4-5, claude-sonnet-4, claude-opus-4, claude-haiku-4 | anthropic |
| DeepSeek | deepseek-chat, deepseek-reasoner | deepseek |
| Groq | gpt-oss-120b, qwen3-32b, llama-3.3-70b-versatile, mixtral-8x7b-32768 | groq |
| Ollama | gemma3, qwen2.5, llama3.2, mistral, phi4, codellama | ollama |
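If you only need one extra provider, you can enable just its flag rather than all of them; a trimmed `Cargo.toml` sketch:

```toml
[dependencies]
# Gemini stays available via the default features; this adds only Ollama
adk-rust = { version = "0.2.0", features = ["ollama"] }
```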
## Next Steps
Now that you have your first agent running, explore these topics:
- LlmAgent Configuration - All configuration options
- Function Tools - Create custom tools
- Workflow Agents - Build multi-step pipelines
- Sessions - Manage conversation state
- Callbacks - Customize agent behavior
Previous: Introduction | Next: LlmAgent