
Getting Started with adk-rust: Your First AI Agent in Rust

Unlock the power of AI agents with Rust. This guide shows you how to build, run, and deploy your first intelligent agent.

October 26, 2023 · 12 min read

Tags: adk-rust · Rust · AI Agent · Google Gemini · Development · Tutorial

Table of Contents

  • 1. Introduction to adk-rust
  • 2. Prerequisites and Setup
  • 3. Creating Your First ADK Project
  • 4. Anatomy of an adk-rust Agent
  • 5. Implementing Your Simple AI Agent
  • 6. Building and Running Your Agent
  • 7. Next Steps

1. Introduction to adk-rust

The Agent Development Challenge

Building intelligent agents capable of complex tasks has long been a domain fraught with challenges. Orchestrating large language models (LLMs), managing state, handling diverse inputs, and deploying these agents effectively often requires bespoke solutions and significant engineering effort. This complexity can be a major hurdle for developers looking to integrate powerful AI capabilities into their applications.

Enter adk-rust: Streamlining AI Agent Development

The Agent Development Kit (ADK) aims to make AI agent development feel more like traditional software development, and `adk-rust` is its robust Rust implementation. It provides a flexible, modular framework designed for developing and deploying sophisticated AI agents. While optimized for Gemini and the Google ecosystem, `adk-rust` is model-agnostic and deployment-agnostic, ensuring compatibility and flexibility across various platforms. This guide will walk you through setting up your environment, creating your first `adk-rust` project, and deploying a basic AI assistant, giving you a solid foundation for your agentic ventures.

2. Prerequisites and Setup

Before we dive into building agents, let's ensure your development environment is ready. You'll need a recent Rust toolchain and the adk-rust library added to your project. adk-rust also requires a Google API key for interacting with models like Gemini. First, make sure you have Rust installed. If not, the recommended way is via rustup:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Ensure your Rust version is 1.75 or higher:
rustc --version
Next, we'll add adk-rust as a dependency. We'll also add tokio for asynchronous operations and dotenv to manage environment variables like your API key. You can add them with cargo add once you've created your project (which we'll do in the next step):
cargo add adk-rust
cargo add tokio --features full
cargo add dotenv
Alternatively, you can manually add these to your Cargo.toml file:
[dependencies]
adk-rust = "0.1"
tokio = { version = "1.40", features = ["full"] }
dotenv = "0.15"
Finally, you'll need a Google API Key for Gemini. Obtain this from the Google AI Studio or Google Cloud Console. Create a .env file in your project's root directory and add your key:
GOOGLE_API_KEY="YOUR_GEMINI_API_KEY_HERE"

3. Creating Your First ADK Project

With your environment set up, let's create a new Rust project, which will serve as the container for our adk-rust agent. We'll use the standard cargo new command to get started. Open your terminal and run:
cargo new my_first_adk_agent
cd my_first_adk_agent
This command initializes a new Rust project named my_first_adk_agent. The generated project structure will look familiar to Rust developers:
my_first_adk_agent/
├── Cargo.toml          # Project manifest and dependencies
└── src/
    └── main.rs         # Your agent's main logic goes here
Now, you can run the cargo add commands from the previous section within this my_first_adk_agent directory. Remember to manually create a .env file in the root directory (my_first_adk_agent/) for your GOOGLE_API_KEY.

4. Anatomy of an adk-rust Agent

An adk-rust agent is typically built around the LlmAgentBuilder and executed using the Launcher. The core idea is to define your agent's persona, instructions, and the LLM it should use. Let's look at the essential components you'll find in an adk-rust agent's src/main.rs file.

In adk-rust, an LlmAgent is configured with a description, an instruction (acting as a system prompt for the LLM), and a specific language model (like Gemini). The Launcher then takes this agent and handles execution, providing an interactive console or a web server interface. This abstraction simplifies the agent's interaction with the outside world.

Below is the complete "Hello World" equivalent for an adk-rust agent. Copy this code into your src/main.rs file. We will break down each part in detail in the next section.
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;

#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
    // 1. Load environment variables from .env file
    dotenv::dotenv().ok();
    
    // 2. Get API key from environment
    let api_key = std::env::var("GOOGLE_API_KEY")
        .expect("GOOGLE_API_KEY environment variable not set");

    // 3. Create the Gemini model
    let model = GeminiModel::new(&api_key, "gemini-2.5-flash")?;

    // 4. Build your agent
    let agent = LlmAgentBuilder::new("my_assistant")
        .description("A helpful AI assistant")
        .instruction("You are a friendly and helpful assistant. Answer questions clearly and concisely.")
        .model(Arc::new(model))
        .build()?;

    // 5. Run the agent with the CLI launcher
    Launcher::new(Arc::new(agent)).run().await?;

    Ok(())
}

5. Implementing Your Simple AI Agent

Now, let's go through the "Hello World" equivalent for an adk-rust agent, focusing on how each part of the code contributes to creating a functional AI assistant. You should have copied the code from the previous section into your src/main.rs file.

1. **Environment Variable Loading**: dotenv::dotenv().ok(); loads variables from your .env file, making your GOOGLE_API_KEY available within your Rust application.
2. **API Key Retrieval**: std::env::var("GOOGLE_API_KEY") fetches your API key from the environment. The .expect() call ensures the program will panic with a clear error message if the key isn't found, preventing silent failures when initializing the model.
3. **Model Initialization**: let model = GeminiModel::new(&api_key, "gemini-2.5-flash")?; instantiates a GeminiModel. Here, we provide our API key and specify the desired Gemini model version (gemini-2.5-flash). This object encapsulates the connection and interaction logic with the LLM service, abstracting away the complexities of API calls.
4. **Agent Construction**: This is where you define your agent's core identity and behavior using the LlmAgentBuilder:
   - LlmAgentBuilder::new("my_assistant"): Starts building an agent, assigning it a unique ID.
   - .description("A helpful AI assistant"): Provides a human-readable description of your agent's purpose, useful for documentation or higher-level orchestration.
   - .instruction("You are a friendly and helpful assistant. Answer questions clearly and concisely."): This is your system prompt. It guides how the underlying LLM behaves and responds, and defines its overall persona.
   - .model(Arc::new(model)): Associates the agent with the GeminiModel we created. Arc (Atomic Reference Counted) is a thread-safe reference-counting pointer, used here for shared ownership, which is common in asynchronous Rust applications.
   - .build()?: Finalizes the agent construction. It returns a Result, and ? is used for convenient error propagation.
5. **Agent Launch**: Launcher::new(Arc::new(agent)).run().await?; takes your configured agent and executes it. The default behavior of Launcher::run() is to start an interactive console where you can type prompts to your agent and see its responses. The await keyword indicates that this is an asynchronous operation, typical for I/O-bound tasks like interacting with an LLM.
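The Arc wrapping used in steps 4 and 5 is plain standard-library Rust, independent of adk-rust. As a minimal sketch of the shared-ownership behavior it provides (the string stands in for the model value):

```rust
use std::sync::Arc;

fn main() {
    // Wrapping a value in Arc lets the agent and launcher share one instance.
    let model = Arc::new(String::from("gemini-2.5-flash"));

    // Cloning an Arc bumps a reference count instead of copying the data.
    let shared = Arc::clone(&model);
    assert_eq!(Arc::strong_count(&model), 2);

    // Dropping one handle decrements the count; the data is freed at zero.
    drop(shared);
    assert_eq!(Arc::strong_count(&model), 1);

    println!("model still alive: {}", model);
}
```

This is why the example passes Arc::new(model) into the builder and Arc::new(agent) into the Launcher: both sides can hold a cheap, thread-safe handle to the same underlying value.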

6. Building and Running Your Agent

Once your src/main.rs file is populated with the agent code and your Cargo.toml is updated with dependencies, you're ready to bring your agent to life! cargo is your primary tool for this. First, build your project to compile the Rust code into an executable:
cargo build
This command compiles your agent and its dependencies. If successful, you'll have an executable in the target/debug/ directory (or target/release/ if you build with --release). To run your agent in interactive console mode, which is excellent for testing and quick interactions, use:
cargo run
Your terminal will become an interactive chat interface where you can type questions, and your agent will respond using the configured Gemini model. To exit, type exit or press Ctrl+C.

adk-rust also provides a way to run your agent as a web server, allowing you to interact with it via HTTP requests. This is especially useful for integrating your agent into web applications or other services:
cargo run -- serve
By default, this will start the server on http://127.0.0.1:8080. You can specify a different port if needed:
cargo run -- serve --port 3000
Now, your agent is accessible via a web API! You can send requests to it using tools like curl or a web browser to the configured endpoint (e.g., http://127.0.0.1:8080/v1/agent). This makes your AI agent a deployable, callable service.
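The exact request and response schema of adk-rust's HTTP interface isn't covered here, so treat the JSON payload below as an assumption. To keep the sketch runnable without a live agent, it stands up a tiny echo endpoint at the same path and POSTs to it using only Python's standard library; against a real `cargo run -- serve` instance you would point the client at the server's port instead:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Hypothetical stand-in for the agent endpoint; the real adk-rust
# server's request/response schema may differ.
class EchoHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        prompt = json.loads(body).get("message", "")
        reply = json.dumps({"response": f"You said: {prompt}"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the same shape of call you'd aim at the running agent.
url = f"http://127.0.0.1:{server.server_port}/v1/agent"
req = Request(
    url,
    data=json.dumps({"message": "Hello, agent!"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    answer = json.loads(resp.read())

print(answer["response"])  # prints: You said: Hello, agent!
server.shutdown()
```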

7. Next Steps

Congratulations! You've successfully built and run your first AI agent using adk-rust. This is just the beginning of what you can achieve with the framework. Here are some suggestions for where to go next:

- **Explore the ADK Documentation**: Dive deeper into the official adk-rust documentation to understand more advanced features, such as integrating with other components, managing state, or creating more complex agent workflows.
- **Custom Tools and Capabilities**: Learn how to equip your agent with custom tools, allowing it to interact with external APIs, databases, or perform specific calculations beyond text generation. This is a powerful feature for extending agent capabilities.
- **Deployment Options**: Investigate different deployment strategies for your adk-rust agents, whether it's containerization with Docker, deploying to cloud platforms, or integrating into existing Rust services.
- **Community Engagement**: Join the adk-rust community. Sharing your projects, asking questions, and contributing to discussions can significantly accelerate your learning and problem-solving, and keep you connected with the latest developments and best practices.