Workflow Agents
Workflow agents orchestrate multiple agents in predictable patterns: sequential pipelines, parallel execution, or iterative loops. Unlike LlmAgent, which uses AI reasoning, workflow agents follow deterministic execution paths.
Quick Start
Create a new project:
cargo new workflow_demo
cd workflow_demo
Add the dependencies to Cargo.toml:
[dependencies]
adk-rust = "0.2.0"
tokio = { version = "1.40", features = ["full"] }
dotenvy = "0.15"
Create .env:
echo 'GOOGLE_API_KEY=your-api-key' > .env
SequentialAgent
SequentialAgent runs its sub-agents one after another. Each agent sees the accumulated conversation history from the agents before it.
When to Use
- Multi-step pipelines where one step's output feeds the next
- Research → Analyze → Summarize workflows
- Data transformation chains
Complete Example
Replace src/main.rs:
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;
#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
dotenvy::dotenv().ok();
let api_key = std::env::var("GOOGLE_API_KEY")?;
let model = Arc::new(GeminiModel::new(&api_key, "gemini-2.5-flash")?);
// Step 1: Research agent gathers information
let researcher = LlmAgentBuilder::new("researcher")
.instruction("Research the given topic. List 3-5 key facts or points. \
Be factual and concise.")
.model(model.clone())
.output_key("research") // Saves output to state
.build()?;
// Step 2: Analyzer agent identifies patterns
let analyzer = LlmAgentBuilder::new("analyzer")
.instruction("Based on the research above, identify 2-3 key insights \
or patterns. What's the bigger picture?")
.model(model.clone())
.output_key("analysis")
.build()?;
// Step 3: Summarizer creates final output
let summarizer = LlmAgentBuilder::new("summarizer")
.instruction("Create a brief executive summary combining the research \
and analysis. Keep it under 100 words.")
.model(model.clone())
.build()?;
// Create the sequential pipeline
let pipeline = SequentialAgent::new(
"research_pipeline",
vec![Arc::new(researcher), Arc::new(analyzer), Arc::new(summarizer)],
).with_description("Research → Analyze → Summarize");
println!("📋 Sequential Pipeline: Research → Analyze → Summarize");
println!();
Launcher::new(Arc::new(pipeline)).run().await?;
Ok(())
}
Run it:
cargo run
Example Interaction
You: Tell me about the Rust programming language
🔄 [researcher] Researching...
Here are the key facts about Rust:
1. Systems programming language created by Mozilla in 2010
2. Memory safety without garbage collection via the ownership system
3. Zero-cost abstractions and a minimal runtime
4. Voted "most loved language" by Stack Overflow for 7 consecutive years
5. Used by Firefox, Discord, Dropbox, and the Linux kernel
🔄 [analyzer] Analyzing...
Key insights:
1. Rust resolves the trade-off between memory safety and performance
2. Strong developer satisfaction is driving rapid adoption
3. Trust from major tech companies validates its production readiness
🔄 [summarizer] Summarizing...
Rust is a systems language that achieves memory safety without garbage collection through its ownership model. Created by Mozilla in 2010, it has been voted most loved language for 7 consecutive years. Major adopters like Discord and the Linux kernel chose it for its zero-cost abstractions and performance guarantees.
How It Works
┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│ Researcher  │ →  │  Analyzer   │ →  │ Summarizer  │
│  (Step 1)   │    │  (Step 2)   │    │  (Step 3)   │
└─────────────┘    └─────────────┘    └─────────────┘
      ↓                  ↓                  ↓
 "Key facts..."    "Insights..."   "Executive summary"
- The user message goes to the first agent (researcher)
- The researcher's response is appended to the history
- The analyzer sees: user message + researcher response
- The summarizer sees: user message + researcher + analyzer responses
- The pipeline completes when the last agent finishes
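The history-accumulation pattern above can be sketched in plain Rust, independent of the adk-rust API. The `step` functions here are hypothetical stand-ins for the agents; the real SequentialAgent passes structured events, not plain strings.

```rust
// Minimal sketch of sequential accumulation: each step receives the
// full transcript so far and appends its own response to the history.
fn run_sequential(
    user_message: &str,
    steps: &[(&str, fn(&str) -> String)],
) -> Vec<String> {
    let mut history = vec![format!("user: {user_message}")];
    for (name, step) in steps {
        // Each step sees everything produced before it.
        let transcript = history.join("\n");
        let response = step(&transcript);
        history.push(format!("{name}: {response}"));
    }
    history
}

fn main() {
    // Hypothetical stand-ins for the researcher and analyzer agents.
    let steps: [(&str, fn(&str) -> String); 2] = [
        ("researcher", |input| format!("facts ({} chars of context seen)", input.len())),
        ("analyzer", |input| format!("insights ({} chars of context seen)", input.len())),
    ];
    // The analyzer reports seeing more context than the researcher did.
    for line in run_sequential("Tell me about Rust", &steps) {
        println!("{line}");
    }
}
```

Each later step's context strictly grows, which is why the summarizer in the full example can combine both earlier outputs without any explicit wiring.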
ParallelAgent
ParallelAgent runs all of its sub-agents concurrently. Each agent receives the same input and works independently.
When to Use
- Multiple perspectives on the same topic
- Fan-out processing (same input, different analyses)
- Speed-sensitive multi-task scenarios
Complete Example
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;
#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
dotenvy::dotenv().ok();
let api_key = std::env::var("GOOGLE_API_KEY")?;
let model = Arc::new(GeminiModel::new(&api_key, "gemini-2.5-flash")?);
// Three analysts with DISTINCT personas (important for parallel execution)
let technical = LlmAgentBuilder::new("technical_analyst")
.instruction("You are a senior software architect. \
FOCUS ONLY ON: code quality, system architecture, scalability, \
security vulnerabilities, and tech stack choices. \
Start your response with '🔧 TECHNICAL:' and give 2-3 bullet points.")
.model(model.clone())
.build()?;
let business = LlmAgentBuilder::new("business_analyst")
.instruction("You are a business strategist and MBA graduate. \
FOCUS ONLY ON: market opportunity, revenue model, competition, \
cost structure, and go-to-market strategy. \
Start your response with '💼 BUSINESS:' and give 2-3 bullet points.")
.model(model.clone())
.build()?;
let user_exp = LlmAgentBuilder::new("ux_analyst")
.instruction("You are a UX researcher and designer. \
FOCUS ONLY ON: user journey, accessibility, pain points, \
visual design, and user satisfaction metrics. \
Start your response with '🎨 UX:' and give 2-3 bullet points.")
.model(model.clone())
.build()?;
// Create parallel agent
let multi_analyst = ParallelAgent::new(
"multi_perspective",
vec![Arc::new(technical), Arc::new(business), Arc::new(user_exp)],
).with_description("Technical + Business + UX analysis in parallel");
println!("⚡ Parallel Analysis: Technical | Business | UX");
println!(" (All three run simultaneously!)");
println!();
Launcher::new(Arc::new(multi_analyst)).run().await?;
Ok(())
}
💡 Tip: Make the instructions for parallel agents highly explicit, with distinct personas, focus areas, and response prefixes. This ensures each agent produces distinctive output.
Example Interaction
You: Evaluate a mobile banking app
🔧 TECHNICAL:
• Requires robust API security: OAuth 2.0, certificate pinning, encrypted storage
• Offline mode with sync requires complex state management and conflict resolution
• Biometric auth integration varies significantly across iOS/Android platforms
💼 BUSINESS:
• Highly competitive market - need unique differentiator (neobanks, traditional banks)
• Revenue model: interchange fees, premium tiers, or lending products cross-sell
• Regulatory compliance costs significant: PCI-DSS, regional banking laws, KYC/AML
🎨 UX:
• Critical: fast task completion - check balance must be < 3 seconds
• Accessibility essential: screen reader support, high contrast mode, large touch targets
• Trust indicators important: security badges, familiar banking patterns
How It Works
┌─────────────────┐
│ User Message │
└────────┬────────┘
┌───────────────────┼───────────────────┐
↓ ↓ ↓
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Technical │ │ Business │ │ UX │
│ Analyst │ │ Analyst │ │ Analyst │
└──────┬──────┘ └──────┬──────┘ └──────┬──────┘
↓ ↓ ↓
(response 1) (response 2) (response 3)
All agents start at the same time; results are streamed as each one finishes.
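The fan-out can be sketched with plain threads, independent of the adk-rust API. The worker functions are hypothetical stand-ins for the three analyst agents; note that this sketch collects results in spawn order, whereas the real ParallelAgent streams them as they complete.

```rust
use std::thread;

// Minimal sketch of fan-out: the same input goes to every worker,
// and each produces an independent result on its own thread.
fn run_parallel(input: &str, workers: Vec<fn(&str) -> String>) -> Vec<String> {
    let handles: Vec<_> = workers
        .into_iter()
        .map(|worker| {
            let input = input.to_string();
            // Each worker gets its own copy of the shared input.
            thread::spawn(move || worker(&input))
        })
        .collect();
    // Wait for every worker and gather the independent results.
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    // Hypothetical stand-ins for the technical/business/UX analysts.
    let workers: Vec<fn(&str) -> String> = vec![
        |s| format!("🔧 TECHNICAL: analysis of {s}"),
        |s| format!("💼 BUSINESS: analysis of {s}"),
        |s| format!("🎨 UX: analysis of {s}"),
    ];
    for result in run_parallel("a mobile banking app", workers) {
        println!("{result}");
    }
}
```

Because the workers never see each other's output, their instructions must carry the full differentiation, which is what the tip about distinct personas and prefixes addresses.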
LoopAgent
LoopAgent runs its sub-agents repeatedly until an exit condition is met or the maximum number of iterations is reached.
When to Use
- Iterative refinement (draft → review → improve → repeat)
- Retry logic with improvement
- Quality gates that need multiple passes
ExitLoopTool
To exit the loop early, give an agent the ExitLoopTool. When it is called, it signals the loop to stop.
Complete Example
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;
#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
dotenvy::dotenv().ok();
let api_key = std::env::var("GOOGLE_API_KEY")?;
let model = Arc::new(GeminiModel::new(&api_key, "gemini-2.5-flash")?);
// Critic agent evaluates content
let critic = LlmAgentBuilder::new("critic")
.instruction("Review the content for quality. Score it 1-10 and list \
specific improvements needed. Be constructive but critical.")
.model(model.clone())
.build()?;
// Refiner agent improves based on critique
let refiner = LlmAgentBuilder::new("refiner")
.instruction("Apply the critique to improve the content. \
If the score is 8 or higher, call exit_loop to finish. \
Otherwise, provide an improved version.")
.model(model.clone())
.tool(Arc::new(ExitLoopTool::new())) // Can exit the loop
.build()?;
// Create inner sequential: critic → refiner
let critique_refine = SequentialAgent::new(
"critique_refine_step",
vec![Arc::new(critic), Arc::new(refiner)],
);
// Wrap in loop with max 3 iterations
let iterative_improver = LoopAgent::new(
"iterative_improver",
vec![Arc::new(critique_refine)],
).with_max_iterations(3)
.with_description("Critique-refine loop (max 3 passes)");
println!("🔄 Iterative Improvement Loop");
println!(" critic → refiner → repeat (max 3x or until quality >= 8)");
println!();
Launcher::new(Arc::new(iterative_improver)).run().await?;
Ok(())
}
Example Interaction
You: Write a tagline for a coffee shop
🔄 Iteration 1
[critic] Score: 5/10. "Good coffee here" is too generic. Needs:
- Unique value proposition
- Emotional connection
- Memorable phrasing
[refiner] Improved: "Where every cup tells a story"
🔄 Iteration 2
[critic] Score: 7/10. Better! But could be stronger:
- More action-oriented
- Hint at the experience
[refiner] Improved: "Brew your perfect moment"
🔄 Iteration 3
[critic] Score: 8/10. Strong, action-oriented, experiential.
Minor: could be more distinctive.
[refiner] Score is 8+, quality threshold met!
[exit_loop called]
Final: "Brew your perfect moment"
How It Works
┌──────────────────────────────────────────┐
│ LoopAgent                                │
│  ┌────────────────────────────────────┐  │
│  │ SequentialAgent                    │  │
│  │  ┌──────────┐    ┌──────────────┐  │  │
│  │  │  Critic  │ →  │   Refiner    │  │  │
│  │  │ (review) │    │ (improve or  │  │  │
│  │  └──────────┘    │  exit_loop)  │  │  │
│  │                  └──────────────┘  │  │
│  └────────────────────────────────────┘  │
│           ↑_____________↓                │
│         repeat until exit                │
└──────────────────────────────────────────┘
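The loop's control flow can be sketched in plain Rust, independent of the adk-rust API. The `critique` and `refine` functions are hypothetical stand-ins for the critic and refiner agents, and the score-8 early return plays the role of the refiner calling exit_loop.

```rust
// Minimal sketch of a critique-refine loop with a max-iteration cap
// and an early quality exit, mirroring LoopAgent + ExitLoopTool.
fn run_loop(
    mut draft: String,
    max_iterations: usize,
    critique: fn(&str) -> u32,
    refine: fn(&str) -> String,
) -> (String, usize) {
    for iteration in 1..=max_iterations {
        let score = critique(&draft);
        if score >= 8 {
            // Equivalent of the refiner calling exit_loop.
            return (draft, iteration);
        }
        draft = refine(&draft);
    }
    // Safety limit reached without hitting the quality bar.
    (draft, max_iterations)
}

fn main() {
    // Toy critic: score grows with length; toy refiner: append detail.
    let critique: fn(&str) -> u32 = |d| d.len() as u32 / 4;
    let refine: fn(&str) -> String = |d| format!("{d} (refined)");
    let (final_draft, n) = run_loop("Tagline".to_string(), 3, critique, refine);
    println!("after {n} iteration(s): {final_draft}");
}
```

The max-iteration cap matters: without it, a critic that never scores 8 would loop forever, which is why `.with_max_iterations(...)` is recommended on every LoopAgent.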
ConditionalAgent (Rule-Based)
ConditionalAgent branches execution based on a synchronous, rule-based condition. Use it for deterministic routing, such as A/B tests or environment-based routing.
ConditionalAgent::new("router", |ctx| ctx.session().state().get("premium")..., premium_agent)
.with_else(basic_agent)
Note: For intelligent, LLM-based routing, use LlmConditionalAgent instead.
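The rule-based branch can be sketched in plain Rust, independent of the adk-rust API. The `State` struct and the two handlers are hypothetical stand-ins for session state and the premium/basic agents.

```rust
// Hypothetical stand-in for session state consulted by the condition.
struct State {
    premium: bool,
}

// Minimal sketch of rule-based routing: a synchronous predicate picks
// one of two handlers, mirroring ConditionalAgent::with_else.
struct Router {
    condition: fn(&State) -> bool,
    if_handler: fn(&str) -> String,
    else_handler: fn(&str) -> String,
}

impl Router {
    fn handle(&self, state: &State, message: &str) -> String {
        // Deterministic branch: no LLM call is involved.
        if (self.condition)(state) {
            (self.if_handler)(message)
        } else {
            (self.else_handler)(message)
        }
    }
}

fn main() {
    let router = Router {
        condition: |state| state.premium,
        if_handler: |msg| format!("premium agent handles: {msg}"),
        else_handler: |msg| format!("basic agent handles: {msg}"),
    };
    println!("{}", router.handle(&State { premium: true }, "help"));
    println!("{}", router.handle(&State { premium: false }, "help"));
}
```

Because the condition is an ordinary function, the routing is fully testable and reproducible, which is the point of choosing this over LLM-based routing.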
LlmConditionalAgent (LLM-Based)
LlmConditionalAgent uses an LLM to classify the user input and route it to the appropriate sub-agent. It is ideal for intelligent routing where the routing decision requires understanding the content.
When to Use
- Intent classification - route based on what the user is asking for
- Multi-way routing - more than 2 destinations
- Context-aware routing - requires understanding, not just keywords
Complete Example
use adk_rust::prelude::*;
use adk_rust::Launcher;
use std::sync::Arc;
#[tokio::main]
async fn main() -> std::result::Result<(), Box<dyn std::error::Error>> {
dotenvy::dotenv().ok();
let api_key = std::env::var("GOOGLE_API_KEY")?;
let model = Arc::new(GeminiModel::new(&api_key, "gemini-2.5-flash")?);
// Create specialist agents
let tech_agent: Arc<dyn Agent> = Arc::new(
LlmAgentBuilder::new("tech_expert")
.instruction("You are a senior software engineer. Be precise and technical.")
.model(model.clone())
.build()?
);
let general_agent: Arc<dyn Agent> = Arc::new(
LlmAgentBuilder::new("general_helper")
.instruction("You are a friendly assistant. Explain simply, use analogies.")
.model(model.clone())
.build()?
);
let creative_agent: Arc<dyn Agent> = Arc::new(
LlmAgentBuilder::new("creative_writer")
.instruction("You are a creative writer. Be imaginative and expressive.")
.model(model.clone())
.build()?
);
// LLM classifies the query and routes accordingly
let router = LlmConditionalAgent::new("smart_router", model.clone())
.instruction("Classify the user's question as exactly ONE of: \
'technical' (coding, debugging, architecture), \
'general' (facts, knowledge, how-to), \
'creative' (writing, stories, brainstorming). \
Respond with ONLY the category name.")
.route("technical", tech_agent)
.route("general", general_agent.clone())
.route("creative", creative_agent)
.default_route(general_agent)
.build()?;
println!("🧠 LLM-based smart router");
Launcher::new(Arc::new(router)).run().await?;
Ok(())
}
Example Interaction
You: How do I fix a borrow error in Rust?
[Routed to: technical]
[Agent: tech_expert]
A borrow error occurs when Rust's ownership rules are violated...
You: What's the capital of France?
[Routed to: general]
[Agent: general_helper]
The capital of France is Paris! It's a beautiful city...
You: Write me a haiku about the moon
[Routed to: creative]
[Agent: creative_writer]
Silver disc on high,
quiet waves in dancing shade,
night whispers secrets.
How It Works
┌─────────────────┐
│  User Message   │
└────────┬────────┘
         ↓
┌─────────────────┐
│  LLM Classify   │  "technical" / "general" / "creative"
│ (smart_router)  │
└────────┬────────┘
         ↓
    ┌────┴────┬──────────┐
    ↓         ↓          ↓
┌───────┐ ┌───────┐ ┌─────────┐
│ Tech  │ │General│ │Creative │
│Expert │ │Helper │ │ Writer  │
└───────┘ └───────┘ └─────────┘
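The dispatch step after classification can be sketched in plain Rust, independent of the adk-rust API. The handlers are hypothetical stand-ins for the specialist agents, and the label argument stands in for the LLM's classification output.

```rust
use std::collections::HashMap;

// Minimal sketch of label-based routing with a fallback, mirroring
// LlmConditionalAgent's route(...) / default_route(...).
fn route(
    label: &str,
    routes: &HashMap<&str, fn(&str) -> String>,
    default: fn(&str) -> String,
    message: &str,
) -> String {
    // An unknown or misspelled label falls through to the default
    // route, so a stray classifier output never goes unhandled.
    match routes.get(label) {
        Some(handler) => handler(message),
        None => default(message),
    }
}

fn main() {
    let mut routes: HashMap<&str, fn(&str) -> String> = HashMap::new();
    routes.insert("technical", |m| format!("tech_expert: {m}"));
    routes.insert("creative", |m| format!("creative_writer: {m}"));
    let default: fn(&str) -> String = |m| format!("general_helper: {m}");

    // Pretend the LLM classified the message as "technical".
    println!("{}", route("technical", &routes, default, "fix my borrow error"));
    // An unexpected label falls back to the general helper.
    println!("{}", route("unknown", &routes, default, "hello"));
}
```

This is also why the example's instruction insists on "Respond with ONLY the category name": the classifier output must match a route key exactly, or everything lands on the default route.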
Composing Workflow Agents
Workflow agents can be nested to build complex patterns.
Sequential + Parallel + Loop
use adk_rust::prelude::*;
use std::sync::Arc;
// (Assumes tech_analyst, biz_analyst, critic, refiner, and model are
// built as in the earlier examples.)
// 1. Parallel analysis from multiple perspectives
let parallel_analysis = ParallelAgent::new(
"multi_analysis",
vec![Arc::new(tech_analyst), Arc::new(biz_analyst)],
);
// 2. Synthesize the parallel results
let synthesizer = LlmAgentBuilder::new("synthesizer")
.instruction("Combine all analyses into a unified recommendation.")
.model(model.clone())
.build()?;
// 3. Quality loop: critique and refine
let quality_loop = LoopAgent::new(
"quality_check",
vec![Arc::new(critic), Arc::new(refiner)],
).with_max_iterations(2);
// Final pipeline: parallel → synthesize → quality loop
let full_pipeline = SequentialAgent::new(
"full_analysis_pipeline",
vec![
Arc::new(parallel_analysis),
Arc::new(synthesizer),
Arc::new(quality_loop),
],
);
Tracing Workflow Execution
To see what happens inside a workflow, process the event stream directly:
use adk_rust::prelude::*;
use adk_rust::runner::{Runner, RunnerConfig};
use adk_rust::futures::StreamExt;
use std::sync::Arc;
// Create pipeline as before...
// Use Runner instead of Launcher for detailed control
let session_service = Arc::new(InMemorySessionService::new());
let runner = Runner::new(RunnerConfig {
app_name: "workflow_trace".to_string(),
agent: Arc::new(pipeline),
session_service: session_service.clone(),
artifact_service: None,
memory_service: None,
run_config: None,
})?;
let session = session_service.create(CreateRequest {
app_name: "workflow_trace".to_string(),
user_id: "user".to_string(),
session_id: None,
state: Default::default(),
}).await?;
let mut stream = runner.run(
"user".to_string(),
session.id().to_string(),
Content::new("user").with_text("Analyze Rust"),
).await?;
// Process each event to see workflow execution
while let Some(event) = stream.next().await {
let event = event?;
// Show which agent is responding
println!("📍 Agent: {}", event.author);
// Show the response content
if let Some(content) = event.content() {
for part in &content.parts {
if let Part::Text { text } = part {
println!(" {}", text);
}
}
}
println!();
}
API Reference
SequentialAgent
SequentialAgent::new("name", vec![agent1, agent2, agent3])
.with_description("Optional description")
.before_callback(callback) // Called before execution
.after_callback(callback) // Called after execution
ParallelAgent
ParallelAgent::new("name", vec![agent1, agent2, agent3])
.with_description("Optional description")
.before_callback(callback)
.after_callback(callback)
LoopAgent
LoopAgent::new("name", vec![agent1, agent2])
.with_max_iterations(5) // Safety limit (recommended)
.with_description("Optional description")
.before_callback(callback)
.after_callback(callback)
ConditionalAgent
ConditionalAgent::new("name", |ctx| condition_fn, if_agent)
.with_else(else_agent) // Optional else branch
.with_description("Optional description")
ExitLoopTool
// Add to an agent to let it exit a LoopAgent
.tool(Arc::new(ExitLoopTool::new()))
Previous: LlmAgent | Next: Multi-Agent Systems →