Deliberative Agent Pattern
The Deliberative Agent Pattern creates agents that plan and reason about their actions before execution, using internal models to achieve long-term goals.
Pattern Overview
This pattern implements agents that maintain internal world models, plan sequences of actions, and deliberate about the best course of action to achieve their objectives.
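Before the full implementation below, the core sense–deliberate–act cycle can be sketched in isolation. This is a minimal illustration, not part of the pattern's canonical vocabulary: the names `State` and `deliberate`, and the rule "pick any action whose effects establish an unmet goal variable", are assumptions chosen for brevity.

```rust
use std::collections::HashMap;

// Illustrative state type: named variables with numeric values.
type State = HashMap<String, f64>;

// One deliberation step: pick any action whose effects establish a goal
// variable that the current state does not yet satisfy.
fn deliberate(state: &State, goal: &State, actions: &[(String, State)]) -> Option<String> {
    actions.iter().find_map(|(name, effects)| {
        let helps = effects
            .iter()
            .any(|(k, v)| goal.get(k) == Some(v) && state.get(k) != Some(v));
        helps.then(|| name.clone())
    })
}

fn main() {
    let mut state: State = HashMap::from([("door_open".to_string(), 0.0)]);
    let goal: State = HashMap::from([("door_open".to_string(), 1.0)]);
    let actions = vec![(
        "open_door".to_string(),
        HashMap::from([("door_open".to_string(), 1.0)]),
    )];

    // Sense -> deliberate -> act loop: runs until no action helps.
    while let Some(name) = deliberate(&state, &goal, &actions) {
        println!("acting: {}", name);
        // Act: apply the chosen action's effects to the world model
        let effects = &actions.iter().find(|(n, _)| n == &name).unwrap().1;
        for (k, v) in effects {
            state.insert(k.clone(), *v);
        }
    }
}
```

The loop terminates once the world model satisfies the goal, since `deliberate` then returns `None`.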
Structure
use std::collections::HashMap;
// World state representation
#[derive(Debug, Clone)]
pub struct WorldState {
    pub variables: HashMap<String, f64>,
    pub timestamp: u64,
}

// Goal definition
#[derive(Debug, Clone)]
pub struct Goal {
    pub name: String,
    pub target_state: HashMap<String, f64>,
    pub priority: u8,
    pub deadline: Option<u64>,
}

// Action with preconditions and effects
#[derive(Debug, Clone)]
pub struct Action {
    pub name: String,
    pub preconditions: HashMap<String, f64>,
    pub effects: HashMap<String, f64>,
    pub cost: f64,
}

// Plan step
#[derive(Debug, Clone)]
pub struct PlanStep {
    pub action: Action,
    pub expected_state: WorldState,
}

// Deliberative agent implementation
pub struct DeliberativeAgent {
    id: String,
    world_model: WorldState,
    goals: Vec<Goal>,
    available_actions: Vec<Action>,
    current_plan: Option<Vec<PlanStep>>,
}
impl DeliberativeAgent {
    pub fn new(id: &str) -> Self {
        Self {
            id: id.to_string(),
            world_model: WorldState {
                variables: HashMap::new(),
                timestamp: 0,
            },
            goals: Vec::new(),
            available_actions: Vec::new(),
            current_plan: None,
        }
    }

    pub fn update_world_model(&mut self, observations: HashMap<String, f64>) {
        for (key, value) in observations {
            self.world_model.variables.insert(key, value);
        }
        self.world_model.timestamp += 1;
    }

    pub fn add_goal(&mut self, goal: Goal) {
        self.goals.push(goal);
        // Re-plan whenever a new goal is added
        self.replan();
    }

    pub fn add_action(&mut self, action: Action) {
        self.available_actions.push(action);
    }
    // Simplified planner: searches only one action deep, so it can
    // only find single-step plans
    fn plan_to_goal(&self, goal: &Goal) -> Option<Vec<PlanStep>> {
        let mut plan = Vec::new();
        let current_state = self.world_model.clone();
        // Find a single action that achieves the goal from the current state
        for action in &self.available_actions {
            if self.action_applicable(action, &current_state) {
                let new_state = self.apply_action(action, &current_state);
                if self.goal_achieved(goal, &new_state) {
                    plan.push(PlanStep {
                        action: action.clone(),
                        expected_state: new_state,
                    });
                    return Some(plan);
                }
            }
        }
        None
    }
    fn action_applicable(&self, action: &Action, state: &WorldState) -> bool {
        for (var, required_value) in &action.preconditions {
            if let Some(current_value) = state.variables.get(var) {
                if (current_value - required_value).abs() > 0.1 {
                    return false;
                }
            } else {
                return false;
            }
        }
        true
    }

    fn apply_action(&self, action: &Action, state: &WorldState) -> WorldState {
        let mut new_state = state.clone();
        for (var, effect) in &action.effects {
            new_state.variables.insert(var.clone(), *effect);
        }
        new_state.timestamp += 1;
        new_state
    }

    fn goal_achieved(&self, goal: &Goal, state: &WorldState) -> bool {
        for (var, target_value) in &goal.target_state {
            if let Some(current_value) = state.variables.get(var) {
                if (current_value - target_value).abs() > 0.1 {
                    return false;
                }
            } else {
                return false;
            }
        }
        true
    }
    pub fn replan(&mut self) {
        // Plan toward the highest-priority goal
        if let Some(goal) = self.goals.iter().max_by_key(|g| g.priority) {
            self.current_plan = self.plan_to_goal(goal);
        }
    }

    pub fn execute_next_action(&mut self) -> Option<String> {
        if let Some(plan) = &mut self.current_plan {
            if let Some(step) = plan.first() {
                let action_name = step.action.name.clone();
                self.world_model = step.expected_state.clone();
                plan.remove(0);
                if plan.is_empty() {
                    self.current_plan = None;
                }
                return Some(action_name);
            }
        }
        None
    }
}

Usage Example
let mut agent = DeliberativeAgent::new("PlanningAgent");

// Add available actions
agent.add_action(Action {
    name: "collect_data".to_string(),
    preconditions: HashMap::from([("system_ready".to_string(), 1.0)]),
    effects: HashMap::from([("data_collected".to_string(), 1.0)]),
    cost: 1.0,
});
agent.add_action(Action {
    name: "analyze_data".to_string(),
    preconditions: HashMap::from([("data_collected".to_string(), 1.0)]),
    effects: HashMap::from([("analysis_complete".to_string(), 1.0)]),
    cost: 2.0,
});

// Set the initial world state
agent.update_world_model(HashMap::from([("system_ready".to_string(), 1.0)]));

// Add a goal (this triggers planning via add_goal -> replan)
agent.add_goal(Goal {
    name: "complete_analysis".to_string(),
    target_state: HashMap::from([("analysis_complete".to_string(), 1.0)]),
    priority: 10,
    deadline: None,
});

// Execute planned actions. Note: because the simplified planner searches
// only one action deep, this two-step goal (collect_data then analyze_data)
// yields no plan; a multi-step search is needed to chain the actions.
while let Some(action) = agent.execute_next_action() {
    println!("Executing action: {}", action);
}

Benefits
- Goal-Oriented: Systematic pursuit of objectives
- Optimal Planning: Can find efficient action sequences
- Predictive: Reasons about future states
- Flexible: Adapts plans when conditions change
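The "Flexible" benefit depends on noticing when observations diverge from a plan step's expected state, so the agent can trigger `replan`. A minimal divergence check might look like the following; the function name `plan_invalidated` is hypothetical, and the tolerance mirrors the 0.1 used in the implementation above.

```rust
use std::collections::HashMap;

// Returns true when any expected variable is missing or outside tolerance,
// signalling that the current plan's assumptions no longer hold.
fn plan_invalidated(
    expected: &HashMap<String, f64>,
    observed: &HashMap<String, f64>,
    tol: f64,
) -> bool {
    expected.iter().any(|(var, value)| match observed.get(var) {
        Some(actual) => (actual - value).abs() > tol,
        None => true,
    })
}

fn main() {
    let expected = HashMap::from([("data_collected".to_string(), 1.0)]);
    let on_track = HashMap::from([("data_collected".to_string(), 1.0)]);
    let diverged = HashMap::from([("data_collected".to_string(), 0.0)]);
    assert!(!plan_invalidated(&expected, &on_track, 0.1));
    assert!(plan_invalidated(&expected, &diverged, 0.1)); // would trigger replan()
}
```

Calling such a check after each `update_world_model` is one way to decide when re-deliberation is worth its cost.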
Use Cases
- Long-term strategic planning
- Resource optimization and scheduling
- Complex workflow orchestration
- Multi-step problem solving
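For multi-step problem solving, the single-action `plan_to_goal` above must be generalized to search over action sequences. One way to do this, sketched here under simplifying assumptions (breadth-first search, a depth bound instead of a visited set, action costs ignored), chains the two actions from the usage example:

```rust
use std::collections::{HashMap, VecDeque};

type State = HashMap<String, f64>;

#[derive(Clone)]
struct Action {
    name: String,
    preconditions: State,
    effects: State,
}

fn applicable(a: &Action, s: &State) -> bool {
    a.preconditions
        .iter()
        .all(|(k, v)| s.get(k).map_or(false, |c| (c - v).abs() <= 0.1))
}

fn apply(a: &Action, s: &State) -> State {
    let mut next = s.clone();
    for (k, v) in &a.effects {
        next.insert(k.clone(), *v);
    }
    next
}

fn achieved(goal: &State, s: &State) -> bool {
    goal.iter()
        .all(|(k, v)| s.get(k).map_or(false, |c| (c - v).abs() <= 0.1))
}

// BFS over reachable states; returns the action names of the first
// (shortest) plan found, or None if no plan exists within max_depth.
fn plan(start: &State, goal: &State, actions: &[Action], max_depth: usize) -> Option<Vec<String>> {
    let mut frontier: VecDeque<(State, Vec<String>)> =
        VecDeque::from([(start.clone(), Vec::new())]);
    while let Some((state, path)) = frontier.pop_front() {
        if achieved(goal, &state) {
            return Some(path);
        }
        if path.len() >= max_depth {
            continue;
        }
        for a in actions.iter().filter(|a| applicable(a, &state)) {
            let mut next_path = path.clone();
            next_path.push(a.name.clone());
            frontier.push_back((apply(a, &state), next_path));
        }
    }
    None
}

fn main() {
    let actions = vec![
        Action {
            name: "collect_data".to_string(),
            preconditions: HashMap::from([("system_ready".to_string(), 1.0)]),
            effects: HashMap::from([("data_collected".to_string(), 1.0)]),
        },
        Action {
            name: "analyze_data".to_string(),
            preconditions: HashMap::from([("data_collected".to_string(), 1.0)]),
            effects: HashMap::from([("analysis_complete".to_string(), 1.0)]),
        },
    ];
    let start = HashMap::from([("system_ready".to_string(), 1.0)]);
    let goal = HashMap::from([("analysis_complete".to_string(), 1.0)]);
    let p = plan(&start, &goal, &actions, 5);
    println!("plan: {:?}", p);
}
```

A production planner would add a visited-state set, cost-aware search such as A*, or a dedicated planning formalism; this sketch only shows where the single-step planner's gap lies.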