This is part 4 of the AI Recruiting Pipeline Epic.
Static message templates don't scale. "Hi {first_name}, check out this job!" works once. The second time feels robotic. The tenth time, the candidate unsubscribes.
Effective outreach requires context:
- What have we already discussed?
- What are their preferences?
- What jobs match those preferences?
- What's the right next step in this conversation?
This is where LLMs shine — generating personalized messages that feel human because they consider the full context.
The Architecture
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│    Lead     │  →  │   Context   │  →  │     LLM     │
│   Record    │     │   Builder   │     │    Call     │
└─────────────┘     └─────────────┘     └─────────────┘
       ↓                   ↓                   ↓
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│ Preferences │     │  Matching   │     │  Suggested  │
│  + History  │     │    Jobs     │     │  Response   │
└─────────────┘     └─────────────┘     └─────────────┘
The agent:
1. Gathers context from the lead record
2. Fetches conversation history and matching jobs
3. Calls the LLM with structured context
4. Creates a SuggestedResponse for human review
The Conversation Agent
class ConversationAgent
  attr_reader :lead

  def initialize(lead)
    @lead = lead
  end

  def generate_suggestion
    context = build_context
    suggested_body = generate_message(context)

    SuggestedResponse.create!(
      lead:,
      suggested_body:,
      prompt_context: context
    )
  end

  private

  def client
    @client ||= OpenAI::Client.new
  end

  def generate_message(context)
    response = client.chat(
      parameters: {
        model: "gpt-4o",
        messages: [
          { role: "system", content: system_prompt },
          { role: "user", content: user_prompt(context) }
        ],
        max_tokens: 300,
        temperature: 0.7
      }
    )

    response.dig("choices", 0, "message", "content")&.strip || fallback_message
  rescue StandardError => e
    Rails.logger.error "[ConversationAgent] LLM error: #{e.message}"
    fallback_message
  end
end
The agent outputs a SuggestedResponse — never sends directly. Humans review before sending.
The System Prompt
The system prompt defines personality and priorities:
def system_prompt
  <<~PROMPT
    You are a friendly healthcare recruiting assistant. Your communication style is:
    - Warm, professional, and concise (SMS should be brief)
    - Helpful without being pushy
    - Focused on understanding needs and matching with suitable positions

    Your goals in priority order:
    1. If new candidate: Welcome them and ask about preferences
    2. If preferences unknown: Gather key info (location, shift, hours)
    3. If preferences known: Recommend matching jobs with brief descriptions
    4. If job shared: Follow up on interest
    5. If applied: Check on status and offer support

    Keep messages under 160 characters when possible (SMS limit).
    Never make up information — only reference provided data.
    Include first name when natural.
  PROMPT
end
Priority ordering tells the model what matters most. Character limits constrain output length.
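The prompt asks for brevity, but the model can still overrun. A hard guard on the Ruby side is cheap insurance; this `truncate_sms` helper is a hypothetical addition, not part of the agent shown above:

```ruby
# Trim a generated message to one SMS segment, cutting at a word
# boundary and appending an ellipsis only when truncation happened.
SMS_LIMIT = 160

def truncate_sms(body, limit: SMS_LIMIT)
  return body if body.length <= limit

  # Leave room for the ellipsis, then back up to the last full word.
  cut = body[0, limit - 1]
  cut = cut[0, cut.rindex(" ") || cut.length].rstrip
  "#{cut}…"
end
```

Running the suggestion through a guard like this before saving keeps a verbose completion from silently splitting into multiple SMS segments.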
Building Context
The user prompt provides all available context:
def build_context
  {
    lead_name: lead.full_name,
    status: determine_status,
    has_preferences: lead.has_preferences?,
    preferences: lead.has_preferences? ? build_preferences : {},
    conversation_history: build_conversation_history,
    job_matches: lead.has_preferences? ? JobMatcher.for_lead(lead, limit: 3) : []
  }
end

def user_prompt(context)
  parts = ["Generate an appropriate SMS for this candidate:\n"]
  parts << "## Candidate Info"
  parts << "Name: #{context[:lead_name]}"
  parts << "Status: #{context[:status]}"

  if context[:has_preferences]
    parts << "\n## Preferences"
    context[:preferences].each { |k, v| parts << "- #{k}: #{v}" }
  end

  if context[:conversation_history].present?
    parts << "\n## Recent Messages (newest first)"
    context[:conversation_history].each do |msg|
      direction = msg[:incoming] ? "THEM" : "US"
      parts << "[#{direction}] #{msg[:body]}"
    end
  end

  if context[:job_matches].present?
    parts << "\n## Matching Jobs"
    context[:job_matches].each do |match|
      parts << "- #{match.name} (#{match.shift} shift, #{match.hours}h/wk)"
    end
  end

  parts << "\n## Your Task"
  parts << task_instruction(context)
  parts.join("\n")
end
Structured sections help the model parse context. Recent conversation history (limited to 10 messages) prevents token explosion.
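The `build_conversation_history` helper isn't shown above; here's a plain-Ruby sketch of the 10-message cap, assuming messages arrive as hashes with `:body`, `:incoming`, and `:sent_at` keys:

```ruby
# Keep only the newest N messages, newest first, so the prompt's
# token budget stays bounded no matter how long the conversation runs.
HISTORY_LIMIT = 10

def build_conversation_history(messages, limit: HISTORY_LIMIT)
  messages
    .sort_by { |m| m[:sent_at] }
    .last(limit)                        # newest `limit` messages
    .reverse                            # newest first, matching the prompt header
    .map { |m| { body: m[:body], incoming: m[:incoming] } }
end
```

In the Rails version this would be an ActiveRecord query (`order(sent_at: :desc).limit(10)`), but the shape of the output is the same.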
State-Based Task Instructions
Different conversation states need different approaches:
def task_instruction(context)
  case context[:status]
  when "new"
    "Welcome this new candidate and ask about their job preferences."
  when "awaiting_preferences"
    "Continue gathering preferences. Focus on what we don't know yet."
  when "has_preferences"
    if context[:job_matches].present?
      "Share 1-2 matching jobs. Be specific about why they match."
    else
      "Acknowledge preferences and let them know we're looking for matches."
    end
  when "job_shared"
    "Follow up on shared jobs. Ask if they'd like help applying."
  else
    "Continue naturally based on context."
  end
end
State detection uses heuristics:
def determine_status
  return "new" if lead.conversation.messages_count.zero?
  return "awaiting_preferences" unless lead.has_preferences?

  recent = lead.conversation.messages.outgoing.recent.limit(5)
  return "job_shared" if recent.any? { |m| m.body&.include?("apply") }

  "has_preferences"
end
Human Review
The agent creates suggestions, not sent messages:
class SuggestedResponse < ApplicationRecord
  belongs_to :lead

  enum :status, { pending: 0, approved: 1, rejected: 2, edited: 3 }

  def approve_and_send!
    transaction do
      update!(status: :approved, approved_at: Time.current)
      MessageSender.new(lead, body: suggested_body).send!
    end
  end

  def edit_and_send!(new_body)
    transaction do
      update!(status: :edited, final_body: new_body, approved_at: Time.current)
      MessageSender.new(lead, body: new_body).send!
    end
  end
end
Recruiters review suggestions in a queue, approving, editing, or rejecting each one. Edited messages train future prompts.
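"Train future prompts" can start very simply: surface the words recruiters most often delete. A sketch over (suggested, final) string pairs, with helper names of my own invention:

```ruby
# Word-level diff between a suggestion and its human edit. Aggregated
# over many edits, frequently removed words are candidates for an
# explicit "avoid these phrases" line in the system prompt.
def edit_diff(suggested, final)
  a = suggested.downcase.scan(/[\w']+/)
  b = final.downcase.scan(/[\w']+/)
  { removed: a - b, added: b - a }
end

def top_removed_words(pairs, top: 5)
  pairs
    .flat_map { |s, f| edit_diff(s, f)[:removed] }
    .tally
    .max_by(top) { |_, count| count }
    .to_h
end
```

Feeding `top_removed_words` the `suggested_body`/`final_body` pairs from edited `SuggestedResponse` rows closes the loop without any model fine-tuning.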
Fallback Messages
When LLM calls fail, use rule-based fallbacks:
def fallback_message
  case determine_status
  when "new"
    "Hi #{lead.first_name}! Thanks for your interest. What type of position are you looking for?"
  when "awaiting_preferences"
    "What shift works best for you? Days, evenings, or nights?"
  else
    "Hi #{lead.first_name}, checking in! Any updates on your job search?"
  end
end
Fallbacks ensure the system degrades gracefully during API outages.
Measuring Quality
Track how suggestions fare in review by grouping on status:
SuggestedResponse
  .group(:status)
  .count
# => { "approved" => 142, "edited" => 38, "rejected" => 12 }
High rejection rates signal prompt problems. Edited messages show what humans prefer.
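Raw counts hide the trend; a rate is easier to watch over time. A small helper over the grouped counts, treating edits as partial wins since the suggestion was close enough to ship after tweaking:

```ruby
# Approval rate counts approved and edited suggestions as successes;
# rejections are the real failures worth investigating.
def approval_rate(counts)
  total = counts.values.sum.to_f
  return 0.0 if total.zero?

  (counts.fetch("approved", 0) + counts.fetch("edited", 0)) / total
end

counts = { "approved" => 142, "edited" => 38, "rejected" => 12 }
approval_rate(counts).round(3) # => 0.938
```

Sliced by week, a dropping rate flags prompt regressions before recruiters start complaining.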
The Takeaway
LLM conversation agents work when they:
- Gather rich context — Conversation history, preferences, available data
- Use structured prompts — Clear sections, explicit task instructions
- Keep humans in the loop — Suggestions for review, not auto-send
- Fail gracefully — Rule-based fallbacks for API failures
- Learn from edits — Track what humans change to improve prompts
The result is personalized outreach that scales — generated in milliseconds, approved in seconds, effective because it considers context.