What is an AI Harness: Skills, Context, Hooks, and Permissions — The Orchestration Architecture Controlling AI
Authors: Youngju Kim (@fjvbn20031)
- 1. What Is an AI Harness? — Why a Raw LLM Is Not Enough
- 2. The 7 Building Blocks of a Harness
- 3. Claude Code Harness Architecture Deep Dive
- 4. Building a Custom Harness with the Claude Agent SDK
- 5. Comparing AI Harness Frameworks
- 6. Harness Design Patterns
- 7. Hands-On: Building Your Own Code Review Harness
- 8. Harness Evaluation and Monitoring
- 9. 2025-2026 Harness Trends
- 10. Quiz
- 11. References
1. What Is an AI Harness? — Why a Raw LLM Is Not Enough
The Limits of a Raw LLM
Large language models like GPT-4o, Claude Sonnet, and Gemini Pro possess remarkable capabilities. Yet a model by itself cannot produce a practical AI system.
Ask a raw LLM to "review our project's PR" and here is what happens:
- It cannot read the project code (no tools)
- It does not know which coding conventions to follow (no context)
- It cannot run git diff (no permissions)
- It does not remember previous review history (no memory)
- It cannot post comments automatically after the review (no workflow)
An LLM is a powerful engine, but an engine alone is not a car.
Putting a Harness on a Horse
The word "harness" originally refers to the gear placed on a horse. A wild horse has tremendous power, but without a harness you cannot direct that power where you need it. With a harness, the horse can pull a carriage, plow a field, or haul cargo.
An AI Harness follows the same principle:
- Wild horse = Raw LLM (GPT-4, Claude, Gemini)
- Harness = AI Harness (system prompt, tools, permissions, skills, hooks)
- Carriage / field = Real-world tasks (code review, data analysis, customer support)
A well-designed harness maximizes the horse's strength. A poorly designed harness causes the horse to bolt in the wrong direction — or worse, to do something dangerous.
Raw LLM vs Harnessed LLM
| Aspect | Raw LLM | Harnessed LLM |
|---|---|---|
| Tool access | None | File I/O, API calls, DB queries |
| Context | Conversation only | Project structure, codebase, docs |
| Permissions | All-or-nothing (no permission concept) | Fine-grained access control |
| Memory | Current session only | Long-term memory, project history |
| Workflow | None | Skills, hooks, pipelines |
| Safety | Model-inherent safety only | Guardrails, permissions model, I/O validation |
| Consistency | Depends on prompt | Stabilized by system prompt |
The Paradigm Shift of 2025 AI Engineering
Until 2023, the core of AI engineering was model training — building bigger, smarter models.
By 2025, the center of gravity has shifted completely: model orchestration is the new core.
Why:
- Foundation models are powerful enough — Claude Sonnet 4 and GPT-4o have sufficient intelligence for most tasks
- Differentiation happens in the harness — the same model yields vastly different results depending on harness design
- Enterprise requirements — security, auditing, permission management, and cost tracking are now essential
- Rise of autonomous agents — Devin, Claude Code, and GitHub Copilot Agent all use sophisticated harnesses
The AI Harness is the single most important concept in 2025-2026 AI engineering.
2. The 7 Building Blocks of a Harness
An AI Harness is composed of seven core building blocks. Let us examine each one in depth.
2-1. System Prompt (The Ground Rules)
The system prompt is the AI's constitution. It defines the role, rules, and constraints at the highest level.
Components of a System Prompt
- Role definition: What the AI is and what expertise it possesses
- Behavioral rules: How it should respond and act
- Constraints: What it must never do
- Output format: The shape and structure of responses
- Safety rules: Security, privacy, and copyright guidelines
Claude Code's System Prompt Structure
Claude Code uses a highly sophisticated system prompt:
You are Claude Code, Anthropic's official CLI for Claude.
Given the user's message, you should use the tools available
to complete the task.
Your strengths:
- Searching for code across large codebases
- Analyzing multiple files to understand architecture
- Investigating complex questions
- Performing multi-step research tasks
Guidelines:
- For file searches: search broadly when you don't know
where something lives
- NEVER create files unless absolutely necessary
- ALWAYS prefer editing existing files
The key insight: role (CLI tool), strengths (code search, analysis), and rules (minimize file creation) are clearly separated.
System Prompt Best Practices
```markdown
# Structure of a good system prompt

## 1. Role Declaration (Who)
You are a senior backend engineer.
You specialize in Java/Spring Boot with 10+ years of experience.

## 2. Behavioral Rules (How)
- Check for security vulnerabilities first during code review
- Point out performance issues with Big-O analysis
- Always include code examples with suggestions

## 3. Constraints (Boundaries)
- Do not modify code directly; only suggest
- Do not recommend adding external libraries
- Follow existing architecture patterns

## 4. Output Format (Format)
Present review results in this format:
- [Severity]: Description + Suggestion
```
Practical Examples by Role
Code Reviewer:
You are a senior code reviewer specializing in security
and performance. Review every PR with these priorities:
1. Security vulnerabilities (SQL injection, XSS, CSRF)
2. Performance bottlenecks (N+1 queries, memory leaks)
3. Code maintainability (naming, structure, SOLID)
Never approve code with critical security issues.
Data Analyst:
You are a data analyst with expertise in Python, SQL,
and statistical analysis. When analyzing data:
1. Always validate data quality first
2. Provide confidence intervals for estimates
3. Visualize results with appropriate charts
4. Explain findings in business-friendly language
2-2. Tools
Tools are the interfaces through which AI interacts with the external world. An LLM by itself can only generate text; tools let it read files, execute commands, and call APIs.
How Function Calling Works
The tool-use lifecycle:
- The user sends a request
- The AI decides which tool to use (reasoning)
- The AI generates a tool call (JSON format)
- The harness executes the tool
- The result is returned to the AI
- The AI interprets the result and decides the next action
Tool Definition Structure
Tools are defined with JSON Schema:
```json
{
  "name": "read_file",
  "description": "Reads a file from the local filesystem. Supports text files, images, and PDFs.",
  "input_schema": {
    "type": "object",
    "properties": {
      "file_path": {
        "type": "string",
        "description": "The absolute path to the file to read"
      },
      "offset": {
        "type": "number",
        "description": "The line number to start reading from"
      },
      "limit": {
        "type": "number",
        "description": "The number of lines to read"
      }
    },
    "required": ["file_path"]
  }
}
```
Key principles:
- Name: verb + noun (read_file, search_code, run_command)
- Description: clear enough for the AI to know when to use the tool
- Parameters: strongly typed with JSON Schema, required/optional clearly defined
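These principles can be made concrete with a small validator that checks a tool call against its schema before the harness executes it. The sketch below is illustrative only (the `validate_tool_input` helper and `TYPE_MAP` are not part of any SDK); a real harness would typically use a full JSON Schema validator:

```python
# Illustrative pre-execution check of tool input against a JSON Schema.
TYPE_MAP = {"string": str, "number": (int, float), "object": dict, "boolean": bool}

def validate_tool_input(schema: dict, input_data: dict) -> list[str]:
    """Return a list of validation errors (an empty list means the call is valid)."""
    errors = []
    props = schema.get("properties", {})
    for key in schema.get("required", []):
        if key not in input_data:
            errors.append(f"missing required parameter: {key}")
    for key, value in input_data.items():
        if key not in props:
            errors.append(f"unknown parameter: {key}")
            continue
        expected = TYPE_MAP.get(props[key].get("type"))
        if expected and not isinstance(value, expected):
            errors.append(f"{key}: expected {props[key]['type']}")
    return errors

read_file_schema = {
    "type": "object",
    "properties": {
        "file_path": {"type": "string"},
        "offset": {"type": "number"},
        "limit": {"type": "number"},
    },
    "required": ["file_path"],
}

print(validate_tool_input(read_file_schema, {"file_path": "/tmp/a.txt", "limit": 10}))  # []
print(validate_tool_input(read_file_schema, {"offset": "5"}))  # missing file_path, wrong offset type
```

Rejecting a malformed call before execution gives the model a meaningful error message to correct itself with, which is exactly the "failure handling" principle above.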
Core Tools in Claude Code
| Tool | Purpose | Example |
|---|---|---|
| Bash | Execute shell commands | git status, npm test |
| Read | Read files | Source code, config files |
| Write | Create files | New file creation |
| Edit | Modify files | Change existing code |
| Grep | Pattern search | Search across the codebase |
| Glob | File finder | Filename pattern matching |
| WebSearch | Web search | Latest docs, API references |
| WebFetch | Fetch web pages | Read URL content |
Extending Tools with MCP (Model Context Protocol)
MCP is a standard protocol that lets AI agents access external tools and data sources. Just as USB-C connects diverse devices through a single interface, MCP connects diverse services to AI.
```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "ghp_xxxxxxxxxxxx"
      }
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost/db"
      }
    }
  }
}
```
Adding MCP servers gives the AI new capabilities: managing GitHub issues, querying databases, sending Slack messages, and more.
The 5 Principles of Tool Design
- Single responsibility: One tool, one job
- Clear description: The AI must understand when to use it
- Strongly typed parameters: Strict validation via JSON Schema
- Failure handling: Return meaningful error messages on failure
- Minimize side effects: Reads are safe; writes require confirmation
2-3. Context
Context is all the information the AI needs to understand the current situation. The same question can demand entirely different answers depending on context.
CLAUDE.md: Project-Level Instructions
CLAUDE.md is the project's user manual for the AI. It defines how the AI should behave in that specific project.
```markdown
# Project Guidelines

## Build Commands
- npm run dev: start development server
- npm run build: production build
- npm test: run tests

## Code Conventions
- TypeScript strict mode
- Functional components only (no class components)
- Tailwind CSS instead of CSS-in-JS

## Architecture
- src/components: reusable UI components
- src/hooks: custom hooks
- src/lib: utility functions
- src/app: Next.js App Router pages
```
CLAUDE.md files are loaded hierarchically:
- Home directory CLAUDE.md (global settings)
- Project root CLAUDE.md (project settings)
- Sub-directory CLAUDE.md (module-level settings)
Lower-level settings override higher-level ones.
Codebase Indexing
An AI harness indexes the project's file structure, dependencies, and architecture:
Project structure analysis:
- package.json: dependencies, scripts
- tsconfig.json: TypeScript configuration
- .eslintrc: code style rules
- .gitignore: excluded file patterns
- Directory structure: infer architecture patterns
Memory: Cross-Session Persistence
The .remember/ directory is the AI's long-term memory store:
.remember/
core-memories.md # Project essentials
now.md # Current work state
today.md # Today's activity log
recent.md # Recent work history
archive.md # Old records archive
Context Window Management Strategies
Claude's context window is 200K tokens (up to 1M), but efficient management is critical:
- Summarization: Extract only the essentials from long files
- Chunking: Selectively load only the needed portions
- Prioritization: Favor information relevant to the current task
- Dynamic loading: Fetch additional context on demand
- Sub-agent delegation: Process independent tasks in separate contexts
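The prioritization strategy can be sketched as a simple greedy selector. This `fit_to_budget` helper is a hypothetical illustration (real harnesses combine this with summarization and dynamic loading), assuming each context chunk carries a pre-computed token count and priority:

```python
def fit_to_budget(chunks: list[dict], budget_tokens: int) -> list[dict]:
    """Keep the highest-priority context chunks that fit within the token budget.
    Each chunk: {"text": str, "priority": int, "tokens": int}."""
    selected, used = [], 0
    for chunk in sorted(chunks, key=lambda c: c["priority"], reverse=True):
        if used + chunk["tokens"] <= budget_tokens:
            selected.append(chunk)
            used += chunk["tokens"]
    return selected

chunks = [
    {"text": "current file diff", "priority": 3, "tokens": 800},
    {"text": "full dependency tree", "priority": 1, "tokens": 5000},
    {"text": "CLAUDE.md conventions", "priority": 2, "tokens": 300},
]
# With a 2,000-token budget, the low-priority dependency tree is dropped.
selected = fit_to_budget(chunks, 2000)
```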
2-4. Skills
Skills are bundles of reusable domain knowledge and workflows. Similar to prompt templates, but richer: they include trigger conditions, tool-usage directives, and step-by-step workflows.
SKILL.md File Structure
```markdown
---
trigger: 'when user asks to review a PR'
description: 'Comprehensive code review workflow'
---

# Code Review Skill

## Step 1: Gather Information
- Read the PR description and diff
- Identify changed files and their purposes
- Check the PR against project conventions (CLAUDE.md)

## Step 2: Security Review
- Check for SQL injection, XSS, CSRF vulnerabilities
- Verify input validation on all user inputs
- Ensure secrets are not hardcoded

## Step 3: Performance Review
- Identify N+1 query patterns
- Check for unnecessary re-renders (React)
- Verify proper indexing for DB queries

## Step 4: Generate Review
- Use structured format: severity + description + suggestion
- Include code examples for each suggestion
- Summarize with overall assessment
```
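The frontmatter-plus-body layout can be parsed with a few lines of standard-library Python. This `parse_skill` helper is a simplified illustration, not Claude Code's loader; a real implementation would use a YAML parser for the frontmatter:

```python
def parse_skill(text: str) -> dict:
    """Split a SKILL.md file into frontmatter metadata and the workflow body."""
    meta, body = {}, text
    if text.startswith("---"):
        _, frontmatter, body = text.split("---", 2)
        for line in frontmatter.strip().splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip().strip("'\"")
    return {"meta": meta, "body": body.strip()}

sample = """---
trigger: 'when user asks to review a PR'
description: 'Comprehensive code review workflow'
---
# Code Review Skill
## Step 1: Gather Information
"""
skill = parse_skill(sample)
# skill["meta"]["trigger"] is the activation condition; skill["body"] is the workflow
```

The `trigger` field is what lets the harness match an incoming request against available skills automatically, rather than requiring manual selection.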
Built-in Skills in Claude Code
Claude Code ships with several built-in skills:
- commit: check git status, analyze changes, write commit message, execute commit
- review-pr: analyze PR, code review, write comments
- test-driven-development: write test first, implement, refactor
- brainstorming: diverge ideas, structure, prioritize
Skills vs Prompt Templates
| Aspect | Prompt Template | Skill |
|---|---|---|
| Trigger | Manual selection | Automatic detection |
| Tool usage | None | Tool-use directives included |
| Workflow | Single prompt | Multi-step workflow |
| Context | Static | Dynamic context loading |
| Learning | None | Can incorporate result feedback |
Skill Authoring Guide
Core principles for writing great skills:
- Clear trigger: Precisely define when the skill activates
- Step-by-step decomposition: Break complex tasks into small steps
- Tool mapping: Specify which tool is used at each step
- Exception handling: Define failure scenarios and fallback paths
- Output format: Concretely define the shape of the deliverable
2-5. Hooks
Hooks are scripts that run automatically before or after AI tool use. Like Git's pre-commit and post-commit hooks, they add automated validation and processing to AI actions.
The 4 Types of Hooks
- PreToolUse: Runs before tool execution
  - Purpose: input validation, permission checks, parameter transformation
  - Example: validate a file path before a Write call
- PostToolUse: Runs after tool execution
  - Purpose: result validation, formatting, linting, testing
  - Example: run ESLint automatically after saving a file
- Notification: Runs before a notification is sent
  - Purpose: notification content processing, filtering
- Stop: Runs when the agent stops
  - Purpose: cleanup tasks, state persistence
Configuration Example: settings.json
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write|Edit",
        "command": "echo 'File modification detected'"
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "command": "/bin/sh -c 'cd PROJECT_DIR && npx eslint --fix FILE_PATH 2>/dev/null || true'"
      },
      {
        "matcher": "Bash",
        "command": "/bin/sh -c 'if echo TOOL_INPUT | grep -q \"git commit\"; then cd PROJECT_DIR && npm test; fi'"
      }
    ],
    "Notification": [
      {
        "matcher": "",
        "command": "/bin/sh -c 'echo NOTIFICATION_CONTENT >> /tmp/ai-notifications.log'"
      }
    ]
  }
}
```
Real-World Hook Scenarios
Scenario 1: Auto-format after file save
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write",
        "command": "/bin/sh -c 'npx prettier --write FILE_PATH'"
      }
    ]
  }
}
```
Scenario 2: Auto-test before commit
{
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"command": "/bin/sh -c 'if echo TOOL_INPUT | grep -q \"git commit\"; then npm test || exit 1; fi'"
}
]
}
}
Scenario 3: Block dangerous commands
{
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"command": "/bin/sh -c 'if echo TOOL_INPUT | grep -qE \"rm -rf|drop table|format\"; then echo \"BLOCKED: Dangerous command detected\" && exit 1; fi'"
}
]
}
}
2-6. Permissions
The permissions model is the security layer that defines the scope of actions the AI can perform. It applies the Principle of Least Privilege to AI.
Risk Classification of Actions
| Risk Level | Action Type | Examples | Policy |
|---|---|---|---|
| Safe | Read | File reading, search, lookup | Auto-allow |
| Caution | Write | File modification, creation | Allow after confirmation |
| Dangerous | Delete/Execute | File deletion, deployment | Explicit approval required |
| Forbidden | System change | OS settings, account changes | Always blocked |
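The risk table maps naturally onto a small policy lookup. The sketch below is an illustration of that mapping (the `Policy` enum and `RISK_POLICY` table are hypothetical names, not an SDK API); note that unknown action types fall through to the most restrictive policy:

```python
from enum import Enum

class Policy(Enum):
    AUTO_ALLOW = "auto-allow"
    CONFIRM = "allow after confirmation"
    EXPLICIT = "explicit approval required"
    BLOCK = "always blocked"

# Illustrative mapping based on the risk classification table above
RISK_POLICY = {
    "read": Policy.AUTO_ALLOW,
    "write": Policy.CONFIRM,
    "delete": Policy.EXPLICIT,
    "execute": Policy.EXPLICIT,
    "system": Policy.BLOCK,
}

def policy_for(action_type: str) -> Policy:
    # Fail closed: anything unclassified is treated as forbidden
    return RISK_POLICY.get(action_type, Policy.BLOCK)
```

Failing closed on unclassified actions is the safe default: a new tool added to the harness is blocked until someone deliberately classifies it.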
allowedTools and disallowedTools
```json
{
  "permissions": {
    "allowedTools": [
      "Read",
      "Glob",
      "Grep",
      "Bash(git status)",
      "Bash(git diff)",
      "Bash(npm test)"
    ],
    "disallowedTools": [
      "Bash(rm *)",
      "Bash(sudo *)",
      "Bash(curl * | sh)",
      "Write(/etc/*)",
      "Write(/usr/*)"
    ]
  }
}
```
Sandbox Mode vs Agent Mode
Sandbox Mode:
- Network access blocked
- File system read-only
- Shell commands restricted
- Safe experimentation environment
Agent Mode:
- Network access allowed
- File system read/write
- Shell commands allowed (with pattern-based filtering)
- Real-work execution environment
Applying the Principle of Least Privilege
Steps to apply least privilege to an AI agent:
- Provide only needed tools: A code review agent does not need deployment tools
- Limit file scope: Block access outside the project directory
- Whitelist commands: Only allowed shell commands can run
- Time limits: Prevent long-running executions
- Cost limits: Set a maximum token budget
2-7. Memory
Memory is the mechanism that lets the AI retain past interactions and learned knowledge.
Short-Term vs Long-Term Memory
Short-Term Memory:
- The current conversation session context
- All messages within the context window
- Disappears when the session ends
Long-Term Memory:
- Stored as files in the .remember/ directory
- Persists across sessions
- Project-level or user-level
Memory File Structure
```markdown
# core-memories.md
- This project uses Next.js 14 + TypeScript + Tailwind CSS
- Deployed on Vercel
- Database is PostgreSQL (Supabase)

# now.md
Current task: refactoring the user authentication module
Progress: migrating from NextAuth.js to Lucia Auth
Blocker: social login callback URL configuration issue

# today.md
- 09:30 Completed authentication module analysis
- 10:15 Created Lucia Auth configuration files
- 11:00 Migrating session management logic
- 14:00 Started social login integration
```
Episodic Memory vs Semantic Memory
Episodic Memory (event-based):
- "Yesterday the user asked me to fix an auth bug"
- "Last week we changed the database schema"
- Contains specific events and timestamps
Semantic Memory (knowledge-based):
- "This project uses a REST API"
- "The team follows Airbnb coding conventions"
- Contains general facts and rules
Memory Management Strategies
- Auto-summarization: Extract key points from long conversations
- Importance-based pruning: Keep frequently referenced memories, archive the rest
- Hierarchical structure: Core memories > Recent memories > Archive
- Conflict resolution: Update when new information contradicts existing memories
- Privacy: Encrypt or exclude sensitive information
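Importance-based pruning can be sketched with a simple ranking over reference counts. The `Memory` dataclass and `prune` helper below are illustrative assumptions, not an existing API:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    references: int = 0  # how often this memory has been recalled
    created: float = field(default_factory=time.time)

def prune(memories: list[Memory], keep: int) -> tuple[list[Memory], list[Memory]]:
    """Keep the `keep` most-referenced memories; archive the rest.
    Ties are broken by recency (newer memories survive)."""
    ranked = sorted(memories, key=lambda m: (m.references, m.created), reverse=True)
    return ranked[:keep], ranked[keep:]
```

In the file layout above, the kept list would stay in core-memories.md or recent.md, while the archived list moves to archive.md.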
3. Claude Code Harness Architecture Deep Dive
3-1. The Agent Loop
The heart of Claude Code is the agent loop. Rather than simple question-answer, it is an iterative cycle of using tools, observing results, and deciding the next action.
Agent Loop Flow:
1. User Input
|
2. Load System Prompt + Context
|
3. LLM Reasoning (decide what action to take)
|
4. Decide Tool Call
|
5. Execute Hook (PreToolUse)
| (on validation failure -> return to step 3)
|
6. Execute Tool
|
7. Execute Hook (PostToolUse)
|
8. Observe Result
|
9. More actions needed? -> Yes -> return to step 3
| No
|
10. Generate Final Response
This loop continues until the max_turns limit is reached or the AI judges the task complete.
Practical Example: PR Review Loop
Turn 1: Run git diff (Bash tool)
-> Obtain list of changed files and diffs
Turn 2: Read changed files (Read tool)
-> Understand full context
Turn 3: Read CLAUDE.md (Read tool)
-> Check project conventions
Turn 4: Find related test files (Grep tool)
-> Verify test coverage
Turn 5: Run npm test (Bash tool)
-> Confirm tests pass
Turn 6: Generate review result (final response)
-> Review from security, performance, maintainability perspectives
3-2. Sub-Agent Architecture
Complex tasks are hard for a single agent to handle. Claude Code uses a sub-agent architecture.
Main Agent and Sub-Agents
Main Agent (Orchestrator)
|
+--- Sub-Agent A: "Analyze src/ directory"
| (independent context, file read/search tools)
|
+--- Sub-Agent B: "Analyze tests/ directory"
| (independent context, test execution tools)
|
+--- Sub-Agent C: "Update documentation"
(independent context, file write tools)
Main Agent: integrate results from all sub-agents into the final response
Key characteristics of sub-agents:
- Independent context: Does not pollute the main agent's context
- Parallel execution: Independent tasks can run concurrently
- Result integration: The main agent collects and synthesizes results
Background Agent vs Foreground Agent
- Foreground Agent: The main agent that directly converses with the user
- Background Agent: An asynchronous agent that works independently
Background Agent use cases:
- Large-scale refactoring
- Running tests across many files
- Automatic documentation generation
Worktree Isolation
Isolation via Git worktrees:
Main project (main branch)
|
+--- .claude/worktrees/feature-auth/
| (independent branch, independent working directory)
|
+--- .claude/worktrees/fix-bug-123/
(independent branch, independent working directory)
Each worktree has an independent file system, ensuring that tasks do not interfere with one another.
3-3. settings.json In-Depth
Claude Code configuration is managed through settings.json.
Global vs Project Settings
Settings locations (in priority order):
1. project/.claude/settings.json (highest priority)
2. home-directory/.claude/settings.json (global)
3. Defaults (lowest priority)
Full Settings Structure
```json
{
  "permissions": {
    "allowedTools": ["Read", "Glob", "Grep"],
    "disallowedTools": ["Bash(rm *)"]
  },
  "hooks": {
    "PreToolUse": [],
    "PostToolUse": [],
    "Notification": [],
    "Stop": []
  },
  "env": {
    "NODE_ENV": "development",
    "DEBUG": "true"
  },
  "model": "claude-sonnet-4-20250514",
  "theme": "dark"
}
```
Settings Priority
Project settings > Global settings > Defaults (highest to lowest priority)
Project settings override global settings, allowing different permissions and hooks per project.
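The override behavior amounts to a deep merge where more specific sources win. The sketch below illustrates that logic (the `merge_settings` helper is hypothetical, not how Claude Code is actually implemented), reusing keys from the settings example above:

```python
def merge_settings(defaults: dict, global_cfg: dict, project_cfg: dict) -> dict:
    """Merge settings dicts; later (more specific) sources win on conflicts."""
    def deep_merge(base: dict, override: dict) -> dict:
        merged = dict(base)
        for key, value in override.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                merged[key] = deep_merge(merged[key], value)  # recurse into nested sections
            else:
                merged[key] = value  # scalar or list: override wins outright
        return merged
    return deep_merge(deep_merge(defaults, global_cfg), project_cfg)

cfg = merge_settings(
    {"model": "claude-sonnet-4-20250514", "theme": "light"},   # defaults
    {"theme": "dark"},                                         # global settings
    {"permissions": {"allowedTools": ["Read"]}},               # project settings
)
# cfg keeps the default model, takes the global theme, and adds project permissions
```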
4. Building a Custom Harness with the Claude Agent SDK
4-1. Python SDK
Anthropic's Claude Agent SDK is a framework for building custom AI agents.
Basic Agent Creation
```python
import anthropic

# Initialize the Anthropic client
client = anthropic.Anthropic()

# Define tools
tools = [
    {
        "name": "read_file",
        "description": "Read the contents of a file at the given path",
        "input_schema": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Absolute file path to read"
                }
            },
            "required": ["path"]
        }
    },
    {
        "name": "list_directory",
        "description": "List files and directories at the given path",
        "input_schema": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Directory path to list"
                }
            },
            "required": ["path"]
        }
    },
    {
        "name": "search_code",
        "description": "Search for a pattern in the codebase",
        "input_schema": {
            "type": "object",
            "properties": {
                "pattern": {
                    "type": "string",
                    "description": "Regex pattern to search for"
                },
                "file_type": {
                    "type": "string",
                    "description": "File extension filter (e.g., py, ts)"
                }
            },
            "required": ["pattern"]
        }
    }
]


def execute_tool(name: str, input_data: dict) -> str:
    """Execute a tool and return the result."""
    import os
    import subprocess

    if name == "read_file":
        try:
            with open(input_data["path"], "r") as f:
                return f.read()
        except FileNotFoundError:
            return f"Error: File not found: {input_data['path']}"

    elif name == "list_directory":
        try:
            entries = os.listdir(input_data["path"])
            return "\n".join(entries)
        except FileNotFoundError:
            return f"Error: Directory not found: {input_data['path']}"

    elif name == "search_code":
        try:
            cmd = ["grep", "-rn", input_data["pattern"], "."]
            if "file_type" in input_data:
                cmd.extend(["--include", f"*.{input_data['file_type']}"])
            result = subprocess.run(cmd, capture_output=True, text=True)
            return result.stdout or "No matches found"
        except Exception as e:
            return f"Error: {str(e)}"

    return f"Unknown tool: {name}"


def run_agent(user_message: str, max_turns: int = 10) -> str:
    """Run the agent loop."""
    system_prompt = """You are an expert code reviewer.
Analyze code for security vulnerabilities, performance issues,
and maintainability problems. Use the provided tools to
explore the codebase before giving your review."""

    messages = [{"role": "user", "content": user_message}]

    for turn in range(max_turns):
        # Call the LLM
        response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=4096,
            system=system_prompt,
            tools=tools,
            messages=messages,
        )

        # Task complete: return the final text response
        if response.stop_reason == "end_turn":
            for block in response.content:
                if hasattr(block, "text"):
                    return block.text
            return "Agent completed without text response"

        # Handle tool calls
        if response.stop_reason == "tool_use":
            messages.append({
                "role": "assistant",
                "content": response.content,
            })
            tool_results = []
            for block in response.content:
                if block.type == "tool_use":
                    result = execute_tool(block.name, block.input)
                    tool_results.append({
                        "type": "tool_result",
                        "tool_use_id": block.id,
                        "content": result,
                    })
            messages.append({
                "role": "user",
                "content": tool_results,
            })

    return "Agent reached maximum turns without completing"


# Run
if __name__ == "__main__":
    review = run_agent("Review the authentication module in src/auth/")
    print(review)
```
Adding a Permissions Model
```python
class PermissionModel:
    """Manage permissions for the AI agent."""

    def __init__(self):
        self.allowed_paths = ["/project/src/", "/project/tests/"]
        self.blocked_commands = ["rm", "sudo", "chmod"]
        self.max_file_size = 1_000_000  # 1MB

    def check_file_access(self, path: str) -> bool:
        """Check file access permission."""
        return any(path.startswith(p) for p in self.allowed_paths)

    def check_command(self, command: str) -> bool:
        """Check command execution permission."""
        return not any(cmd in command for cmd in self.blocked_commands)

    def validate_tool_call(self, tool_name: str, input_data: dict) -> tuple:
        """Validate a tool call. Returns (is_allowed, reason)."""
        if tool_name == "read_file":
            path = input_data.get("path", "")
            if not self.check_file_access(path):
                return False, f"Access denied: {path}"
        if tool_name == "execute_command":
            cmd = input_data.get("command", "")
            if not self.check_command(cmd):
                return False, f"Blocked command: {cmd}"
        return True, "Allowed"
```
Event Handling and Monitoring
```python
import time
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AgentEvent:
    """An event recording an agent action."""
    event_type: str  # tool_call, tool_result, llm_response, error
    timestamp: float = field(default_factory=time.time)
    tool_name: Optional[str] = None
    input_data: Optional[dict] = None
    output_data: Optional[str] = None
    tokens_used: int = 0
    duration_ms: float = 0


class AgentMonitor:
    """Monitor agent behavior."""

    def __init__(self):
        self.events: list[AgentEvent] = []
        self.total_tokens = 0
        self.total_cost = 0.0

    def log_event(self, event: AgentEvent):
        self.events.append(event)
        self.total_tokens += event.tokens_used

    def get_summary(self) -> dict:
        return {
            "total_events": len(self.events),
            "total_tokens": self.total_tokens,
            "tool_calls": sum(
                1 for e in self.events if e.event_type == "tool_call"
            ),
            "errors": sum(
                1 for e in self.events if e.event_type == "error"
            ),
            "total_duration_ms": sum(
                e.duration_ms for e in self.events
            ),
        }
```
4-2. TypeScript SDK
The same patterns apply when building a harness in TypeScript.
```typescript
import Anthropic from '@anthropic-ai/sdk'

// Tool definitions
const tools: Anthropic.Tool[] = [
  {
    name: 'query_database',
    description: 'Execute a SQL query against the analytics database',
    input_schema: {
      type: 'object' as const,
      properties: {
        query: {
          type: 'string',
          description: 'SQL query to execute (SELECT only)',
        },
        database: {
          type: 'string',
          description: 'Database name',
          enum: ['analytics', 'users', 'products'],
        },
      },
      required: ['query', 'database'],
    },
  },
  {
    name: 'create_chart',
    description: 'Create a data visualization chart',
    input_schema: {
      type: 'object' as const,
      properties: {
        chart_type: {
          type: 'string',
          enum: ['bar', 'line', 'pie', 'scatter'],
        },
        data: {
          type: 'string',
          description: 'JSON string of chart data',
        },
        title: {
          type: 'string',
          description: 'Chart title',
        },
      },
      required: ['chart_type', 'data', 'title'],
    },
  },
]

// Permissions model
class QueryPermissions {
  private readonly blockedPatterns = [
    /DROP\s/i,
    /DELETE\s/i,
    /UPDATE\s/i,
    /INSERT\s/i,
    /ALTER\s/i,
    /TRUNCATE\s/i,
  ]

  validateQuery(query: string): { allowed: boolean; reason: string } {
    for (const pattern of this.blockedPatterns) {
      if (pattern.test(query)) {
        return {
          allowed: false,
          reason: 'Blocked: destructive operation detected',
        }
      }
    }
    return { allowed: true, reason: 'Query is safe' }
  }
}

// Tool execution
async function executeTool(name: string, input: Record<string, unknown>): Promise<string> {
  const permissions = new QueryPermissions()

  if (name === 'query_database') {
    const query = input.query as string
    const validation = permissions.validateQuery(query)
    if (!validation.allowed) {
      return `Permission denied: ${validation.reason}`
    }
    // Simulated DB query
    return JSON.stringify({
      columns: ['date', 'revenue', 'users'],
      rows: [
        ['2025-01', 150000, 12000],
        ['2025-02', 165000, 13500],
        ['2025-03', 180000, 15000],
      ],
    })
  }

  if (name === 'create_chart') {
    return `Chart created: ${input.title} (${input.chart_type})`
  }

  return `Unknown tool: ${name}`
}

// Agent loop
async function runDataAnalysisAgent(question: string): Promise<string> {
  const client = new Anthropic()

  const systemPrompt = `You are a data analyst agent.
Analyze data by querying databases and creating visualizations.
Always validate data before making conclusions.
Provide insights in clear, business-friendly language.`

  const messages: Anthropic.MessageParam[] = [{ role: 'user', content: question }]
  const maxTurns = 10

  for (let turn = 0; turn < maxTurns; turn++) {
    const response = await client.messages.create({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 4096,
      system: systemPrompt,
      tools,
      messages,
    })

    if (response.stop_reason === 'end_turn') {
      for (const block of response.content) {
        if (block.type === 'text') {
          return block.text
        }
      }
      return 'Agent completed'
    }

    if (response.stop_reason === 'tool_use') {
      messages.push({
        role: 'assistant',
        content: response.content,
      })

      const toolResults: Anthropic.ToolResultBlockParam[] = []
      for (const block of response.content) {
        if (block.type === 'tool_use') {
          const result = await executeTool(block.name, block.input as Record<string, unknown>)
          toolResults.push({
            type: 'tool_result',
            tool_use_id: block.id,
            content: result,
          })
        }
      }

      messages.push({
        role: 'user',
        content: toolResults,
      })
    }
  }

  return 'Agent reached maximum turns'
}

// Execute
async function main() {
  const result = await runDataAnalysisAgent('What was the revenue trend in Q1 2025?')
  console.log(result)
}

main().catch(console.error)
```
5. Comparing AI Harness Frameworks
Framework Comparison
| Framework | Language | Core Concept | Harness Level | Learning Curve |
|---|---|---|---|---|
| Claude Agent SDK | Python/TS | Tools, permissions, events | Most complete | Medium |
| LangGraph | Python | Graph-based workflows | High | High |
| CrewAI | Python | Multi-agent collaboration | Medium | Low |
| AutoGen | Python | Agent conversations | Medium | Medium |
| Semantic Kernel | C#/Python | MS ecosystem integration | Medium | Medium |
| DSPy | Python | Prompt optimization | Low | High |
Claude Agent SDK
Strengths:
- Optimal compatibility with Anthropic models
- Built-in permissions model, tool use, and event system
- Production-level stability
Harness implementation:
- System Prompt + Tools + Permissions integrated at the SDK level
- Agent loop is built-in; no separate implementation needed
LangGraph
Strengths:
- Visualize complex workflows as graphs
- State management, conditional branching, parallel execution
- Checkpoint and rollback capabilities
Harness implementation:
```python
from langgraph.graph import StateGraph, END
from typing import TypedDict


class ReviewState(TypedDict):
    code_diff: str
    security_issues: list
    performance_issues: list
    review_summary: str


def analyze_security(state: ReviewState) -> ReviewState:
    """Analyze security vulnerabilities."""
    state["security_issues"] = ["SQL injection in login.py:42"]
    return state


def analyze_performance(state: ReviewState) -> ReviewState:
    """Analyze performance issues."""
    state["performance_issues"] = ["N+1 query in users.py:87"]
    return state


def generate_review(state: ReviewState) -> ReviewState:
    """Generate the final review."""
    issues = state["security_issues"] + state["performance_issues"]
    state["review_summary"] = f"Found {len(issues)} issues"
    return state


# Build the graph
workflow = StateGraph(ReviewState)
workflow.add_node("security", analyze_security)
workflow.add_node("performance", analyze_performance)
workflow.add_node("review", generate_review)

workflow.set_entry_point("security")
workflow.add_edge("security", "performance")
workflow.add_edge("performance", "review")
workflow.add_edge("review", END)

app = workflow.compile()
```
CrewAI
Strengths:
- Intuitive multi-agent model (role-based)
- "Agent = Role + Goal + Backstory" pattern
- Natural inter-agent collaboration
Harness implementation:
from crewai import Agent, Task, Crew

security_reviewer = Agent(
    role="Security Reviewer",
    goal="Find all security vulnerabilities in the code",
    backstory="You are a senior security engineer with 15 years "
              "of experience in application security.",
    tools=[],
)

performance_reviewer = Agent(
    role="Performance Reviewer",
    goal="Identify performance bottlenecks and optimization opportunities",
    backstory="You are a performance engineering specialist "
              "who has optimized systems serving millions of users.",
    tools=[],
)

review_task = Task(
    description="Review the authentication module for security issues",
    agent=security_reviewer,
    expected_output="List of security vulnerabilities with severity ratings",
)

crew = Crew(
    agents=[security_reviewer, performance_reviewer],
    tasks=[review_task],
)
result = crew.kickoff()
6. Harness Design Patterns
Let us examine the key patterns you can apply when designing an AI harness.
6-1. Router Pattern
Analyze the input and route it to the appropriate specialized agent.
User Input
    |
    v
[Router Agent] -> "code review request"   -> Code Review Agent
               -> "data analysis request" -> Data Analysis Agent
               -> "documentation request" -> Documentation Agent
               -> "general question"      -> General Assistant
Best for:
- Handling diverse request types
- When specialized agents are optimized for each domain
- When a single agent cannot cover all domains
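A minimal router can be sketched as plain Python. The handler functions and the keyword table below are illustrative stand-ins; a production router would typically ask an LLM to classify the request rather than match keywords.

```python
from typing import Callable

# Illustrative specialized handlers; in a real harness each would be
# a full agent with its own system prompt and tools.
def code_review_agent(text: str) -> str:
    return f"[code-review] {text}"

def data_analysis_agent(text: str) -> str:
    return f"[data-analysis] {text}"

def general_assistant(text: str) -> str:
    return f"[general] {text}"

# Keyword-to-agent routing table (assumed keywords, for demonstration)
ROUTES: list[tuple[tuple[str, ...], Callable[[str], str]]] = [
    (("review", "pr"), code_review_agent),
    (("analyze", "dataset", "csv"), data_analysis_agent),
]

def route(user_input: str) -> str:
    """Send the input to the first agent whose keywords match."""
    lowered = user_input.lower()
    for keywords, agent in ROUTES:
        if any(k in lowered for k in keywords):
            return agent(user_input)
    return general_assistant(user_input)  # fallback for general questions
```

Swapping the keyword table for an LLM classification call keeps the same structure: classify first, then dispatch.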
6-2. Orchestrator-Worker Pattern
A central orchestrator splits work and distributes it to workers.
[Orchestrator]
    |
    +--- "Review file A" -> [Worker 1] -> Result A
    |
    +--- "Review file B" -> [Worker 2] -> Result B
    |
    +--- "Review file C" -> [Worker 3] -> Result C
    |
    v
[Orchestrator] -> Integrate A + B + C -> Final Review
Best for:
- Processing large tasks in parallel
- When work can be split into independent units
- When integration logic is needed for the results
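The fan-out/fan-in shape above can be sketched with a thread pool. The `review_file` worker here is a stub standing in for a real sub-agent call; only the orchestration structure is the point.

```python
from concurrent.futures import ThreadPoolExecutor

def review_file(filename: str) -> str:
    # Stand-in for a worker agent reviewing one file
    return f"{filename}: no critical issues"

def orchestrate(files: list[str]) -> str:
    # Fan out one worker per file in parallel, then integrate the results
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = list(pool.map(review_file, files))
    return "Final review:\n" + "\n".join(results)
```

Because each worker is independent, the orchestrator only needs to split the input and merge the outputs; no worker sees another worker's state.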
6-3. Pipeline Pattern
Sequential processing where each stage's output becomes the next stage's input.
[Analysis Agent] -> Code analysis results
        |
        v
[Planning Agent] -> Refactoring plan
        |
        v
[Execution Agent] -> Code modifications
        |
        v
[Validation Agent] -> Test execution + result verification
Best for:
- Tasks with a clear sequence
- When each step depends on the previous step's output
- When stage-by-stage quality validation is needed
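The stages above reduce to function composition: each stage takes the previous stage's output. The four stub stages below are illustrative placeholders for real agent calls.

```python
def analyze(code: str) -> dict:
    # Stage 1: stand-in for the Analysis Agent
    return {"code": code, "findings": ["long function"]}

def plan(state: dict) -> dict:
    # Stage 2: turn findings into a refactoring plan
    state["plan"] = [f"fix: {f}" for f in state["findings"]]
    return state

def execute(state: dict) -> dict:
    # Stage 3: apply each planned step
    state["changes"] = [f"applied {step}" for step in state["plan"]]
    return state

def validate(state: dict) -> dict:
    # Stage 4: verify something was actually changed
    state["validated"] = len(state["changes"]) > 0
    return state

def run_pipeline(code: str) -> dict:
    """Run the four stages in order, threading state through each."""
    state = analyze(code)
    for stage in (plan, execute, validate):
        state = stage(state)
    return state
```

Inserting a quality gate between stages is just another function in the tuple.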
6-4. Evaluator-Optimizer Pattern
Evaluate results and retry if they fall below standards.
[Generator Agent] -> Draft
        |
        v
[Evaluator Agent] -> Quality score (1-10)
        |
        +--- Score >= 8 -> Done
        |
        +--- Score < 8  -> Send feedback to Generator for revision
                 |
                 v
        [Generator Agent] -> Revised draft
                 |
                 v
        [Evaluator Agent] -> Re-evaluate
                 ...
Best for:
- When output quality is paramount
- When quality can be objectively measured
- When iterative improvement is possible
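The generate-evaluate loop can be sketched as follows. Both `generate` and `evaluate` are toy stand-ins (a real evaluator would be an LLM-as-Judge call, covered in section 8); the loop structure with a score threshold and a retry cap is the pattern itself.

```python
def generate(draft: str, feedback: str = "") -> str:
    # Stand-in generator: a revision just records the feedback it received
    return draft + (f" [revised: {feedback}]" if feedback else "")

def evaluate(text: str) -> int:
    # Stand-in evaluator: each revision raises the score by 2 from a base of 5
    return 5 + 2 * text.count("[revised")

def generate_until_good(draft: str, threshold: int = 8,
                        max_rounds: int = 5) -> tuple[str, int]:
    """Regenerate with feedback until the score clears the threshold."""
    text = generate(draft)
    for _ in range(max_rounds):
        score = evaluate(text)
        if score >= threshold:
            return text, score
        text = generate(text, feedback="be more specific")
    return text, evaluate(text)  # give up after max_rounds
```

The `max_rounds` cap matters in practice: without it, a draft the evaluator never likes loops forever and burns tokens.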
6-5. Guardrails Pattern
Ensure safe AI behavior through input/output validation.
User Input
    |
    v
[Input Guardrail]  -> Harmful content filtering
    |                 Prompt injection detection
    |                 Input normalization
    v
[AI Agent]         -> Perform task
    |
    v
[Output Guardrail] -> PII masking
    |                 Hallucination detection
    |                 Format validation
    v
Final Response
Implementation example:
import re


class InputGuardrail:
    """Validate and sanitize inputs."""

    def validate(self, user_input: str) -> tuple[bool, str]:
        # Detect common prompt-injection phrases
        injection_patterns = [
            "ignore previous instructions",
            "system prompt",
            "you are now",
            "disregard all",
        ]
        for pattern in injection_patterns:
            if pattern in user_input.lower():
                return False, "Potential prompt injection detected"

        # Enforce an input length limit
        if len(user_input) > 10000:
            return False, "Input too long"

        return True, "Valid input"


class OutputGuardrail:
    """Validate and sanitize outputs."""

    def validate(self, output: str) -> str:
        # Mask email addresses
        output = re.sub(
            r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}",
            "[EMAIL_REDACTED]",
            output,
        )
        # Mask phone numbers
        output = re.sub(
            r"\b\d{3}[-.]?\d{4}[-.]?\d{4}\b",
            "[PHONE_REDACTED]",
            output,
        )
        return output
7. Hands-On: Building Your Own Code Review Harness
Let us build a working code review harness from scratch.
7-1. Project Structure
code-review-harness/
  src/
    agent.py             # Agent loop
    tools.py             # Tool definitions and execution
    permissions.py       # Permissions model
    guardrails.py        # Input/output validation
    memory.py            # Memory management
    monitor.py           # Monitoring
  skills/
    code-review.md       # Code review skill
    security-audit.md    # Security audit skill
  config/
    settings.json        # Configuration file
  tests/
    test_agent.py        # Agent tests
  README.md
  requirements.txt
7-2. System Prompt Design
You are an expert code reviewer specializing in Python
and TypeScript applications.
## Your Responsibilities
1. Security: Find vulnerabilities (injection, XSS, auth issues)
2. Performance: Identify bottlenecks (N+1 queries, memory leaks)
3. Maintainability: Check code quality (naming, structure, SOLID)
4. Testing: Verify test coverage and quality
## Rules
- NEVER modify code directly. Only suggest changes.
- Always explain WHY something is an issue, not just WHAT.
- Provide code examples for every suggestion.
- Rate issues by severity: Critical, High, Medium, Low.
## Output Format
For each issue found:
[SEVERITY] file:line - Description
Suggestion: How to fix it
Example: Code snippet showing the fix
7-3. Tool Implementation
# tools.py
import subprocess


def git_diff(base_branch: str = "main") -> str:
    """Get the diff between the current branch and the base branch."""
    result = subprocess.run(
        ["git", "diff", f"{base_branch}...HEAD"],
        capture_output=True,
        text=True,
    )
    return result.stdout


def read_file(path: str) -> str:
    """Read file contents with line numbers."""
    try:
        with open(path, "r") as f:
            lines = f.readlines()
        numbered = [f"{i+1}: {line}" for i, line in enumerate(lines)]
        return "".join(numbered)
    except FileNotFoundError:
        return f"File not found: {path}"


def search_pattern(pattern: str, directory: str = ".") -> str:
    """Search for a pattern in the codebase."""
    result = subprocess.run(
        ["grep", "-rn", pattern, directory,
         "--include=*.py", "--include=*.ts",
         "--include=*.tsx", "--include=*.js"],
        capture_output=True,
        text=True,
    )
    return result.stdout or "No matches found"


def run_tests(test_path: str = "") -> str:
    """Run the test suite."""
    cmd = ["python", "-m", "pytest", "-v"]
    if test_path:
        cmd.append(test_path)
    result = subprocess.run(cmd, capture_output=True, text=True)
    return f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}"


def get_file_history(path: str, count: int = 5) -> str:
    """Get the git history of a file."""
    result = subprocess.run(
        ["git", "log", f"-{count}", "--oneline", "--", path],
        capture_output=True,
        text=True,
    )
    return result.stdout
7-4. Skill Definition
# skills/code-review.md
---
trigger: "when user asks to review code or a PR"
description: "Comprehensive code review workflow"
---
## Code Review Workflow
### Step 1: Gather Context
1. Run git diff to see all changes
2. Read the changed files completely
3. Check project conventions (CLAUDE.md, .eslintrc)
4. Identify the purpose of the changes
### Step 2: Security Analysis
Check for these common vulnerabilities:
- SQL injection (raw queries, string interpolation)
- XSS (unescaped user input in HTML)
- Authentication issues (weak tokens, missing checks)
- Authorization issues (missing role checks)
- Sensitive data exposure (API keys, passwords in code)
### Step 3: Performance Analysis
Check for these common issues:
- N+1 database queries
- Missing database indexes
- Unnecessary re-renders (React)
- Memory leaks (unclosed resources)
- Blocking operations in async code
### Step 4: Code Quality Analysis
Check for these issues:
- Unclear naming conventions
- Functions exceeding 50 lines
- Missing error handling
- Duplicated code
- SOLID principle violations
### Step 5: Generate Review Report
Format each issue as:
[SEVERITY] file:line - Description
Use severity levels: Critical, High, Medium, Low, Info
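A harness has to load skill files like the one above and decide when each applies. The loader below is a hypothetical sketch, not part of any particular SDK: `parse_skill` splits the `---` frontmatter from the body, and `matching_skills` does a naive substring match on the trigger (the simplified `SKILL` string and its one-word trigger are illustrative; a real harness would usually let the model decide which skill fits).

```python
def parse_skill(skill_text: str) -> dict:
    """Split a skill file into frontmatter metadata and markdown body."""
    _, frontmatter, body = skill_text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip().strip('"')
    return {"meta": meta, "body": body.strip()}

# Simplified example skill (illustrative; real triggers are full sentences)
SKILL = """---
trigger: "review"
description: "Comprehensive code review workflow"
---
## Code Review Workflow
"""

def matching_skills(user_request: str, skills: list[dict]) -> list[dict]:
    # Naive trigger match on the request text
    return [s for s in skills if s["meta"]["trigger"] in user_request.lower()]
```

When a skill matches, its body is injected into the context so the model follows the workflow it describes.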
7-5. Permissions Model
# permissions.py

class ReviewPermissions:
    """Define permissions for the code review agent."""

    # Read-only: no code modification allowed
    ALLOWED_TOOLS = [
        "git_diff",
        "read_file",
        "search_pattern",
        "run_tests",
        "get_file_history",
    ]

    BLOCKED_TOOLS = [
        "write_file",
        "delete_file",
        "execute_command",
        "deploy",
    ]

    ALLOWED_PATHS = [
        "src/",
        "tests/",
        "lib/",
        "config/",
    ]

    BLOCKED_PATHS = [
        ".env",
        "secrets/",
        "credentials/",
        ".git/",
    ]

    def is_tool_allowed(self, tool_name: str) -> bool:
        if tool_name in self.BLOCKED_TOOLS:
            return False
        return tool_name in self.ALLOWED_TOOLS

    def is_path_allowed(self, path: str) -> bool:
        # Block lists take precedence over allow lists
        for blocked in self.BLOCKED_PATHS:
            if blocked in path:
                return False
        return any(path.startswith(p) for p in self.ALLOWED_PATHS)
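Wiring the permissions model into the agent loop might look like the sketch below. Everything here is illustrative: the cut-down `ReviewPermissions` mirrors the class above with a shorter tool list, and `TOOLS` stands in for the real functions from tools.py. The key design point is that every tool call passes through one chokepoint that denies by default.

```python
class ReviewPermissions:
    # Cut-down stand-in mirroring the full permissions class
    ALLOWED_TOOLS = ["git_diff", "read_file"]
    BLOCKED_TOOLS = ["write_file"]

    def is_tool_allowed(self, tool_name: str) -> bool:
        if tool_name in self.BLOCKED_TOOLS:
            return False
        return tool_name in self.ALLOWED_TOOLS

# Registry mapping tool names to callables (stubs for this sketch)
TOOLS = {
    "git_diff": lambda **kw: "diff --git a/x.py b/x.py",
    "read_file": lambda path="": f"1: contents of {path}",
}

def dispatch(tool_name: str, perms: ReviewPermissions, **kwargs) -> str:
    """Single chokepoint: check permissions before every tool call."""
    if not perms.is_tool_allowed(tool_name):
        return f"DENIED: {tool_name} is not permitted for this agent"
    return TOOLS[tool_name](**kwargs)
```

Because the check lives in `dispatch` rather than in each tool, adding a new tool cannot accidentally bypass the permissions model.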
8. Harness Evaluation and Monitoring
Harness Quality Metrics
Key metrics for measuring AI harness quality:
| Metric | Description | Target |
|---|---|---|
| Tool use accuracy | Correct tool called with correct parameters | 95%+ |
| Task completion rate | Successfully completing user requests | 90%+ |
| Safety violation frequency | Bypassing guardrails or violating permissions | 0% |
| Average turns | Mean agent-loop iterations to completion | 5 or fewer |
| Cost efficiency | Average token usage and API cost per task | Varies by task |
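The table's metrics can be aggregated from per-run logs. The record fields below (`completed`, `turns`, `tool_calls`, `correct_tool_calls`, `violations`) are an assumed log schema for this sketch, not a standard format.

```python
def harness_metrics(runs: list[dict]) -> dict:
    """Aggregate harness quality metrics from per-run log records.

    Each record is assumed to carry: completed (bool), turns (int),
    tool_calls (int), correct_tool_calls (int), violations (int).
    """
    total_calls = sum(r["tool_calls"] for r in runs)
    correct_calls = sum(r["correct_tool_calls"] for r in runs)
    return {
        "tool_use_accuracy": correct_calls / total_calls,
        "task_completion_rate": sum(r["completed"] for r in runs) / len(runs),
        "safety_violations": sum(r["violations"] for r in runs),
        "average_turns": sum(r["turns"] for r in runs) / len(runs),
    }
```

Comparing these numbers against the targets in the table turns harness quality from a vague impression into a regression test.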
LLM-as-Judge
Use another LLM to evaluate the quality of the agent's output:
def evaluate_review_quality(
    original_code: str,
    review_output: str,
    evaluator_model: str = "claude-sonnet-4-20250514",
) -> dict:
    """Evaluate code review quality using an LLM."""
    evaluation_prompt = f"""
Evaluate this code review on a scale of 1-10 for each criterion:

1. Accuracy: Are the identified issues real problems?
2. Completeness: Were important issues missed?
3. Actionability: Are suggestions specific and implementable?
4. Communication: Is the review clear and constructive?

Original Code:
{original_code}

Review Output:
{review_output}

Respond in JSON with keys:
accuracy, completeness, actionability, communication, overall, feedback
"""
    # Send evaluation_prompt to evaluator_model and parse the JSON response
    # (actual API call code omitted)
    evaluation_result: dict = {}
    return evaluation_result
A/B Testing
Compare versions when changing prompts or tools:
- Establish baseline: Measure current harness performance
- Apply changes: New system prompt, tools, or skills
- Test with identical inputs: Run both versions on the same test cases
- Compare metrics: Accuracy, completion rate, cost
- Statistical significance: Verify the difference is not due to chance
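The last step can be made concrete with a standard two-proportion z-test on task completion rates. This is textbook statistics rather than anything harness-specific: a |z| above roughly 1.96 means the difference is significant at the 5% level.

```python
import math

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> float:
    """z statistic for the difference between two completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

For example, going from 50/100 to 80/100 completed tasks yields |z| well above 1.96, so the new harness version really is better; a jump from 90/100 to 96/100 on the same sample size does not clear the bar, so more test cases are needed before concluding anything.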
Cost Tracking
class CostTracker:
    """Track API call costs for the agent."""

    # USD per token ($3 input / $15 output per million tokens for Sonnet,
    # $15 / $75 per million tokens for Opus)
    PRICING = {
        "claude-sonnet-4-20250514": {
            "input": 0.003 / 1000,
            "output": 0.015 / 1000,
        },
        "claude-opus-4-20250514": {
            "input": 0.015 / 1000,
            "output": 0.075 / 1000,
        },
    }

    def __init__(self):
        self.total_input_tokens = 0
        self.total_output_tokens = 0
        self.model = "claude-sonnet-4-20250514"

    def add_usage(self, input_tokens: int, output_tokens: int):
        self.total_input_tokens += input_tokens
        self.total_output_tokens += output_tokens

    def get_total_cost(self) -> float:
        pricing = self.PRICING[self.model]
        return (
            self.total_input_tokens * pricing["input"]
            + self.total_output_tokens * pricing["output"]
        )

    def get_report(self) -> str:
        cost = self.get_total_cost()
        return (
            f"Input tokens: {self.total_input_tokens:,}\n"
            f"Output tokens: {self.total_output_tokens:,}\n"
            f"Total cost: ${cost:.4f}"
        )
9. 2025-2026 Harness Trends
Model-Native Tool Calling
Early LLMs needed to be taught tool usage through prompts. The latest 2025 models have internalized tool usage during training. Claude Sonnet 4 and GPT-4o natively support Function Calling.
Impact on harnesses:
- Accurate tool usage even with concise descriptions
- Autonomous planning of complex tool combinations
- Improved error recovery
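Concretely, "concise descriptions" means a tool can be declared as a short JSON-schema spec. The example below uses the Anthropic Messages API tool format (OpenAI's function-calling format is analogous); the `git_diff` tool itself echoes the hands-on section and is illustrative.

```python
# Tool definition in the Anthropic Messages API style. With model-native
# tool calling, a brief description plus a JSON schema for the inputs is
# typically all the model needs to call the tool correctly.
git_diff_tool = {
    "name": "git_diff",
    "description": "Get the diff between the current branch and a base branch.",
    "input_schema": {
        "type": "object",
        "properties": {
            "base_branch": {
                "type": "string",
                "description": "Branch to diff against (default: main)",
            },
        },
        "required": [],
    },
}
```

Earlier harnesses had to embed long few-shot prompt examples to teach this format; now the schema alone suffices.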
The Rise of Autonomous Agents
2025 is the breakout year for autonomous agents:
- Devin: An AI software engineer by Cognition
- Claude Code: Anthropic's CLI-based coding agent
- OpenAI Codex (CLI): OpenAI's coding agent
- GitHub Copilot Agent Mode: GitHub's agent mode
All of these use sophisticated harnesses. The differentiator is not the model's capability but the harness design.
Multimodal Harnesses
Harnesses that once handled only text are expanding to images, audio, and video:
- Screenshot analysis: UI bug detection, design review
- Diagram understanding: Convert architecture diagrams to code
- Voice interfaces: Coding instructions via voice
- Video analysis: Analyze user session recordings
Agent-to-Agent Protocols
MCP (Model Context Protocol) is evolving into a standard communication protocol between AI agents:
- Agent A uses Agent B's tools
- Task delegation and result sharing between agents
- Heterogeneous agent collaboration (Claude + GPT + Gemini)
Harness Standardization
Currently each framework implements harnesses differently, but standardization efforts are underway:
- MCP: Tool connection standard
- Agent Protocol: Agent interface standard
- OpenAPI for Agents: API-based agent definitions
A common "harness standard" may well emerge by 2026, letting the same building blocks work unchanged across frameworks.
10. Quiz
Q1: In the AI harness metaphor, what corresponds to the "wild horse"?
Answer: The raw LLM (foundation models like GPT-4, Claude, Gemini)
A wild horse has tremendous power (intelligence) but without a harness, that power cannot be directed. Similarly, a raw LLM cannot perform practical tasks without tools, context, permissions, skills, and other harness components.
Q2: List all 7 building blocks of an AI harness.
Answer:
- System Prompt
- Tools
- Context
- Skills
- Hooks
- Permissions
- Memory
These seven components combine to transform a raw LLM into a practical AI agent.
Q3: What is the difference between PreToolUse hooks and PostToolUse hooks?
Answer:
- PreToolUse: Runs before tool execution. Used for input validation, permission checking, and blocking dangerous commands. Can abort tool execution on validation failure.
- PostToolUse: Runs after tool execution. Used for result validation, auto-formatting (prettier, eslint), and auto-testing.
This is analogous to Git's pre-commit (validate before commit) and post-commit (notify after commit) hooks.
Q4: What is the difference between the Orchestrator-Worker pattern and the Pipeline pattern?
Answer:
- Orchestrator-Worker pattern: A central orchestrator splits work and distributes it to multiple workers in parallel. Each worker operates independently, and the orchestrator integrates the results. Example: reviewing multiple files simultaneously.
- Pipeline pattern: Work is processed sequentially. Each stage's output becomes the next stage's input. Example: analysis then planning then execution then validation.
The key difference is parallel execution (Orchestrator-Worker) vs sequential execution (Pipeline).
Q5: What did the center of gravity in AI engineering shift to in 2025, away from "model training"? And why?
Answer: Model Orchestration
Reasons:
- Foundation models became powerful enough that fine-tuning is unnecessary for most tasks
- The same model produces vastly different results depending on harness design
- Enterprise needs for security, auditing, permission management, and cost tracking became essential
- Autonomous agents like Devin and Claude Code differentiate through sophisticated harness design
The key question shifted from "which model to use?" to "how to wrap and control the model?"
11. References
- Anthropic, "Claude Agent SDK Documentation," 2025
- Anthropic, "Model Context Protocol (MCP) Specification," 2025
- Anthropic, "Claude Code: An Agentic Coding Tool," 2025
- Harrison Chase, "LangGraph: Building Stateful Agent Workflows," LangChain Blog, 2025
- CrewAI, "Multi-Agent Orchestration Framework Documentation," 2025
- Microsoft, "AutoGen: Enabling Next-Gen LLM Applications," 2025
- Microsoft, "Semantic Kernel Documentation," 2025
- OpenAI, "Function Calling Guide," 2025
- Anthropic, "Building Effective Agents," Research Blog, 2025
- Lilian Weng, "LLM Powered Autonomous Agents," OpenAI Blog, 2024
- Shunyu Yao et al., "ReAct: Synergizing Reasoning and Acting in Language Models," ICLR, 2023
- Andrew Ng, "Agentic Design Patterns," DeepLearning.AI, 2025
- Simon Willison, "Building AI-Powered Tools with LLMs," Blog, 2025
- Chip Huyen, "Building LLM Applications for Production," 2025
- Devin AI, "How Devin Works: Architecture and Design," Cognition Blog, 2025
- GitHub, "Copilot Agent Mode: Architecture Deep Dive," GitHub Blog, 2025