commit c24786cfc1ae4be93c5209ddba9fb2b7cc210688
Author: Ben Sima <ben@bensima.com>
Date: Thu Jan 1 00:55:59 2026
Migrate subagent roles to skills (t-313)
Create skills that capture the essence of the legacy SubagentRoles:
- web-research: Quick web research (was WebCrawler)
- code-review: Code review with Reviewer role (was CodeReviewer)
- data-extraction: Structured data extraction (was DataExtractor)
- research: Deep research combining web + local (was Researcher)
Each skill documents the process, guidelines, and output format.
The code-review skill uses <role:reviewer> for role injection.
Task-Id: t-313
diff --git a/Omni/Agent/Skills/shared/code-review/SKILL.md b/Omni/Agent/Skills/shared/code-review/SKILL.md
new file mode 100644
index 00000000..8847d6a2
--- /dev/null
+++ b/Omni/Agent/Skills/shared/code-review/SKILL.md
@@ -0,0 +1,43 @@
+---
+name: code-review
+description: Review code changes for quality, correctness, and conventions. Use when asked to review a PR, diff, or code changes.
+---
+
+# Code Review
+
+<role:reviewer>
+Review the code with focus on:
+
+1. **Correctness**: Does it do what it's supposed to?
+2. **Testing**: Are there tests? Do they cover edge cases?
+3. **Conventions**: Does it follow project patterns?
+4. **Security**: Any obvious vulnerabilities?
+5. **Performance**: Any obvious inefficiencies?
+
+Use `search_codebase` to understand existing patterns.
+Use `run_bash` to run tests if needed.
+
+Provide specific, actionable feedback with file:line references.
+</role>
+
+## Review Format
+
+````
+## Summary
+[One sentence overall assessment]
+
+## Issues
+
+### [Critical/Major/Minor]: [Title]
+File: path/to/file.hs:42
+```haskell
+[problematic code]
+```
+Suggestion: [How to fix]
+
+## Good Things
+- [What's done well]
+
+## Verdict
+APPROVE / REQUEST_CHANGES / REJECT
+````
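+
+For illustration, the template above can be produced mechanically. This is a minimal sketch; the `Issue` type and `render_issue` helper are hypothetical and not part of the skill itself:
+
+```python
+from dataclasses import dataclass
+
+# Hypothetical helper for illustrating the review format above;
+# the skill only defines the textual template, not this code.
+@dataclass
+class Issue:
+    severity: str    # "Critical", "Major", or "Minor"
+    title: str
+    file: str        # e.g. "path/to/file.hs"
+    line: int
+    snippet: str
+    suggestion: str
+
+def render_issue(issue: Issue) -> str:
+    """Render one issue in the skill's review format, with a file:line reference."""
+    return (
+        f"### {issue.severity}: {issue.title}\n"
+        f"File: {issue.file}:{issue.line}\n"
+        f"```haskell\n{issue.snippet}\n```\n"
+        f"Suggestion: {issue.suggestion}"
+    )
+```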
diff --git a/Omni/Agent/Skills/shared/data-extraction/SKILL.md b/Omni/Agent/Skills/shared/data-extraction/SKILL.md
new file mode 100644
index 00000000..39a9f83c
--- /dev/null
+++ b/Omni/Agent/Skills/shared/data-extraction/SKILL.md
@@ -0,0 +1,53 @@
+---
+name: data-extraction
+description: Extract structured data from web pages or documents. Use when you need to pull specific information into a structured format.
+---
+
+# Data Extraction
+
+## Process
+
+1. Identify the source (URL or file path)
+2. Determine what data to extract (fields, format)
+3. Read the source content
+4. Extract into structured format (JSON, table, etc.)
+
+## Guidelines
+
+- Be precise about what fields to extract
+- Handle missing data gracefully (use null)
+- Validate extracted data types
+- Include metadata about extraction (source, timestamp)
+
+## Output Format
+
+Return extracted data as JSON:
+```json
+{
+ "source": "url or file path",
+ "extracted_at": "ISO timestamp",
+ "data": {
+ "field1": "value1",
+ "field2": ["list", "values"],
+ "field3": null
+ }
+}
+```
+
+## Example: Extract Contact Info
+
+Input: "Extract contact information from https://example.com/about"
+
+Output:
+```json
+{
+ "source": "https://example.com/about",
+ "extracted_at": "2024-01-15T10:30:00Z",
+ "data": {
+ "name": "Example Corp",
+ "email": "contact@example.com",
+ "phone": "+1-555-0123",
+ "address": "123 Main St, City, ST 12345"
+ }
+}
+```
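+
+The guidelines above (null for missing fields, metadata about the extraction) can be sketched as follows. The `wrap_extraction` helper is hypothetical, shown only to make the envelope concrete:
+
+```python
+import json
+from datetime import datetime, timezone
+
+def wrap_extraction(source: str, data: dict, expected_fields: list) -> str:
+    """Wrap extracted data in the output envelope; missing fields become null."""
+    # Fill every expected field, using None (JSON null) when absent.
+    filled = {field: data.get(field) for field in expected_fields}
+    envelope = {
+        "source": source,
+        "extracted_at": datetime.now(timezone.utc).isoformat(),
+        "data": filled,
+    }
+    return json.dumps(envelope, indent=2)
+```
+
+For example, `wrap_extraction("https://example.com/about", {"name": "Example Corp"}, ["name", "email"])` yields `"email": null` rather than omitting the field.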
diff --git a/Omni/Agent/Skills/shared/research/SKILL.md b/Omni/Agent/Skills/shared/research/SKILL.md
new file mode 100644
index 00000000..cfd65299
--- /dev/null
+++ b/Omni/Agent/Skills/shared/research/SKILL.md
@@ -0,0 +1,72 @@
+---
+name: research
+description: Deep research on a topic combining web sources and local knowledge. Use for thorough investigation requiring multiple sources.
+---
+
+# Research
+
+## Process
+
+1. **Clarify** - What exactly needs to be researched?
+2. **Local first** - Search codebase/files for existing knowledge
+3. **Web search** - Find external sources to fill gaps
+4. **Cross-reference** - Verify claims across multiple sources
+5. **Synthesize** - Combine findings into coherent answer
+
+## Guidelines
+
+- Start with what you already know from the codebase
+- Use multiple search queries with different phrasings
+- Prefer authoritative sources (official docs, reputable sites)
+- Note confidence level for each finding
+- Acknowledge limitations and uncertainties
+
+## Output Format
+
+```json
+{
+ "question": "The research question",
+ "summary": "Brief overall summary (2-3 sentences)",
+ "confidence": 0.85,
+ "findings": [
+ {
+ "claim": "Key insight or finding",
+ "source_url": "https://...",
+ "source_name": "Official Documentation"
+ },
+ {
+ "claim": "Another finding",
+ "source_url": "local:Omni/Something.hs",
+ "source_name": "Codebase"
+ }
+ ],
+ "caveats": "Any limitations, uncertainties, or conflicting information",
+ "further_research": "Suggested follow-up questions if applicable"
+}
+```
+
+## Example
+
+Question: "How does the deployer handle config changes?"
+
+```json
+{
+ "question": "How does the deployer handle config changes?",
+ "summary": "The deployer only regenerates unit files when the nix store path changes. Config-only changes require a workaround.",
+ "confidence": 0.9,
+ "findings": [
+ {
+ "claim": "Unit files are regenerated based on store path comparison",
+ "source_url": "local:Omni/Deploy/Deployer.hs",
+ "source_name": "Deployer source"
+ },
+ {
+ "claim": "Known limitation: config changes without code changes don't trigger updates",
+ "source_url": "local:AGENTS.md",
+ "source_name": "Project docs"
+ }
+ ],
+ "caveats": "Workaround exists: make a code change or manually restart",
+ "further_research": null
+}
+```
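+
+A report in the format above can be checked mechanically. This validator is a hypothetical sketch, not part of the skill, assuming the schema shown in Output Format:
+
+```python
+import json
+
+REQUIRED_KEYS = {"question", "summary", "confidence", "findings",
+                 "caveats", "further_research"}
+
+def check_report(raw: str) -> list:
+    """Return a list of schema problems; an empty list means the report matches."""
+    report = json.loads(raw)
+    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - report.keys())]
+    # Confidence is a probability-like score between 0 and 1.
+    if not 0.0 <= report.get("confidence", -1) <= 1.0:
+        problems.append("confidence must be in [0, 1]")
+    # Every finding needs a claim and a source.
+    for i, finding in enumerate(report.get("findings", [])):
+        for key in ("claim", "source_url", "source_name"):
+            if key not in finding:
+                problems.append(f"finding {i} missing {key}")
+    return problems
+```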
diff --git a/Omni/Agent/Skills/shared/web-research/SKILL.md b/Omni/Agent/Skills/shared/web-research/SKILL.md
new file mode 100644
index 00000000..8b9b828b
--- /dev/null
+++ b/Omni/Agent/Skills/shared/web-research/SKILL.md
@@ -0,0 +1,32 @@
+---
+name: web-research
+description: Research a topic using web search. Use when you need to find current information, verify facts, or explore a topic online.
+---
+
+# Web Research
+
+## Process
+
+1. Formulate search queries - be specific, try multiple phrasings
+2. Use `web_search` to find relevant sources
+3. Use `read_webpages` to extract content from top 3-5 results
+4. Synthesize findings, always cite sources with URLs
+
+## Guidelines
+
+- Be EFFICIENT - extract key facts, don't save full page contents
+- Limit to 3-5 most relevant sources
+- Work iteratively: search → skim → read best 2-3 → synthesize
+- ALWAYS cite sources - every claim needs a URL
+- Stop when you have sufficient information
+
+## Example Output
+
+```
+Based on my research:
+
+[Key finding 1] - Source: https://example.com/article1
+[Key finding 2] - Source: https://example.com/article2
+
+Summary: [Brief synthesis of findings]
+```
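+
+The search → skim → read → synthesize loop can be sketched as below. Here `web_search` and `read_webpages` are stand-in stubs for the agent's real tools, which this skill assumes exist; only the loop structure is illustrative:
+
+```python
+# Stub tools standing in for the agent's real web_search / read_webpages.
+def web_search(query):
+    return [{"url": f"https://example.com/{query}/{i}", "snippet": f"result {i}"}
+            for i in range(5)]
+
+def read_webpages(urls):
+    return {url: f"content of {url}" for url in urls}
+
+def research(queries, max_sources=3):
+    """Iterate: search, take the most relevant results, read them, cite every source."""
+    findings = []
+    for query in queries:
+        results = web_search(query)[:max_sources]      # limit to 3-5 sources
+        pages = read_webpages([r["url"] for r in results])
+        for url, content in pages.items():
+            findings.append(f"[{content}] - Source: {url}")
+        if len(findings) >= max_sources:               # stop when sufficient
+            break
+    return "Based on my research:\n\n" + "\n".join(findings)
+```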