#!/usr/bin/env agent
# Find Prospects Skill

Search online communities for people who have problems your product solves.
## Related Skills

- Biz/Skills/ParseReddit.md - Reddit JSON API parsing
- Biz/Skills/ParseHackerNews.md - HN Firebase API parsing
## Input

You need these parameters (provided in the task; otherwise ask the user):
- product: What product/service are you finding prospects for?
- problem: What problem does it solve?
- communities: Where to search (reddit, hackernews, twitter, forums)
- search_terms: Keywords that indicate someone has this problem
## Process

### Phase 1: Search
- Construct 3-5 targeted search queries combining:
  - Site filter: `site:reddit.com` or `site:news.ycombinator.com`
  - Problem keywords from search_terms
  - Frustration/desire signals: "wish", "looking for", "need", "anyone know", "is there"
- Run each search with `web_search` and collect promising URLs.
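As an illustration only (the keyword below is invented, not taken from search_terms), the query construction can be sketched in shell:

```shell
# Sketch: one query per intent signal, combining site filter + problem keyword.
# KEYWORD is a made-up example; real runs substitute values from search_terms.
SITE='site:reddit.com'
KEYWORD='newsletter text to speech'
for SIGNAL in "wish" "looking for" "is there"; do
  printf '%s "%s" "%s"\n' "$SITE" "$KEYWORD" "$SIGNAL"
done
```

Each emitted line is one query to pass to `web_search`.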
### Phase 2: Deep Extraction
For each promising Reddit URL, use the ParseReddit skill (Biz/Skills/ParseReddit.md):

```shell
# Append .json to the post URL to get the raw JSON for the post and comments
curl -s -H "User-Agent: agentbot/1.0" "REDDIT_URL.json" | jq '{
  author: .[0].data.children[0].data.author,
  title: .[0].data.children[0].data.title,
  body: .[0].data.children[0].data.selftext,
  score: .[0].data.children[0].data.score,
  num_comments: .[0].data.children[0].data.num_comments,
  created: .[0].data.children[0].data.created_utc,
  subreddit: .[0].data.children[0].data.subreddit,
  comments: [.[1].data.children[:3][] | .data | {author, body, score}]
}'
```
For Hacker News URLs, use the ParseHackerNews skill (Biz/Skills/ParseHackerNews.md):

```shell
# Extract the item ID from a URL like news.ycombinator.com/item?id=41824973
ID=41824973
curl -s "https://hacker-news.firebaseio.com/v0/item/$ID.json" | jq '{
  author: .by,
  title: .title,
  text: .text,
  score: .score,
  time: .time,
  comments: .descendants
}'
```
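To sanity-check the jq mapping without a network call, it can be run against an inline sample (all values below are invented):

```shell
# Invented sample item; field names follow the HN Firebase item schema
echo '{"by":"sample_user","title":"Sample","text":"hi","score":42,"time":1700000000,"descendants":7}' |
  jq '{author: .by, title: .title, text: .text, score: .score, time: .time, comments: .descendants}'
```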
### Phase 3: Qualify

For each extracted post, determine its priority:
- High priority:
  - Explicitly asking for a solution ("is there an app", "looking for", "anyone know")
  - Recent (< 3 months old)
  - Has engagement (comments, upvotes)
- Medium priority:
  - Expressing pain but not actively searching
  - Older but still relevant
- Skip:
  - More than 6 months old
  - Already solved
  - Using a competitor
  - Tangential mention
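The age cutoffs above (< 3 months for high priority, ~6 months for skip) can be applied mechanically from a post's `created_utc` timestamp; a minimal sketch:

```shell
# Sketch: bucket a post by age in days; created is an invented sample timestamp
created=1700000000
now=$(date +%s)
age_days=$(( (now - created) / 86400 ))
if [ "$age_days" -le 90 ]; then
  echo "recent"   # can qualify as high priority
elif [ "$age_days" -le 180 ]; then
  echo "older"    # medium priority at best
else
  echo "skip"     # more than ~6 months old
fi
```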
### Phase 4: Output
## Output Format

```markdown
## Prospects Found

### High Priority

1. **u/username** in r/subreddit
   - Posted: YYYY-MM-DD (X months ago)
   - Score: N points, M comments
   - Quote: "Exact quote from post..."
   - URL: https://reddit.com/...
   - Why: [Specific reason this is high priority]

### Medium Priority

2. **hn_user** on Hacker News
   - Posted: YYYY-MM-DD
   - Quote: "..."
   - URL: https://news.ycombinator.com/item?id=...
   - Why: [Reason]

### Skipped

- URL - Reason (too old, already solved, etc.)

### Search Queries Used

- query 1
- query 2

### Summary

- Queries run: X
- Posts examined: Y
- High priority: Z
- Medium priority: W

### Recommended Actions

1. [Specific suggestion for top prospect]
2. [Subreddit to monitor]
```
## Quality Checklist
Before outputting, verify:
- Every username is REAL (from the JSON, not "UNKNOWN")
- Every quote is EXACT text from the post
- Dates are converted from Unix timestamps
- URLs work
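For the date check, one way to convert a Unix timestamp such as `created_utc` (GNU `date` syntax; BSD/macOS uses `date -r` instead):

```shell
# Convert a Unix timestamp to YYYY-MM-DD in UTC (GNU date)
date -u -d @1700000000 +%Y-%m-%d   # → 2023-11-14
```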
## Example
Product: PodcastItLater - converts articles to podcasts
Problem: Too many articles/newsletters to read, want to listen instead
Communities: reddit, hackernews
Search terms:
- "wish I could listen to articles"
- "newsletter text to speech"
- "too many newsletters to read"
- "article backlog"