# Read Hacker News

Read and filter the Hacker News front page to find relevant stories.

## Process
- **Choose story list**: top, new, or best stories from the HN API
- **Fetch story IDs**: get the array of story IDs from the Firebase API
- **Set filters**: minimum score, comment count, or keyword criteria
- **Fetch story details**: get individual story data for each ID
- **Apply filters**: check score, comments, and title keywords
- **Format output**: display as a readable list with metadata
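The steps above can be sketched as a small shell helper. This is a minimal sketch, not a definitive implementation; `hn_scan` and `hn_format` are illustrative names, not part of any existing tool.

```shell
#!/bin/sh
# Sketch of the fetch -> filter -> format pipeline described above.
HN="https://hacker-news.firebaseio.com/v0"

# Format one story JSON object (read on stdin) as a single line.
hn_format() {
  jq -r '"[\(.score // 0) pts, \(.descendants // 0) comments] \(.title // "Untitled")"'
}

# hn_scan LIST N MIN_SCORE, e.g. `hn_scan top 10 50`
hn_scan() {
  list=$1
  n=$2
  min_score=$3
  # Steps 1-2: choose a story list and fetch its IDs
  curl -s "$HN/${list}stories.json" | jq -r ".[:${n}][]" |
  while read -r id; do
    # Step 4: fetch story details
    data=$(curl -s "$HN/item/$id.json")
    score=$(printf '%s\n' "$data" | jq -r '.score // 0')
    # Step 5: apply the score filter
    [ "$score" -ge "$min_score" ] || continue
    # Step 6: format the output
    printf '%s\n' "$data" | hn_format
  done
}
```

Usage: `hn_scan top 10 50` prints one line per top-10 story with at least 50 points.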
## Examples

Fetch and display stories from HN with filtering options:

- Get top/new/best stories
- Filter by score, comment count, or keywords
- Display a formatted list with metadata
## API Endpoints

Base URL: `https://hacker-news.firebaseio.com/v0`

```bash
# Get lists of story IDs
curl -s "https://hacker-news.firebaseio.com/v0/topstories.json"   # Top stories (up to 500 IDs)
curl -s "https://hacker-news.firebaseio.com/v0/newstories.json"   # New stories
curl -s "https://hacker-news.firebaseio.com/v0/beststories.json"  # Best stories

# Get story details by ID
curl -s "https://hacker-news.firebaseio.com/v0/item/12345.json"
```
## Story JSON Structure

```json
{
  "by": "author_username",
  "descendants": 42,
  "id": 12345,
  "kids": [123, 456],
  "score": 150,
  "time": 1234567890,
  "title": "Story title",
  "type": "story",
  "url": "https://example.com"
}
```
Fields:

- `by`: author username
- `descendants`: total comment count
- `id`: story ID
- `kids`: array of top-level comment IDs
- `score`: points
- `time`: Unix timestamp
- `title`: story title
- `type`: one of "job", "story", "comment", "poll", "pollopt" (Ask HN and Show HN posts have type "story")
- `url`: external URL (missing for Ask HN/text posts)
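The field list above maps directly onto a jq filter. As one possible layout (the helper name `story_fields` is made up for illustration), a story object can be flattened to tab-separated fields:

```shell
#!/bin/sh
# story_fields: turn one story JSON object (stdin) into tab-separated
# fields; `//` supplies fallbacks for the optional fields (url is
# absent on Ask HN / text posts).
story_fields() {
  jq -r '[.id, .score // 0, .descendants // 0, .by // "unknown",
          .title // "Untitled", .url // "text post"] | @tsv'
}

# Usage (12345 is a placeholder ID):
# curl -s "https://hacker-news.firebaseio.com/v0/item/12345.json" | story_fields
```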
## Example Commands

Fetch and display the top 10 stories:
```bash
# Get story IDs
IDS=$(curl -s "https://hacker-news.firebaseio.com/v0/topstories.json" | jq -r '.[:10][]')

# Fetch and format each story
echo "Top 10 stories from Hacker News:"
echo ""
COUNT=1
for id in $IDS; do
  DATA=$(curl -s "https://hacker-news.firebaseio.com/v0/item/$id.json")
  TITLE=$(echo "$DATA" | jq -r '.title // "Untitled"')
  URL=$(echo "$DATA" | jq -r '.url // "Ask HN"')
  SCORE=$(echo "$DATA" | jq -r '.score // 0')
  COMMENTS=$(echo "$DATA" | jq -r '.descendants // 0')
  AUTHOR=$(echo "$DATA" | jq -r '.by // "unknown"')
  TIME=$(echo "$DATA" | jq -r '.time // 0')

  # Calculate relative time
  NOW=$(date +%s)
  DIFF=$((NOW - TIME))
  HOURS=$((DIFF / 3600))
  if [ "$HOURS" -lt 1 ]; then
    AGE="$((DIFF / 60)) minutes ago"
  elif [ "$HOURS" -lt 24 ]; then
    AGE="$HOURS hours ago"
  else
    AGE="$((HOURS / 24)) days ago"
  fi

  echo "$COUNT. [$SCORE pts, $COMMENTS comments] $TITLE"
  echo "   URL: $URL"
  echo "   By: $AUTHOR | $AGE"
  echo "   Discussion: https://news.ycombinator.com/item?id=$id"
  echo ""
  COUNT=$((COUNT + 1))
done
```
Filter by minimum score (50+ points):

```bash
# Get the top 50 story IDs and filter
IDS=$(curl -s "https://hacker-news.firebaseio.com/v0/topstories.json" | jq -r '.[:50][]')
echo "Stories with 50+ points:"
echo ""
COUNT=1
for id in $IDS; do
  DATA=$(curl -s "https://hacker-news.firebaseio.com/v0/item/$id.json")
  SCORE=$(echo "$DATA" | jq -r '.score // 0')

  # Filter by score
  if [ "$SCORE" -ge 50 ]; then
    TITLE=$(echo "$DATA" | jq -r '.title')
    COMMENTS=$(echo "$DATA" | jq -r '.descendants // 0')
    echo "$COUNT. [$SCORE pts, $COMMENTS comments] $TITLE"
    echo "   https://news.ycombinator.com/item?id=$id"
    echo ""
    COUNT=$((COUNT + 1))
  fi
done
```
Filter by keyword in title:

```bash
# Search for stories about "AI" or "LLM"
IDS=$(curl -s "https://hacker-news.firebaseio.com/v0/topstories.json" | jq -r '.[:100][]')
echo "Stories about AI/LLM:"
echo ""
COUNT=1
for id in $IDS; do
  DATA=$(curl -s "https://hacker-news.firebaseio.com/v0/item/$id.json")
  TITLE=$(echo "$DATA" | jq -r '.title // ""')

  # Case-insensitive search
  if echo "$TITLE" | grep -iqE 'AI|LLM|GPT|ChatGPT|Claude'; then
    SCORE=$(echo "$DATA" | jq -r '.score // 0')
    COMMENTS=$(echo "$DATA" | jq -r '.descendants // 0')
    AUTHOR=$(echo "$DATA" | jq -r '.by // "unknown"')
    echo "$COUNT. [$SCORE pts, $COMMENTS comments] $TITLE"
    echo "   By: $AUTHOR"
    echo "   https://news.ycombinator.com/item?id=$id"
    echo ""
    COUNT=$((COUNT + 1))
  fi
done
```
Filter by minimum comment count (10+ comments):

```bash
# Get stories with active discussion
IDS=$(curl -s "https://hacker-news.firebaseio.com/v0/topstories.json" | jq -r '.[:30][]')
echo "Stories with 10+ comments:"
echo ""
COUNT=1
for id in $IDS; do
  DATA=$(curl -s "https://hacker-news.firebaseio.com/v0/item/$id.json")
  COMMENTS=$(echo "$DATA" | jq -r '.descendants // 0')

  if [ "$COMMENTS" -ge 10 ]; then
    TITLE=$(echo "$DATA" | jq -r '.title')
    SCORE=$(echo "$DATA" | jq -r '.score // 0')
    AUTHOR=$(echo "$DATA" | jq -r '.by // "unknown"')
    echo "$COUNT. [$SCORE pts, $COMMENTS comments] $TITLE"
    echo "   By: $AUTHOR"
    echo "   https://news.ycombinator.com/item?id=$id"
    echo ""
    COUNT=$((COUNT + 1))
  fi
done
```
Filter Ask HN posts:

```bash
# Find Ask HN posts (stories without an external URL)
IDS=$(curl -s "https://hacker-news.firebaseio.com/v0/topstories.json" | jq -r '.[:50][]')
echo "Ask HN posts:"
echo ""
COUNT=1
for id in $IDS; do
  DATA=$(curl -s "https://hacker-news.firebaseio.com/v0/item/$id.json")
  URL=$(echo "$DATA" | jq -r '.url // ""')
  TITLE=$(echo "$DATA" | jq -r '.title // ""')

  # Ask HN posts typically have no URL, or a title starting with "Ask HN"
  if [ -z "$URL" ] || echo "$TITLE" | grep -iq '^Ask HN'; then
    SCORE=$(echo "$DATA" | jq -r '.score // 0')
    COMMENTS=$(echo "$DATA" | jq -r '.descendants // 0')
    AUTHOR=$(echo "$DATA" | jq -r '.by // "unknown"')
    echo "$COUNT. [$SCORE pts, $COMMENTS comments] $TITLE"
    echo "   By: $AUTHOR"
    echo "   https://news.ycombinator.com/item?id=$id"
    echo ""
    COUNT=$((COUNT + 1))
  fi
done
```
## Output Format

Results should be displayed as a readable list:

```
Top 20 stories from Hacker News:

1. [150 pts, 42 comments] Standard Ebooks: Public Domain Day 2026 in Literature
   URL: https://standardebooks.org/blog/public-domain-day-2026
   By: WithinReason | 2 hours ago
   Discussion: https://news.ycombinator.com/item?id=46462702

2. [120 pts, 35 comments] Ask HN: What are you working on?
   URL: Ask HN
   By: username | 3 hours ago
   Discussion: https://news.ycombinator.com/item?id=46462719

3. [95 pts, 18 comments] Show HN: My Weekend Project
   URL: https://example.com/project
   By: maker | 5 hours ago
   Discussion: https://news.ycombinator.com/item?id=46462800
```
## Combining Filters

You can combine multiple filters:

```bash
# Stories with 50+ points AND 20+ comments about "startup"
IDS=$(curl -s "https://hacker-news.firebaseio.com/v0/topstories.json" | jq -r '.[:100][]')
echo "Popular startup stories:"
echo ""
COUNT=1
for id in $IDS; do
  DATA=$(curl -s "https://hacker-news.firebaseio.com/v0/item/$id.json")
  SCORE=$(echo "$DATA" | jq -r '.score // 0')
  COMMENTS=$(echo "$DATA" | jq -r '.descendants // 0')
  TITLE=$(echo "$DATA" | jq -r '.title // ""')

  if [ "$SCORE" -ge 50 ] && [ "$COMMENTS" -ge 20 ] && echo "$TITLE" | grep -iq 'startup'; then
    AUTHOR=$(echo "$DATA" | jq -r '.by // "unknown"')
    URL=$(echo "$DATA" | jq -r '.url // "Ask HN"')
    echo "$COUNT. [$SCORE pts, $COMMENTS comments] $TITLE"
    echo "   URL: $URL"
    echo "   By: $AUTHOR"
    echo "   https://news.ycombinator.com/item?id=$id"
    echo ""
    COUNT=$((COUNT + 1))
  fi
done
```
## Error Handling

Handle missing fields and network errors:

```bash
# Check that curl succeeded (-f makes curl fail on HTTP errors too,
# which plain -s silently swallows)
if ! IDS=$(curl -sf "https://hacker-news.firebaseio.com/v0/topstories.json"); then
  echo "Error: Failed to fetch story list" >&2
  exit 1
fi

# Inside the fetch loop: check that the story data is valid
DATA=$(curl -sf "https://hacker-news.firebaseio.com/v0/item/$id.json")
if [ -z "$DATA" ] || [ "$DATA" = "null" ]; then
  echo "Warning: Could not fetch story $id, skipping..." >&2
  continue
fi

# Use default values for missing fields
TITLE=$(echo "$DATA" | jq -r '.title // "Untitled"')
SCORE=$(echo "$DATA" | jq -r '.score // 0')
COMMENTS=$(echo "$DATA" | jq -r '.descendants // 0')
AUTHOR=$(echo "$DATA" | jq -r '.by // "unknown"')
```
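For transient network failures, a small retry wrapper around curl can help. This is a hedged sketch; `fetch_json` is a made-up helper name and the three-attempt/backoff policy is an arbitrary choice:

```shell
#!/bin/sh
# fetch_json URL: fetch a URL with curl, retrying up to 3 times with a
# short linear backoff; prints the body on success, returns 1 on failure.
fetch_json() {
  url=$1
  for attempt in 1 2 3; do
    if data=$(curl -sf "$url"); then
      printf '%s\n' "$data"
      return 0
    fi
    sleep "$attempt"  # back off 1s, then 2s, before the next attempt
  done
  return 1
}

# Usage:
# DATA=$(fetch_json "https://hacker-news.firebaseio.com/v0/item/$id.json") || continue
```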
## Rate Limiting

Be considerate of the API:

```bash
# Add a small delay between requests
for id in $IDS; do
  DATA=$(curl -s "https://hacker-news.firebaseio.com/v0/item/$id.json")
  # Process data...
  sleep 0.1  # 100 ms delay
done
```
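If the sequential loop with delays is too slow, bounded concurrency with `xargs -P` cuts wall-clock time while keeping the request rate modest. A sketch, with `fetch_items_concurrent` as an illustrative name and the concurrency level a conservative guess:

```shell
#!/bin/sh
# fetch_items_concurrent N P: fetch the first N top-story items with at
# most P curl requests in flight; emits a stream of story JSON objects.
HN="https://hacker-news.firebaseio.com/v0"

fetch_items_concurrent() {
  curl -s "$HN/topstories.json" |
    jq -r ".[:${1}][]" |
    xargs -P "${2}" -I{} curl -s "$HN/item/{}.json"
}

# Usage: 30 stories, 4 concurrent requests, one formatted line each
# fetch_items_concurrent 30 4 | jq -r '"[\(.score // 0) pts] \(.title // "Untitled")"'
```

Note that with `-P` the output order is no longer the ranking order; re-sort afterwards if order matters.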
## Integration with ParseHackerNews

Use this skill to find interesting posts, then use ParseHackerNews to read them in depth:

```bash
# 1. Find posts about a topic
IDS=$(curl -s "https://hacker-news.firebaseio.com/v0/topstories.json" | jq -r '.[:50][]')
for id in $IDS; do
  DATA=$(curl -s "https://hacker-news.firebaseio.com/v0/item/$id.json")
  TITLE=$(echo "$DATA" | jq -r '.title // ""')
  if echo "$TITLE" | grep -iq 'kubernetes'; then
    echo "Found interesting post: https://news.ycombinator.com/item?id=$id"
    # 2. Use ParseHackerNews to extract the full content and comments
    # (see ParseHackerNews.md for detailed extraction)
  fi
done
```
## Tips

- Default to the top 30 stories for quick scans
- Use the top 100-200 for filtered searches
- Fetching all 500 top stories is slow (50+ seconds with delays)
- Combine score and comment filters to find quality discussions
- "Ask HN" and "Show HN" posts often have great discussions
- Sort by comment count to surface the most active discussions
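As a sketch of the last tip (the helper name `sort_by_comments` is made up), a stream of fetched story objects can be re-ordered with jq's `sort_by`:

```shell
#!/bin/sh
# sort_by_comments: read a stream of story JSON objects on stdin and
# print them sorted by comment count, most-discussed first.
sort_by_comments() {
  jq -rs 'sort_by(-(.descendants // 0))
          | .[] | "[\(.descendants // 0) comments] \(.title)"'
}

# Usage: fetch items as in the examples above, then pipe them through:
# for id in $IDS; do
#   curl -s "https://hacker-news.firebaseio.com/v0/item/$id.json"
# done | sort_by_comments
```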