A solopreneur faced a daily grind: sifting through Hacker News, Product Hunt, and Twitter to stay on top of tech trends. Each morning, 45 minutes disappeared into scattered sources, and the resulting information overload left little time for meaningful action. The solution? A self-running AI briefing system that curates only the most relevant developments and explains why they matter for business decisions.
From manual scanning to automated insight
The system, named Tavily Intel Pulse, eliminates the need to jump between multiple platforms by delivering a structured briefing every morning at 7 AM. Instead of raw links, the briefing provides a concise executive summary, top three news items with three-part analysis, ecosystem signals, actionable ideas, and a daily reflection question. This transformation turns data collection into strategic input, allowing solopreneurs to focus on execution rather than information gathering.
The core innovation lies in its editorial approach—each item is not just listed but analyzed for its relevance, impact, and potential opportunity. For example, a Product Hunt launch isn’t just mentioned; it’s evaluated for its traction, funding, and alignment with a solopreneur’s specific needs. This shift from aggregation to analysis is what makes the system valuable.
A four-stage pipeline that runs smoothly
The briefing’s reliability comes from a resilient four-phase pipeline. Each phase saves its output before the next one begins, so after an error the system resumes from the point of failure rather than restarting from scratch.
1. Collection phase: Scraping 20 tech sources
The pipeline begins by gathering data from 20 curated sources using the Tavily API, which offers a free tier of 1,000 credits per month. Sources include:
- Product Hunt launches and trending products
- Hacker News front page and top discussions
- Reddit communities like r/AI_Agents, r/webdev, and r/SaaS
- GitHub trending repositories
- Tech news outlets such as TechCrunch and The Verge
- Funding announcements from Crunchbase signals
The API extracts raw items, which are stored in a temporary JSON file for the next phase.
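A minimal sketch of this step: the Tavily Python client's `TavilyClient(...).search(query)` returns a dict whose `"results"` list holds entries with `"title"`, `"url"`, and `"content"` fields. The queries and the output schema below are illustrative assumptions, and the search call is kept out of the helpers so the normalization logic runs without a network connection.

```python
import json
from datetime import date

def normalize(raw_results):
    """Deduplicate by URL and keep only the fields the next phase needs."""
    items, seen = [], set()
    for r in raw_results:
        url = r.get("url")
        if not url or url in seen:  # drop empty and duplicate URLs
            continue
        seen.add(url)
        items.append({
            "title": r.get("title", ""),
            "url": url,
            "snippet": (r.get("content") or "")[:300],  # trim long bodies
        })
    return items

def save_items(items, path):
    """Write the temporary JSON file handed to the analysis phase."""
    with open(path, "w") as f:
        json.dump({"date": date.today().isoformat(), "items": items}, f)
```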
2. LLM analysis phase: Scoring and prioritization
Each collected item undergoes a scoring process that evaluates its potential impact based on predefined metrics. The scoring system assigns points for factors like monthly recurring revenue (MRR), upvotes, GitHub stars, funding rounds, user base, and mentions of SaaS, agent frameworks, or solopreneur-focused tools. Negative points are applied to generic or low-value content.
The scoring caps at 100 points per item, with a threshold of 20 points required to qualify for deeper analysis. Top-scoring items receive editorial analysis using GPT-4o-mini, a cost-effective model that provides concise summaries, relevance assessments, and potential business opportunities. This phase also includes deep extraction to pull critical details from each source.
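The transparency of the scorer comes from it being a plain point table. A sketch under stated assumptions: the signal names and weights below are illustrative, while the 100-point cap and 20-point qualification threshold are the system's actual parameters.

```python
# Illustrative weights; the real table tracks MRR, upvotes, stars, funding,
# user base, and SaaS/agent/solopreneur mentions as described above.
WEIGHTS = {
    "mrr_mentioned": 15, "upvotes_high": 10, "github_stars_high": 10,
    "funding_round": 15, "large_user_base": 10, "saas": 10,
    "agent_framework": 15, "solopreneur_tool": 15,
}
PENALTIES = {"generic_listicle": -10, "low_value": -15}
CAP, THRESHOLD = 100, 20  # per-item cap and qualification threshold

def score_item(signals):
    """signals: set of keys from WEIGHTS/PENALTIES detected in an item."""
    pts = sum(WEIGHTS.get(s, 0) + PENALTIES.get(s, 0) for s in signals)
    return max(0, min(pts, CAP))  # clamp to [0, CAP]

def qualifies(signals):
    """Items at or above THRESHOLD go on to LLM editorial analysis."""
    return score_item(signals) >= THRESHOLD
```

Because the weights live in a dict rather than inside a prompt, reweighting MRR or upvotes is a one-line change, which is what keeps the prioritization explainable.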
3. Notion delivery phase: Structured knowledge storage
The enriched briefing is automatically saved to a Notion database, organized into five sections:
- A two-minute executive summary
- Top three news items with detailed analysis
- Five additional ecosystem signals
- A cross-signal insight connecting related trends
- A daily reflection question
Each entry includes structured metadata such as source, date, relevance score, and expected impact, making it easy to track trends over time.
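The metadata payload for one entry might be built like this. The property names (`Name`, `Source`, `Date`, `Score`, `Impact`) are assumptions about the database schema; the nested value shapes follow the Notion API's page-property format.

```python
def briefing_properties(title, source, iso_date, score, impact):
    """Build a Notion page-properties payload for one briefing entry."""
    return {
        "Name":   {"title": [{"text": {"content": title}}]},
        "Source": {"rich_text": [{"text": {"content": source}}]},
        "Date":   {"date": {"start": iso_date}},
        "Score":  {"number": score},
        "Impact": {"select": {"name": impact}},
    }
```

A dict like this would be passed as `properties=` to `pages.create(...)` on a `notion_client.Client`, with the target database as the parent.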
4. Notification phase: Instant alerts via Telegram
A Telegram bot sends a markdown-formatted message at 7 AM, providing a quick preview of the briefing’s headline and daily question. The message includes a link to the full Notion page for deeper review. This ensures the briefing is accessible even on mobile devices.
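The notification step boils down to one HTTP call. The Telegram Bot API's `sendMessage` method and its `chat_id`/`text`/`parse_mode` fields are real; the message layout below is an illustrative assumption, and the request is only constructed here, not sent.

```python
def build_telegram_request(token, chat_id, headline, question, notion_url):
    """Assemble the sendMessage URL and payload for the morning alert."""
    text = (
        f"*Morning briefing*\n\n{headline}\n\n"
        f"_Today's question:_ {question}\n\n"
        f"[Open the full briefing]({notion_url})"  # deep link to Notion
    )
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    payload = {"chat_id": chat_id, "text": text, "parse_mode": "Markdown"}
    return url, payload
```

POSTing that payload (e.g. with `requests.post(url, json=payload)`) delivers the preview, and the embedded link covers the mobile use case.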
Lessons learned from five iterations
Building this system wasn’t a one-time effort. Five major versions were required to achieve reliability and usefulness. Key lessons include:
- Phase architecture is essential – Early versions crashed entirely if one step failed. The current design saves progress at each stage, allowing the system to resume from where it left off. This resilience proved critical when the Tavily API rotated keys unexpectedly.
- Transparent scoring builds trust – The point-based system is adjustable and explainable. Changing the weight of MRR or upvotes takes seconds, eliminating the black-box nature of AI decisions.
- Editorial format trumps data volume – An aggregator might surface 50 links, but an editor highlights why one specific development matters. The difference lies in context, not collection.
- Cost efficiency is achievable – With Tavily’s free tier, GPT-4o-mini at approximately $0.15 per briefing, Notion’s free tier, and Telegram’s free bot service, the total monthly cost remains under $5. This makes the system accessible even for solo founders.
A complete skill for self-sufficient solopreneurs
The entire pipeline is documented as a Hermes Agent skill called tavily-intel-pulse. The skill includes:
- A Python script (morning_briefing.py) that orchestrates all phases
- Cron job configuration for daily scheduling
- Notion database schema for structured storage
- Scoring system with adjustable caps and thresholds
- Error handling tailored to each phase
- API key rotation mechanisms
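The scheduling piece is a single crontab entry. The exact path and interpreter below are assumptions; the 7 AM schedule is the system's.

```shell
# Run the briefing every day at 07:00 local time; append output to a log.
0 7 * * * /usr/bin/python3 $HOME/.hermes/scripts/morning_briefing.py >> $HOME/.hermes/scripts/briefing.log 2>&1
```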
The file structure is designed for clarity and maintainability:
~/.hermes/scripts/
├── morning_briefing.py # Main script (v5.0)
├── data/
│ └── dedup_history.json # URL deduplication history (3-day window)
└── tmp/
├── f1_items_YYYYMMDD.json
├── f2_enriched_YYYYMMDD.json
├── f3_notion_id_YYYYMMDD.txt
└── f4_telegram_sent_YYYYMMDD.txt

This structure ensures the system is reproducible, debuggable, and adaptable to future needs.
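The dedup_history.json file above can be maintained with two small helpers. This is a sketch, assuming the history maps each URL to its first-seen ISO date; the function names are illustrative, while the 3-day window matches the layout shown.

```python
from datetime import date, timedelta

WINDOW_DAYS = 3  # the deduplication window from the file layout above

def prune_history(history, today):
    """Drop URLs first seen more than WINDOW_DAYS ago."""
    cutoff = today - timedelta(days=WINDOW_DAYS)
    return {url: d for url, d in history.items()
            if date.fromisoformat(d) >= cutoff}

def filter_new(items, history, today):
    """Keep only items whose URL is absent from the pruned history."""
    history = prune_history(history, today)
    fresh = [it for it in items if it["url"] not in history]
    for it in fresh:
        history[it["url"]] = today.isoformat()  # record for future runs
    return fresh, history
```

Pruning on every run keeps the history file from growing without bound while still letting a story resurface if it becomes relevant again after the window lapses.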
Why this matters for independent builders
Solopreneurs and small teams often lack dedicated market intelligence resources, yet they face constant decisions about product direction, pricing, and tech stack. Without context, even the most promising opportunity can go unnoticed. This AI-driven briefing bridges that gap by providing curated, actionable insights in minutes.
For instance, the system might highlight a Product Hunt launch with strong traction in the agent ecosystem, prompting a solopreneur to evaluate whether their own product could benefit from integrating an MCP server. Or it might flag a funding round in a niche category, signaling emerging competition or partnership opportunities.
The goal isn’t to replace critical thinking but to accelerate it. By automating the heavy lifting of information filtering, solopreneurs can spend more time building and less time sifting through noise.
Question to reflect on today
If you’re spending time consuming information without acting on it, what’s one change you could make to turn that input into output?
AI summary
How do you build an automated system that condenses the news scanning you spend hours on each morning into a two-minute summary? And how could this setup, running on free APIs and simple scoring rules, be revolutionary for independent founders?