syNRGy - Me + AI Synergy
By Navya Rehani Gupta (NRG) · CPTO at Talent.com · Global CPO Award Winner
About
I build systems with AI. 16+ hours saved per week, more headroom for the work that matters. That's the real synergy.
How I Operate
AI is how I think, build, and ship.
- Build tools and automate research
- Turn complexity into clear narratives
- Pressure-test strategy and trade-offs
- Accelerate first drafts
- Clarify language when stakes are high
Workflows
End-to-end pipelines that connect AI tools, from meeting capture to competitive intel to automated deployments.
Case Study: The Self-Learning System - A system that scores my AI usage every week and tells me where to improve. Built in 2 hours, compounds every Friday.
Active Tools (2026)
Total weekly savings: 16+ hours
- AI Coding (Claude Code) - Build, edit, and ship from natural language (8 hrs/wk)
- Meeting Intelligence (Granola) - Auto-capture and extract action items (2 hrs/wk)
- Rapid Prototyping (Lovable) - Turn prompts into deployed web apps (2 hrs/wk)
- Instant Deploy (Vercel) - Ship to production from terminal (1 hr/wk)
- Workflow Automation (Zapier) - Connect tools and eliminate manual steps (1 hr/wk)
- Visual Storytelling (Nano Banana / Gemini) - Turn strategy markdown into visual infographics (2 hrs/wk)
Principles That Drive syNRGy
- Clarity over cleverness
- Judgment first, tools second
- First drafts are cheap. Decisions are not.
- Move fast, stay deliberate
- Ruthless simplicity
Compare Notes
Building something similar? I'd love to hear how you're approaching it.
Contact: LinkedIn
Case Study #1
The Self-Learning System
A system that makes me sharper every week, automatically.
Why I Built This
I realized I had no way to measure whether I was improving with AI or just using it more. That bothered me.
We're all using AI now. But are we getting better at it, or just busier with it?
So I built a system that answers that, every week. Two hours to build. Returns compound every week. It's already changed how I make decisions, how fast I ship, and what I catch before it becomes a problem.
How It Works
Three plain-text files load at the start of every AI session. Each one compounds. The structured approach (CLAUDE.md, hooks, skills) comes from Dave Killeen's Dex, an open-source AI operating system for professionals. The scoring framework, automated briefing pipeline, and AI mistake codification are built on that foundation. A minimal sketch of the session-start load follows the file list below.
The Three Files
- Working Preferences - How I think and decide
- Mistake Patterns - AI errors I've caught and codified
- Session Learnings - Running improvement log
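To make that concrete, here's a minimal Python sketch of what the session-start load could look like. The folder, file names, and function are illustrative assumptions, not Dex's actual layout:

```python
from pathlib import Path

# Hypothetical locations; Dex's real layout may differ.
MEMORY_DIR = Path.home() / "memory"
MEMORY_FILES = [
    "working-preferences.md",  # how I think and decide
    "mistake-patterns.md",     # AI errors I've caught and codified
    "session-learnings.md",    # running improvement log
]

def load_session_context() -> str:
    """Concatenate the three memory files into one block that
    gets prepended to every AI session as starting context."""
    sections = []
    for name in MEMORY_FILES:
        path = MEMORY_DIR / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)
```

Because the files are plain text, the same loader works whether a human or an AI is reading them, and version control doubles as an audit trail of how the system learned.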
Every Session
Load context > Work gets done > Capture insights > Catch AI errors
Every Friday
Score 5 dimensions > Compare to top operators > Surface upgrades > Next week is smarter
The Friday Email
Every Friday morning, an automated email tells me whether I leveled up or coasted. It scores me across five dimensions (a sketch of the report assembly follows the list):
- Automation - Do my workflows run with minimal intervention?
- Learning & Memory - Does every session build on the last?
- Custom Workflows - Am I eliminating repetition?
- Ecosystem Reach - Am I using the best tools available?
- Resource Awareness - Am I being efficient with what I have?
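As a rough illustration, here's how a scorecard like this could be assembled into the Friday email, using the scores from the scorecard below as sample data. The dataclass, the unweighted mean, and the message format are illustrative assumptions; the real system's aggregation and delivery aren't specified:

```python
from dataclasses import dataclass
from email.message import EmailMessage

@dataclass
class Dimension:
    name: str
    score: int  # 0-10
    note: str   # the "what's next" recommendation

def build_friday_report(dimensions: list[Dimension]) -> EmailMessage:
    """Render the weekly scorecard as a plain-text email.
    Uses an unweighted mean; the actual aggregation is unknown."""
    overall = sum(d.score for d in dimensions) / len(dimensions)
    lines = [f"Weekly scorecard: {overall:.1f}/10", ""]
    lines += [f"- {d.name}: {d.score}/10 - {d.note}" for d in dimensions]
    msg = EmailMessage()
    msg["Subject"] = f"Friday review: {overall:.1f}/10"
    msg.set_content("\n".join(lines))
    return msg

report = build_friday_report([
    Dimension("Automation", 8, "Next: parallel job execution"),
    Dimension("Learning & Memory", 9, "Best dimension in the system"),
    Dimension("Custom Workflows", 7, "Convert remaining manual processes"),
    Dimension("Ecosystem Reach", 6, "Adopt new capabilities faster"),
    Dimension("Resource Awareness", 7, "Get proactive about cost"),
])
print(report.get_content())
```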
My Weekly Scorecard (7.8/10)
- Automation: 8/10 - Daily runs with retries. Next: parallel job execution
- Learning & Memory: 9/10 - Every session compounds. Best dimension in the system
- Custom Workflows: 7/10 - 10 active. Several manual processes still not converted
- Ecosystem Reach: 6/10 - New AI capabilities ship weekly. Room to adopt faster
- Resource Awareness: 7/10 - Token tracking works. Not yet proactive about cost optimization
Next milestone: 9.0/10. This score will never be 10. The tools evolve too fast. The point is to keep climbing.
Technical Architecture
- Memory Layer: Three plain-text markdown files. No databases, no APIs. Human-readable, AI-readable, version-controlled.
- Automation Layer: Hooks fire on specific events: session-start loads learning files, auto-capture prompts for insights after significant work.
- Skills Layer: Ten reusable workflows as slash commands, each built after repeating the same workflow three times.
- Scheduling Layer: Five automated jobs M-F: 5am briefing, model updates, industry digest, Friday review, catch-up retries. All with validation gates and duplicate prevention (sketched after this list).
- Weekly Scripts: Friday review analyzes session patterns, researches top operators, scores my setup across five dimensions, and emails a report with upgrade recommendations.
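The duplicate-prevention and retry behavior is the part most worth copying. Here's one common way to implement it; the marker-file approach, directory, and job names below are assumptions, not the system's actual mechanism:

```python
import datetime
import subprocess
from pathlib import Path

RUN_MARKERS = Path.home() / ".run-markers"  # hypothetical marker directory
RUN_MARKERS.mkdir(exist_ok=True)

def run_once_per_day(job_name: str, command: list[str], retries: int = 3) -> None:
    """Run a scheduled job at most once per calendar day, retrying
    on failure. A cron entry like `0 5 * * 1-5` would invoke this
    Monday through Friday at 5am."""
    marker = RUN_MARKERS / f"{job_name}-{datetime.date.today().isoformat()}"
    if marker.exists():             # duplicate prevention
        return
    for _ in range(retries):
        result = subprocess.run(command)
        if result.returncode == 0:  # validation gate: exit status
            marker.touch()          # record today's successful run
            return
    # No marker is written on failure, so a later catch-up job retries.

run_once_per_day("morning-briefing", ["python", "briefing.py"])  # briefing.py is hypothetical
```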
What I Learned
The system took about two hours to build. The returns started compounding in week two.
The biggest win for a busy operator: every week I got actionable tips I never would have found on my own. The system surfaced three things automatically:
- What other top operators were doing that I wasn't. Techniques and workflows I'd never have time to research myself.
- Things I was doing repeatedly that needed to be automated. Patterns I couldn't see because I was too close to the work.
- New model improvements I needed to be aware of. Capabilities that shipped while I was heads-down on other things.
I also learned that most of my improvement came from AI mistakes the system caught and codified on its own. When the AI gets something wrong, the system writes a prevention rule automatically. I don't maintain the file. It maintains itself.
If I were starting over, I'd build the mistake patterns file first. That file is where you debug and train the AI; everything else layers on top of it.
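Here's a minimal sketch of that codification step, assuming each caught error is appended to the mistake patterns file as a rule the next session will load. The path, function, and rule format are illustrative:

```python
import datetime
from pathlib import Path

MISTAKES = Path.home() / "memory" / "mistake-patterns.md"  # illustrative path

def codify_mistake(what_went_wrong: str, prevention_rule: str) -> None:
    """Append a prevention rule to the mistake-patterns file.
    Since that file loads at every session start, the rule takes
    effect in the very next session with no manual upkeep."""
    stamp = datetime.date.today().isoformat()
    entry = (
        f"\n## {stamp}\n"
        f"- Mistake: {what_went_wrong}\n"
        f"- Rule: {prevention_rule}\n"
    )
    MISTAKES.parent.mkdir(parents=True, exist_ok=True)
    with MISTAKES.open("a") as f:
        f.write(entry)

codify_mistake(
    "Cited an API endpoint that doesn't exist",
    "Verify endpoints against the actual docs before citing them",
)
```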
Compare Notes
Curious how your AI usage is actually compounding? I'd love to hear what you'd measure.
Connect on LinkedIn
Learning loop inspired by Dave Killeen's Dex. Scoring framework and automation pipeline built on that foundation.