# The Self-Learning System

Case Study #1 from syNRGy
By Navya Rehani Gupta (NRG)
https://synrgy-nrg.com/self-learning-system

---

A system that makes me sharper every week, automatically.

## How It Works

Claude doesn't remember anything between sessions. Every conversation starts blank. So I built the memory myself: three plain-text files that load at the start of every AI session. Each one compounds.

| File | Purpose |
|------|---------|
| Working Preferences | How I think and decide |
| Mistake Patterns | AI errors I've caught and codified |
| Session Learnings | Running improvement log |

**Every session loop:** Load context > Work gets done > Capture insights > Catch AI errors

**Every Friday loop:** Score 5 dimensions > Compare to top operators > Surface upgrades > Next week is smarter

The structured approach (CLAUDE.md, hooks, skills) comes from Dave Killeen's Dex (https://heydex.ai/), an open-source AI operating system for professionals. The scoring framework, automated briefing pipeline, and AI mistake codification are built on that foundation.

## Why I Built This

I realized I had no way to measure whether I was improving with AI or just using it more. That bothered me.

We're all using AI now. But are we getting better at it, or just busier with it?

So I built a system that answers that, every week. Two hours to build. Returns compound every week. It's already changed how I make decisions, how fast I ship, and what I catch before it becomes a problem.

## The Friday Email

Every Friday morning, an automated email tells me whether I leveled up or coasted. It scores me across five dimensions:

- **Automation:** Do my workflows run with minimal intervention?
- **Learning & Memory:** Does every session build on the last?
- **Custom Workflows:** Am I eliminating repetition?
- **Ecosystem Reach:** Am I using the best tools available?
- **Resource Awareness:** Am I being efficient with what I have?
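The report step is simple at its core: per-dimension scores go in, an email body comes out. Here's a minimal sketch of that rendering step. The function name and data layout are illustrative, not the actual pipeline code, and the overall score is passed in rather than computed, since the real scoring logic isn't shown here.

```python
def render_report(overall: float, scores: dict[str, tuple[int, str]]) -> str:
    """Render the Friday email body from per-dimension scores.

    `scores` maps dimension name -> (score out of 10, one-line note).
    """
    lines = [f"Weekly Scorecard ({overall}/10)", ""]
    for dim, (score, note) in scores.items():
        lines.append(f"- {dim}: {score}/10. {note}")
    return "\n".join(lines)
```

Everything downstream (the email send, the upgrade recommendations) layers on top of a plain-text body like this one.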
### My Weekly Scorecard (7.8/10)

| Dimension | Score | Note |
|-----------|-------|------|
| Automation | 8/10 | Daily runs with retries. Next: parallel job execution |
| Learning & Memory | 9/10 | Every session compounds. Best dimension in the system |
| Custom Workflows | 7/10 | 10 active. Several manual processes still not converted |
| Ecosystem Reach | 6/10 | New AI capabilities ship weekly. Room to adopt faster |
| Resource Awareness | 7/10 | Token tracking works. Not yet proactive about cost optimization |

Next milestone: 9.0/10 (+2 Ecosystem, +1 Automation)

This score will never be 10. The tools evolve too fast. The point is to keep climbing.

## Technical Architecture

- **Memory Layer:** Three plain-text markdown files. No databases, no APIs. Human-readable, AI-readable, version-controlled.
- **Automation Layer:** Hooks fire on specific events: session-start loads learning files, auto-capture prompts for insights after significant work.
- **Skills Layer:** Ten reusable workflows as slash commands, each built after repeating the same workflow three times.
- **Scheduling Layer:** Five automated jobs M-F: 5am briefing, model updates, industry digest, Friday review, catch-up retries. All with validation gates and duplicate prevention.
- **Weekly Scripts:** Friday review analyzes session patterns, researches top operators, scores my setup across five dimensions, and emails a report with upgrade recommendations.
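The memory layer is small enough to sketch in a few lines. This is an illustrative version, not the actual hook code: the paths match the file structure below, but the function names are mine. One function reads the learning files at session start; the other appends a dated prevention rule whenever an AI mistake is caught, so the next session loads it automatically.

```python
from datetime import date
from pathlib import Path

LEARNING_DIR = Path.home() / ".claude" / "learning"
FILES = ["Working_Preferences.md", "Mistake_Patterns.md", "Session_Learnings.md"]


def load_context(base_dir: Path = LEARNING_DIR) -> str:
    """Session-start: concatenate whichever learning files exist."""
    parts = []
    for name in FILES:
        path = base_dir / name
        if path.exists():
            parts.append(f"# {name}\n{path.read_text()}")
    return "\n\n".join(parts)


def codify_mistake(mistake: str, rule: str, base_dir: Path = LEARNING_DIR) -> None:
    """Append a dated prevention rule to the mistake patterns file."""
    base_dir.mkdir(parents=True, exist_ok=True)
    entry = f"\n## {date.today().isoformat()}: {mistake}\nPrevention: {rule}\n"
    with (base_dir / "Mistake_Patterns.md").open("a") as f:
        f.write(entry)
```

Plain text files and append-only writes are the whole trick: no database to migrate, and every entry is readable by both me and the AI.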
### File Structure

```
~/.claude/
├── CLAUDE.md                      # Master instructions
├── settings.json                  # Hooks configuration
├── hooks/                         # Automated triggers
├── skills/                        # Reusable workflows (10)
├── scripts/
│   ├── daily-briefing.sh          # 5am industry briefing
│   ├── claude-updates.sh          # Model + changelog check
│   ├── agentic-ai-digest.py       # Competitive intel digest
│   ├── friday-weekly-review.sh    # Full weekly analysis
│   └── catchup-missed-jobs.sh     # Retry at 9am, 12pm, 2pm
└── learning/
    ├── Working_Preferences.md     # How I operate
    ├── Mistake_Patterns.md        # AI errors I've caught and codified
    └── Session_Learnings.md       # Running improvement log
```

## What I Learned

The returns started compounding in week two. Three things surprised me.

The biggest win for a busy operator: every week I got actionable tips I never would have found on my own. The system surfaced three things automatically:

- What other top operators were doing that I wasn't. Techniques and workflows I'd never have time to research myself.
- Things I was doing repeatedly that needed to be automated. Patterns I couldn't see because I was too close to the work.
- New model improvements I needed to be aware of. Capabilities that shipped while I was heads-down on other things.

I also learned that most of my improvement came from AI mistakes the system caught and codified on its own. When the AI gets something wrong, the system writes a prevention rule automatically. I don't maintain the file. It maintains itself.

The Friday score made me honest. I couldn't tell myself I was improving when the number said otherwise.

If I were starting over, I'd build the mistake patterns file first. You're debugging and training the AI. Everything else layers on top of that.

## Compare Notes

Curious how your AI usage is actually compounding? I'd love to hear what you'd measure.

Connect on LinkedIn: https://www.linkedin.com/in/navyarehani/

---

Learning loop inspired by Dave Killeen's Dex (https://heydex.ai/).
Scoring framework and automation pipeline built on that foundation.

(c) 2026 Navya Rehani Gupta