About AlignTrue
What
AlignTrue keeps your AI rules in sync across agents, repos, and people.
- Solo devs: One source of truth for Cursor .mdc, AGENTS.md, CLAUDE.md, and whatever else you use (see the sketch below).
- Teams: Shared rules that actually stay aligned across IDEs, agents, and repos, so best practices are practiced and not just vibes.
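To make "one source of truth" concrete, here is a minimal sketch of the sync idea: keep a single canonical rules file and mirror it into the locations each agent reads from. This is an illustration only, not AlignTrue's actual implementation; the file names and paths (rules.md, .cursor/rules/project.mdc) are assumptions.

```python
# Hypothetical sketch, not AlignTrue's real code: mirror one canonical
# rules file into the locations each agent reads from.
from pathlib import Path

SOURCE = Path("rules.md")               # assumed single source of truth
TARGETS = [
    Path(".cursor/rules/project.mdc"),  # Cursor
    Path("AGENTS.md"),                  # agents that read AGENTS.md
    Path("CLAUDE.md"),                  # Claude Code
]

def sync() -> None:
    rules = SOURCE.read_text()
    for target in TARGETS:
        target.parent.mkdir(parents=True, exist_ok=True)
        # Only rewrite files that have drifted from the source.
        if not target.exists() or target.read_text() != rules:
            target.write_text(rules)
            print(f"synced {target}")

if __name__ == "__main__":
    sync()
```

A real tool would also handle per-format differences (for example, .mdc frontmatter) and drift in both directions; the sketch only shows one-way mirroring.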
AlignTrue is MIT licensed, open to contributions, and mostly self-documenting. If something is broken, outdated, or missing, open a PR.
Why
TL;DR
- No more ICs: Most “ICs” will spend more time managing agents than writing raw code. Everyone is a manager now.
- Unlock GDP: Future GDP is constrained more by bad AI usage than by bad models. Make ‘doing it right’ easy.
- Level up humanity: Clear rules for agents also make human collaboration saner and faster, since everyone works from aligned goals and priorities.
AlignTrue is my attempt to remove that binding constraint so we can actually scale with AI instead of fighting it.
Details
I went deep with AI agents (especially when Sonnet 4.5 landed) and burned an irresponsible number of tokens fast. The power is obvious. So is the failure mode.
Most people aren’t failing because AI is weak. They’re failing because their guidance is scattered, inconsistent, and stale. Rules drift. Different agents see different instructions. Teams quietly invent their own “norms” and the AI just amplifies the chaos.
In regulated work, where you can’t ship what you can’t explain, I explored the problem from the interpretability side with GlassAlpha. That surfaced a deeper issue: if your rules aren’t aligned, nothing else is.
So AlignTrue starts further upstream: make it trivial to define, reuse, and sync rules everywhere, so you get leverage instead of chaos.
I believe a meaningful chunk of future GDP growth is locked behind bad AI usage, not bad AI models. AlignTrue is how we fix that binding constraint to unlock what comes next 🚀
Who
I’m Gabe Mays. I love building tools that close the gap between AI hype and real output.