Table of Contents
- MCP: https://gemini-design-mcp.com/ https://github.com/mcp
- Markdown: https://markdown-ui.blueprintlab.io/ (turn LLM responses into real UI)
- Accessibility / vibe coding: https://idea11y.dev/VibeCheck/
- Agents.md: https://www.humanlayer.dev/blog/writing-a-good-claude-md https://agents.md/
- Tools: https://github.com/tobi/qmd (mini CLI search engine for your docs, knowledge bases, meeting notes, whatever; tracks current SOTA approaches while staying fully local)
- Articles:
  - "AI has an accessibility problem: What devs can do about it": https://blog.logrocket.com/ai-has-an-accessibility-problem/
  - Documentation: https://gerireid.com/blog/can-ai-write-accessibility-specs/
  - "Claude Code for writers": https://www.platformer.news/claude-code-for-writers-tips-ideas/
  - "A designer's framework for better AI prompts": https://www.figma.com/blog/designer-framework-for-better-ai-prompts/
1 - Have you ever felt concerned about the size of your AGENTS.md file?
Maybe you should be. A bad AGENTS.md file can confuse your agent, become a maintenance nightmare, and cost you tokens on every request.
So you’d better know how to fix it.
What is AGENTS.md?
An AGENTS.md file is a markdown file you check into Git that customizes how AI coding agents behave in your repository. It sits at the top of the conversation history, right below the system prompt.
Think of it as a configuration layer between the agent’s base instructions and your actual codebase. The file can contain two types of guidance:
- Personal scope: Your commit style preferences, coding patterns you prefer
- Project scope: What the project does, which package manager you use, your architecture decisions
The AGENTS.md file is an open standard supported by many - though not all - tools. Claude Code, for example, uses CLAUDE.md instead.
Why Massive AGENTS.md Files are a Problem
There’s a natural feedback loop that causes AGENTS.md files to grow dangerously large:
1. The agent does something you don't like
2. You add a rule to prevent it
3. Repeat hundreds of times over months
4. The file becomes a "ball of mud"
Different developers add conflicting opinions. Nobody does a full style pass. The result? An unmaintainable mess that actually hurts agent performance.
Another culprit: auto-generated AGENTS.md files. Never use initialization scripts to auto-generate your AGENTS.md. They flood the file with things that are "useful for most scenarios" but would be better progressively disclosed. Generated files prioritize comprehensiveness over restraint.
The Instruction Budget
Kyle from Humanlayer’s article mentions the concept of an “instruction budget”:
Frontier thinking LLMs can follow ~ 150-200 instructions with reasonable consistency. Smaller models can attend to fewer instructions than larger models, and non-thinking models can attend to fewer instructions than thinking models.
Every token in your AGENTS.md file gets loaded on every single request, regardless of whether it's relevant. This creates a hard budget problem:

| Scenario | Impact |
|---|---|
| Small, focused AGENTS.md | More tokens available for task-specific instructions |
| Large, bloated AGENTS.md | Fewer tokens for the actual work; agent gets confused |
| Irrelevant instructions | Token waste + agent distraction = worse performance |
Taken together, this means that the ideal AGENTS.md file should be as small as possible.
Stale Documentation Poisons Context
Another issue for large AGENTS.md files is staleness.
Documentation goes out of date quickly. For human developers, stale docs are annoying, but the human usually has enough built-in memory to be skeptical about bad docs. For AI agents that read documentation on every request, stale information actively poisons the context.
This is especially dangerous when you document file system structure. File paths change constantly. If your AGENTS.md says "authentication logic lives in src/auth/handlers.ts" and that file gets renamed or moved, the agent will confidently look in the wrong place.
Instead of documenting structure, describe capabilities. Give hints about where things might be and the overall shape of the project. Let the agent generate its own just-in-time documentation during planning.
Domain concepts (like "organization" vs "group" vs "workspace") are more stable than file paths, so they're safer to document. But even these can drift in fast-moving AI-assisted codebases. Keep a light touch.
Cutting Down Large AGENTS.md Files
Be ruthless about what goes here. Consider this the absolute minimum:
- One-sentence project description (acts like a role-based prompt)
- Package manager (if not npm; or use corepack for warnings)
- Build/typecheck commands (if non-standard)
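Taken together, a minimal root AGENTS.md might look like this (the project description, package manager, and commands here are illustrative placeholders, not a prescription):

```markdown
# AGENTS.md

This is a React component library for accessible data visualization.

This project uses pnpm workspaces.

Build with `pnpm build`; typecheck with `pnpm typecheck`.

For TypeScript conventions, see docs/TYPESCRIPT.md.
```

A file this size costs almost nothing on each request and leaves the instruction budget free for the task at hand.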
That's honestly it. Everything else should go elsewhere.
The One-Liner Project Description
This single sentence gives the agent context about why they’re working in this repository. It anchors every decision they make.
Example:
This is a React component library for accessible data visualization.
That's the foundation. The agent now understands its scope.
Package Manager Specification
If you're in a JavaScript project and using anything other than npm, tell the agent explicitly:
This project uses pnpm workspaces.
Without this, the agent might default to npm and generate incorrect commands. Corepack is also a good option: it enforces the package manager declared in package.json.
Use Progressive Disclosure
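Corepack keys off the packageManager field in package.json; a minimal sketch (name and version are illustrative):

```json
{
  "name": "my-project",
  "packageManager": "pnpm@9.12.0"
}
```

With corepack enabled, running a mismatched package manager fails loudly, so both humans and agents get corrected even without an AGENTS.md rule.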
Instead of cramming everything into AGENTS.md, use progressive disclosure: give the agent only what it needs right now, and point it to other resources when needed.
Agents are fast at navigating documentation hierarchies. They understand context well enough to find what they need.
Move Language-Specific Rules to Separate Files
If your AGENTS.md currently says:
Always use const instead of let. Never use var. Use interface instead of type when possible. Use strict null checks. …
Move that to a separate file instead. In your root AGENTS.md:
For TypeScript conventions, see docs/TYPESCRIPT.md
Notice the light touch: no "always," no all-caps forcing. Just a conversational reference.
The benefits:
- TypeScript rules only load when the agent writes TypeScript
- Other tasks (CSS debugging, dependency management) don't waste tokens
- The file stays focused and portable across model changes
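For illustration, the extracted docs/TYPESCRIPT.md could simply collect the rules that used to clutter the root file (contents are a sketch):

```markdown
# TypeScript Conventions

- Prefer `const` over `let`; never use `var`.
- Prefer `interface` over `type` where possible.
- Use strict null checks.

For testing conventions, see TESTING.md.
```

The root AGENTS.md now carries a one-line pointer instead of the whole rule set.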
Nest Progressive Disclosure
You can go even deeper. Your docs/TYPESCRIPT.md can reference docs/TESTING.md. Create a discoverable resource tree:
```
docs/
├── TYPESCRIPT.md
│   └── references TESTING.md
├── TESTING.md
│   └── references specific test runners
└── BUILD.md
    └── references esbuild configuration
```
You can even link to external resources: Prisma docs, Next.js docs, etc. The agent will navigate these hierarchies efficiently.
Use Agent Skills
Many tools support “agent skills” - commands or workflows the agent can invoke to learn how to do something specific. These are another form of progressive disclosure: the agent pulls in knowledge only when needed.
We'll cover agent skills in-depth in a separate article.
AGENTS.md in Monorepos
You’re not limited to a single AGENTS.md at the root. You can place AGENTS.md files in subdirectories, and they merge with the root level.
This is powerful for monorepos:
What Goes Where
| Level | Content |
|---|---|
| Root | Monorepo purpose, how to navigate packages, shared tools (pnpm workspaces) |
| Package | Package purpose, specific tech stack, package-specific conventions |
Root AGENTS.md:
This is a monorepo containing web services and CLI tools. Use pnpm workspaces to manage dependencies. See each package’s AGENTS.md for specific guidelines.
Package-level AGENTS.md (in packages/api/AGENTS.md):
This package is a Node.js GraphQL API using Prisma. Follow docs/API_CONVENTIONS.md for API design patterns.
Don't overload any level. The agent sees all merged AGENTS.md files in its context. Keep each level focused on what's relevant at that scope.
Fix A Broken AGENTS.md With This Prompt
If you’re starting to get nervous about the AGENTS.md file in your repo, and you want to refactor it to use progressive disclosure, try copy-pasting this prompt into your coding agent:
I want you to refactor my AGENTS.md file to follow progressive disclosure principles.
Follow these steps:
1. Find contradictions: Identify any instructions that conflict with each other. For each contradiction, ask me which version I want to keep.
2. Identify the essentials: Extract only what belongs in the root AGENTS.md:
- One-sentence project description
- Package manager (if not npm)
- Non-standard build/typecheck commands
- Anything truly relevant to every single task
3. Group the rest: Organize remaining instructions into logical categories (e.g., TypeScript conventions, testing patterns, API design, Git workflow). For each group, create a separate markdown file.
4. Create the file structure: Output:
- A minimal root AGENTS.md with markdown links to the separate files
- Each separate file with its relevant instructions
- A suggested docs/ folder structure
5. Flag for deletion: Identify any instructions that are:
- Redundant (the agent already knows this)
- Too vague to be actionable
- Overly obvious (like "write clean code")
Don’t Build A Ball Of Mud
When you're about to add something to your AGENTS.md, ask yourself where it belongs:

| Location | When to use |
|---|---|
| Root AGENTS.md | Relevant to every single task in the repo |
| Separate file | Relevant to one domain (TypeScript, testing, etc.) |
| Nested documentation tree | Can be organized hierarchically |
The ideal AGENTS.md is small, focused, and points elsewhere. It gives the agent just enough context to start working, with breadcrumbs to more detailed guidance.
Everything else lives in progressive disclosure: separate files, nested AGENTS.md files, or skills.
This keeps your instruction budget efficient, your agent focused, and your setup future-proof as tools and best practices evolve.
2 - Run Claude Code inside my Obsidian
I run Claude Code inside my Obsidian vault through a terminal extension. This lets me treat the AI as a collaborator that can read, write, and traverse my notes.
The structure
My vault follows a light hierarchy:
- 01 Inbox for quick capture
- 02 Journal for reflections and plans
- 03 Garden for permanent, evergreen notes
- 04 Projects for active work, each in its own folder
- 05 Areas for ongoing life contexts
The Garden is where ideas mature. Notes there are atomic (one idea each), opinionated (stating positions rather than describing topics), and linked to each other. They accumulate slowly.
Projects live in separate folders. I can open a terminal in any project folder and give Claude Code the specific context it needs. The AI sees only what’s relevant.
Maps of Content
I maintain a handful of Maps of Content (MOCs) that act as entry points into clusters of ideas. These are pages that link to related notes on a theme: creative work, tools for thought, software philosophy, focus, durability, self-experimentation.
MOCs help me and the AI navigate. When I ask Claude to explore a topic, I can point it to the relevant MOC instead of hoping it finds the right notes through search alone.
How Claude Code fits in
The point is not to have AI write for me. It’s to think alongside something that can hold more context than I can in my head at once.
The terminal interface matters. I’m not pasting notes into a chat window. Claude can use tools: reading files, searching across the vault, writing drafts directly where they belong. It operates inside my system rather than alongside it.
Common patterns:
- Filling gaps: I have scattered thoughts across journal entries and inbox notes. I ask Claude what's missing, what I haven't addressed, where the argument is weak.
- Surfacing connections: It finds relationships between notes I hadn't linked. Sometimes it surfaces tensions or contradictions I'd missed.
- Deepening thinking: I describe a half-formed idea. Claude asks questions, challenges assumptions, helps me see angles I hadn't considered. The goal is sharper thinking, not finished prose.
- Drafting from my material: When I do ask it to write, it's working from my notes, my fragments, my voice. The output is a starting point I'll rewrite.
The AI becomes useful when it has real context and when I stay in the loop. A vault full of linked notes provides the context. Staying critical of the output keeps the thinking mine.
Daily practice
I write daily notes when something needs processing. These go in the Journal, not the Garden. They’re messy, personal, often questions more than answers.
Periodically I review the Journal and promote ideas worth keeping into proper evergreen notes. Claude can help with this: “What themes keep appearing in my recent journal entries?”
What this isn’t
The system is simple. There’s no elaborate tagging taxonomy, no complex automation, no perfect template. I’ve tried those approaches. They create maintenance burden that eventually collapses.
The current setup works because it’s light enough to actually use. Five folders. A few MOCs. Notes that link to each other. An AI that can read and write in place.
For those exploring this
The ideas behind evergreen notes come largely from Andy Matuschak (@andy_matuschak), who has published extensively on the topic at notes.andymatuschak.org. His work on making notes atomic, concept-oriented, and densely linked shaped how I think about the Garden. His notes are themselves an example of the method.
For the underlying methodology, I recommend How to Take Smart Notes by Sönke Ahrens. It explains the Zettelkasten approach that influenced much of modern networked note-taking. The core insight: writing is thinking, and a good note system makes thinking accumulate.
The right setup depends on what you’re trying to do. Mine is optimized for accumulating clear thinking over time and having an AI collaborator that can work with that accumulated context. Yours might need something different.
Start light and add structure only when you feel its absence.
3 - ULTIMATE PROMPT FOR LECTURES
1/ ULTIMATE PROMPT FOR LECTURES:
“Review all uploaded materials and generate 5 essential questions that capture the core meaning.
Focus on:
- Core topics and definitions
- Key concepts emphasized
- Relationships between concepts
- Practical applications mentioned”
2/ THE “5 ESSENTIAL QUESTIONS” PROMPT
Reddit called this a "game changer." It forces NotebookLM to extract pedagogically sound structure instead of shallow summaries:
“Analyze all inputs and generate 5 essential questions that, when answered, capture the main points and core meaning of all inputs.”
3/ STEVEN JOHNSON’S “INTERESTING BITS” PROMPT
NotebookLM’s director tested this on 500,000 words of NASA transcripts. Did 10 hours of manual work in 20 seconds:
“What are the most surprising or interesting pieces of information in these sources? Include key quotes.”
4/ EXTENDED VERSION WITH STEERING:
“I’m interested in writing about [TOPIC].
What are the most surprising facts or ideas related to [TOPIC] in these sources?
Include key quotes. Focus on [SPECIFIC ASPECT], not [OTHER ASPECTS].”
Traditional search can’t surface “interestingness.” This can.
5/ THE QUIZ SHOW FORMAT (Audio Overview)
Students love this. The AI hosts quiz each other and intentionally get answers wrong so corrections stick:
“A quiz show with two hosts. First host quizzes the second on [TOPIC]. 10 questions total. Mix of multiple choice and True/False.
The host gets answers wrong sometimes. The other corrects with right answers. Share results at the end.”
6/ MULTILINGUAL PODCAST HACK
Before official language support existed, users generated podcasts in Spanish, German, Japanese:
“This is the first international special episode of Deep Dive conducted entirely in [Language].
Special Instructions:
- Only [Language] for entire duration
- No English except to clarify unique terms”
7/ PRODUCT MANAGER PERSONA (Official Google)
Transforms documents into decision memos:
“Act as a Lead Product Manager reviewing internal documentation. Ruthlessly scan for actionable insights, ignoring fluff.
Synthesize into “Decision Memo” format:
- User Evidence: Direct quotes indicating user problems
- Feasibility Checks: Technical constraints mentioned
- Blind Spots: What’s missing from source text
Use bullets. If I ask vague questions, force me to clarify.”
8/ SCIENTIFIC RESEARCHER PERSONA (Official Google)
For academics who need methodology over conclusions:
“Act as research assistant for a senior scientist. Tone: strictly objective, formal, precise.
Assume advanced knowledge of [FIELD]. Don’t define standard terminology.
Focus on methodology, data integrity, and conflicting evidence.
Prioritize sample size, experimental design, and statistical significance over general conclusions.
Format with bolded sections:
- Key Findings
- Methodological Strengths/Weaknesses
- Contradictions”
9/ MIDDLE SCHOOL TEACHER PERSONA (Official Google)
Makes dense content accessible:
“Act as an engaging Middle School Teacher. Translate source documents into language a 7th grader understands.
Structure every response:
- The “tl;dr”: One sentence using simple words
- Analogy: Real-world metaphor for the concept
- Vocab List: 3 difficult words defined simply
For dense paragraphs, break into True or False quiz format.”
10/ LITERATURE REVIEW THEMES PROMPT
For researchers synthesizing multiple papers:
“From papers on [TOPIC], identify 5-10 most recurring themes.
For each theme provide:
- Short definition in your own words
- Which papers mention it (with citations)
- One sentence on how it’s treated (debated, assumed, tested)
Present as structured table.”
4 - ultrathink
As someone who's been using it heavily for 9 months, here are my top tips to maximize its potential:

Getting started & configuration
- Customize your status line: use /statusline to show your current model, git branch, and token usage. Keeps you aware of what's going on.
- Learn essential slash commands: /usage (rate limits), /chrome (browser), /mcp (tools), /stats (activity), /clear (fresh start). These are your power moves.
- Use CLAUDE.md for project context: create a CLAUDE.md file in your project root with commands, style guides, and setup instructions. Claude pulls it in automatically.
- Create custom slash commands: turn your repetitive workflows into custom commands by adding markdown files to .claude/commands. Automate the boring stuff.
- Configure allowed tools: use /permissions or edit .claude/settings.json to let Claude use certain tools without asking every time. Saves you a ton of back-and-forth.
- Install the gh CLI: if you're on GitHub, grab the gh command-line tool. Makes it way easier for Claude to create issues and PRs.
- Use terminal aliases: create `alias c='claude'` so you're not typing the full command constantly. Small thing, big quality-of-life improvement.

Prompting & interaction
- Use voice input: talking is faster than typing. Grab a local transcription tool and just talk to Claude. It understands even with typos.
- Break down large problems: don't throw a massive problem at Claude. Split it into smaller pieces and solve them one by one. Works way better.
- Use the "think" keyword: when you want Claude to really dig deep, use "think," "think hard," "think harder," or "ultrathink" to give it more thinking time.
- Provide detailed specs: for bigger tasks, write a proper spec in markdown. More detail upfront means better results. Worth the effort.
- Use @ to reference files: point Claude to specific files with @ instead of just describing them. Clearer, and you get tab auto-completion.
- Interrupt and add context anytime: if Claude's going the wrong way or you think of something important, just type it. Claude will pick it up and adjust on the fly.

Workflow & best practices
- Minimize context: start a fresh conversation for each new task. Long chats with irrelevant stuff actually make Claude perform worse.
- Plan before coding: hit Shift+Tab twice for planning mode. Let Claude map out the solution before writing code. Saves so much time.
- Use git for version control: commit often. If Claude messes up, just git restore and try again with a better prompt. No stress.
- Let Claude handle git operations: ask Claude to write commits and commit messages. It's surprisingly good at it and you'll get better messages.
- Always verify output: check what Claude gives you. Have it write tests, review changes, or create a draft PR. Don't ship blind.
- Use handoff documents for long tasks: for multi-session work, ask Claude to write a handoff doc summarizing what it did, what worked, and what's next. Makes pickups way easier.
- Try test-driven development: have Claude write failing tests first, then code to make them pass. Powerful workflow that leads to better code.

Advanced techniques
- Use subagents for complex problems: for tricky research or investigation, tell Claude to use subagents to verify details. Keeps your main chat clean.
- Create feedback loops: for stubborn bugs, set up a loop where Claude builds, runs, checks output, and tries again on its own. Let it grind.
- Use tmux for interactive CLIs: when working with interactive command-line stuff, use tmux so Claude can send commands and capture output.
- Clone and half-clone conversations: use clone to copy a conversation or half-clone to keep only the recent half. Quick way to manage context.
- Juggle multiple sessions: run multiple Claude instances in different tabs. Focus on a few tasks at a time and switch between them. Solid multitasking approach.
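Two of the configuration tips above can be sketched in shell (the alias target and the command file's name and content are illustrative, not a standard):

```shell
# In ~/.bashrc or ~/.zshrc: a short alias for the claude command
alias c='claude'

# Custom slash commands: markdown files under .claude/commands
# show up as slash commands inside Claude Code
mkdir -p .claude/commands
printf 'Review the staged diff and suggest a commit message.\n' \
  > .claude/commands/commit-msg.md
```

The command file is just markdown, so it can be versioned and shared with the rest of the repo.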