# Setting Up Agent Rules for Team Projects
A rules file used by one developer makes their AI better. A rules file committed to the repo and used by the whole team makes the entire team's AI consistent — eliminating one of the biggest sources of style drift in AI-assisted codebases.
## Why Team Rules Matter
Without shared rules, every developer gets different AI behavior. One person's AI generates class components while another gets functional components. One uses axios, another uses fetch, another uses ky. Shared agent rules eliminate this inconsistency and make code review easier — generated code already follows your patterns before it's reviewed.
## Step 1: Commit Rules to Version Control
Your rules file should live in the repository alongside your code:
```bash
# Stage all AI tool rule files in one commit
git add .cursorrules CLAUDE.md .github/copilot-instructions.md .windsurfrules \
  .clinerules AGENTS.md GEMINI.md .agent/rules/rules.md
git commit -m "chore: add shared AI agent rules"
```
This ensures:
- Every team member gets the same rules automatically on `git clone`
- Rule changes are tracked in Git history with attribution
- New hires inherit constraints from day one
- Rule changes can be reviewed in PRs with full context
## Step 2: Start from Existing Conventions, Not New Ones
The #1 mistake teams make: using the rules file to introduce new conventions instead of codifying existing ones. This creates friction instead of reducing it.
**Start here (facts, not opinions):**
- Tech stack with exact versions
- File structure your team already uses
- Import patterns already in the codebase
- Error handling you've already standardized
- Testing framework and patterns you've already adopted
**Add later (after team alignment):**
- New patterns you're trying to establish
- Architecture decisions still being debated
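To codify the convention you already have rather than the one you prefer, count what the codebase actually does. A rough sketch for a TypeScript project — the `src` path, `*.ts` glob, and the axios/fetch comparison are assumptions; substitute the patterns your team is deciding between:

```bash
# Survey which HTTP-client pattern the codebase already favors.
# Paths, extensions, and library names are assumptions for a TS project.
axios_files=$(grep -rl "from 'axios'" src --include='*.ts' 2>/dev/null | wc -l)
fetch_files=$(grep -rlF 'fetch(' src --include='*.ts' 2>/dev/null | wc -l)
echo "axios: $axios_files files, fetch: $fetch_files files"
```

Whichever pattern wins the count is the one the rules file should state.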
## Step 3: Separate Shared Rules from Personal Preferences
Some rules belong to the repo; others belong to individual developers.
**Shared (commit to Git):**
- Tech stack, dependencies, versions
- Project architecture and file structure
- Code style your linter doesn't already enforce
- Testing requirements
- Security rules
**Personal (local override, gitignored):**
- Verbosity level ("explain your changes briefly")
- Response format ("use bullet points")
- Editor-specific behavior
**How to implement local overrides:**
```bash
# Cursor: create a local-only rule file
mkdir -p .cursor/rules
printf -- '---\nalwaysApply: false\n---\n' > .cursor/rules/personal.mdc
echo ".cursor/rules/personal.mdc" >> .gitignore

# Claude: use the global user config
code ~/.claude/CLAUDE.md  # applies to all projects
```

## Step 4: Make Rule Changes a PR Process

Treat rules files like code — they require review and justification.

**Good PR description for a rule change:**

> "Adding rule to always use Server Components by default. We've been inconsistent across the codebase — 40% of new components are incorrectly adding 'use client' when not needed. This rule prevents that."

**PR checklist for rule changes:**

- [ ] Does the rule fix a real, recurring problem?
- [ ] Is it specific enough to be unambiguous?
- [ ] Does it conflict with any existing rules?
- [ ] Has the AI output been tested with the new rule?

## Step 5: Onboard New Team Members with Rules

Include agent rules in onboarding documentation:

1. Link to the rules file in your README or `CONTRIBUTING.md`
2. Explain which AI tools the team uses and which rules files they read
3. Show how to set up personal local overrides
4. Include agent rules review in the first week's checklist

## Handling Disagreements on Rules

When team members disagree:

| Situation | Resolution |
|-----------|------------|
| Personal preference | Implement as a local override, not a shared rule |
| Inconsistency in the existing codebase | Default to the majority pattern |
| Genuinely disputed practice | Add the rule with a comment explaining the decision |
| Conflicting rules already in place | Fix the contradiction — don't add more |

The decision test: "Would a new hire understand why this rule exists?" If yes, it belongs. If it needs a long explanation, it might not be a rule — it might be architecture documentation.
## Measuring Team-Wide Consistency

After rolling out shared rules, measure:

- **PR review time**: Fewer style comments means rules are working
- **Correction rate**: Track how often each developer corrects AI output
- **Pattern adherence**: Run a monthly check — does AI-generated code match existing patterns?

If different team members still see inconsistent AI behavior for the same prompts, strengthen the rules in the inconsistent area.
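The pattern-adherence check can be as simple as a grep. A sketch for the Server Components rule mentioned earlier — the `src/components` path and `*.tsx` glob are assumptions for a Next.js-style project; adapt both to whichever rule you're auditing:

```bash
# Monthly pattern-adherence check: what fraction of components opt into
# 'use client'? The src/components path and *.tsx glob are assumptions.
total=$(grep -rl . src/components --include='*.tsx' 2>/dev/null | wc -l)
client=$(grep -rlF "'use client'" src/components --include='*.tsx' 2>/dev/null | wc -l)
echo "client components: $client of $total"
```

Track the ratio over time: if it stops falling after the rule ships, the rule isn't reaching the AI output and needs strengthening.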