Agent Rules Best Practices: How to Write Rules That Actually Work

Advanced techniques for keeping your agent rules effective, concise, and maintainable — including what to remove, how to measure impact, and project-type templates.

Agent Rules Team · 2/18/2026 · 5 min read

A well-maintained rules file can cut the time spent correcting AI output by more than half. But as projects grow, rules files frequently become bloated, inconsistent, or stale — and stop working. Here's how to keep yours sharp.


The Goldilocks Problem: Not Too Short, Not Too Long

Context windows in 2025 AI models are enormous (Claude 3.7 supports 200k tokens, GPT-4o 128k). But size still matters: the more context you fill with generic advice, the less attention the model pays to your genuinely project-specific rules.

A well-crafted 400–800 word rules file consistently outperforms a 3,000-word file padded with general best practices.

Remove rules like these — the AI already does them by default:

  • "Write readable code"
  • "Use meaningful variable names"
  • "Handle edge cases"
  • "Follow best practices"

Keep rules like these — genuinely project-specific:

  • "Use date-fns for date formatting, not moment or dayjs"
  • "Import from @/lib/logger — never use console.log in production code"
  • "Use TanStack Query v5 useQuery — not the old v4 useQuery signature"
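In an actual rules file, the high-signal rules above would sit under a themed header rather than scattered among generic advice — something like this (the section name is illustrative):

```markdown
## Libraries
- Dates: use date-fns — not moment or dayjs
- Logging: import from @/lib/logger; never console.log in production code
- Data fetching: TanStack Query v5 useQuery — the v4 signature is gone
```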

The Five Anti-Patterns That Kill Rule Effectiveness

1. Vague rules

Bad: "Write good tests"

Good: "Write unit tests with Vitest using the Arrange-Act-Assert pattern. Mock external services with vi.mock(). Every public function needs at least a happy path and an error case test."

The test: could a competent developer unfamiliar with your project implement this rule correctly? If not, it needs more specificity.

2. Contradictory rules

Conflicting instructions cause the model to pick one arbitrarily, often the wrong one. Example:

"Always use Server Components" (Section 1)

```tsx
// BadExample.tsx — a code sample elsewhere in the same rules file:
// a component that needs client-side state, shown without 'use client'
import { useState } from "react";

export function BadExample() {
  const [open, setOpen] = useState(false); // useState only runs on the client
  return <button onClick={() => setOpen(!open)}>Toggle</button>;
}
```

The stated rule and the code sample pull in opposite directions, and the model will follow one of them arbitrarily.

Audit your rules periodically for contradictions, especially after adding rules to fix specific incidents.

3. Stale rules

Referencing deprecated APIs, old library versions, or removed patterns actively causes the AI to produce broken code. Stale rules are worse than no rules.

Set a calendar reminder to review your rules file when you:

  • Upgrade a major dependency
  • Refactor an architectural pattern
  • Remove a library from the project

4. Tool duplication

If your linter or formatter already enforces a rule, don't put it in your agent rules. The AI doesn't need to know Prettier will auto-format — it needs to know your business logic conventions.

5. Instruction overload

More than ~10 rules in a single category causes attention degradation. Group thematically and keep each section under 10 bullet points. Use headers to help the model navigate.
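A grouped layout might look like this (the section names are only illustrative):

```markdown
## Components
- Functional components, named exports
- (at most ~10 bullets per section)

## Data fetching
- TanStack Query v5 — never fetch in useEffect

## Testing
- Vitest, Arrange-Act-Assert, mock with vi.mock()
```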


Measuring Effectiveness

How do you know if your rules are actually working? Track four signals:

| Signal | Target | How to measure |
| --- | --- | --- |
| Correction rate | Decreasing over time | Count manual edits per AI session |
| Output consistency | Same rules → same output | Compare two team members' AI output for the same prompt |
| Import accuracy | No wrong import paths | PR review scan |
| Pattern adherence | Follows project conventions | Code review checklist |

If correction rate isn't decreasing, your rules aren't capturing the actual mistakes. Read your corrections — each one is a missing rule.


Rule Templates by Project Type

Frontend App (React/Next.js)

```markdown
- Framework: Next.js [version] App Router
- Component pattern: Functional components, named exports, FC type
- State: [Zustand/Jotai/Redux] — explain pattern
- Styling: Tailwind CSS — use cn() for conditionals
- Data fetching: [TanStack Query/Server Components/SWR]
- Routing: App Router only — no Pages Router patterns
```

Backend API (Node.js)

```markdown
- Framework: [Express/Fastify/Hono] [version]
- Validation: Zod at all API boundaries
- Error response format: { error: { code, message }, status }
- Auth: [JWT/session] — explain middleware chain
- ORM: [Drizzle/Prisma] [version] — explain query pattern
- Testing: [Vitest/Jest] with supertest for integration tests
```

Full-Stack App

Combine both, plus:

```markdown
- Client/server boundary: 'use client' required for any browser API
- Data mutations: Server Actions from server/ directory
- Shared types: packages/types or lib/types — import from there
- Never import server-only modules in client components
```

Keeping Rules Fresh

Treat your rules file with the same care as critical documentation:

  1. Review on every major dependency upgrade — especially framework upgrades
  2. Add rules immediately after correcting the AI — while the context is fresh
  3. Archive rules for removed features — comment them out with a date, don't delete immediately
  4. Version your rules in Git — review rule changes in PRs as you would any other code
  5. Keep a changelog header — one line per major change makes audits easy:
```markdown
<!-- AI Rules Changelog
2026-02: Updated for Next.js 15 App Router, removed class component patterns
2025-11: Added Drizzle ORM patterns, removed Prisma references
2025-08: Initial rules file
-->
```