# Add a New Feature
A client feature request lands in your project management tool on Monday morning. By Friday it needs to be in staging. That’s the rhythm of outsourcing — clear requirements, tight windows, no room for exploratory coding. Sun Agent Kit structures the entire feature development cycle: plan the approach against the real codebase, implement in small verified steps, test it, and get a review before it hits the PR.
## Overview

- **Goal:** Deliver a client-requested feature from ticket to reviewed, tested, mergeable code
- **Time:** 30–120 minutes (vs 4–16 hours manual)
- **Agents used:** planner, implementer, tester, reviewer
- **Commands:** `/sk:plan`, `/sk:cook`, `/sk:test`, `/sk:code-review`
## Prerequisites
- Sun Agent Kit installed with the project previously indexed
- A clear feature description — ticket text, acceptance criteria, or a client message works
- The project compiles and existing tests pass before you start (run `npm test` to verify)
- A feature branch checked out (`git checkout -b feature/your-feature-name`)
## Step-by-Step Workflow

### Step 1: Plan the Feature Against the Real Codebase

```shell
/sk:plan "add a dark mode toggle to the user dashboard — persists the preference in the database, applies via a CSS class on the root element, respects the system OS preference on first visit"
```

**What happens:** The agent:
- Analyzes the codebase to find relevant components, hooks, endpoints, and database tables
- Writes an implementation plan with ordered steps (DB migration, API, hook, UI component)
- Identifies files that import affected modules and flags them for audit
- Saves the plan to `plans/` for review before implementation
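For orientation, a saved plan file might look roughly like the sketch below. The file name matches the one referenced later in this guide, but the section layout is illustrative, not the kit's actual output format:

```
# plans/phase-dark-mode.md (illustrative)

## Step 1: DB migration
Add a nullable `theme` column to `user_preferences` ("light" | "dark" | "system").

## Step 2: API
Extend the preferences PATCH endpoint to accept and persist `theme`.

## Step 3: Hook
Extend the frontend theme hook: an explicit preference wins, otherwise follow the OS setting.

## Step 4: UI
Add the toggle component and mount it in the dashboard layout.

## Audit
Components that import the theme hook, listed for review.
```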
### Step 2: Implement the Feature

```shell
/sk:cook "implement dark mode toggle per the plan in plans/phase-dark-mode.md"
```

**What happens:** The agent follows the plan step by step:
- Creates a database migration to add the new column
- Updates the API endpoint to accept and persist the new field
- Extends the frontend hook to support all modes and audits dependent components
- Creates the UI component and integrates it into the layout
- Runs typecheck to verify zero compilation errors
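The hook's "respect the system preference on first visit" behavior reduces to a small piece of pure logic. The sketch below is a hedged illustration of that logic: `resolveTheme` and `ThemePreference` are hypothetical names, not part of Sun Agent Kit or its generated code.

```typescript
// Hypothetical sketch of the theme-resolution rule the hook might implement.
type ThemePreference = "light" | "dark" | "system";

// An explicit stored preference wins; "system" (or no stored value at all,
// e.g. a first visit) falls back to the OS setting.
function resolveTheme(
  stored: ThemePreference | null,
  systemPrefersDark: boolean
): "light" | "dark" {
  if (stored === "light" || stored === "dark") return stored;
  return systemPrefersDark ? "dark" : "light";
}

// In the browser, systemPrefersDark would come from:
//   window.matchMedia("(prefers-color-scheme: dark)").matches
```

Keeping this rule in one pure function makes both the hook and its tests simpler, since the OS query can be passed in rather than mocked.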
### Step 3: Run the Test Suite

```shell
/sk:test "test the dark mode feature: theme persistence, API endpoint, hook behavior, and the system preference detection"
```

**What happens:** The agent:
- Writes test cases covering the API endpoint, hook behavior, and UI component
- Runs the new tests alongside the existing test suite to check for regressions
- Reports coverage for the changed files
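The API-endpoint tests mostly exercise input validation: the PATCH handler should persist only recognized theme values. A hedged sketch of that kind of unit, assuming a hypothetical `validateThemePayload` helper (not a kit API):

```typescript
// Illustrative payload validation for a PATCH /preferences-style endpoint.
// Returns the validated theme string, or null if the body is unusable.
const VALID_THEMES = new Set(["light", "dark", "system"]);

function validateThemePayload(body: unknown): string | null {
  if (typeof body !== "object" || body === null) return null;
  const theme = (body as Record<string, unknown>).theme;
  return typeof theme === "string" && VALID_THEMES.has(theme) ? theme : null;
}
```

Small, dependency-free units like this are what keeps generated tests meaningful rather than mock-heavy.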
### Step 4: Request a Code Review

```shell
/sk:code-review "review the dark mode implementation for code quality, edge cases, and anything I should fix before the PR"
```

**What happens:** The agent:
- Reviews each changed file for logic correctness, security, and edge cases
- Flags potential issues (e.g., FOUC on server-side render, missing loading states)
- Saves a review report to `plans/reports/` with categorized findings
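The FOUC finding has a well-known shape: on a server-rendered page, the theme class must be applied before first paint, typically via a tiny inline script in `<head>`. The sketch below builds such a script; it assumes the preference is mirrored to `localStorage` for synchronous pre-paint access (an assumption on top of this guide's database persistence, and a common companion pattern).

```typescript
// Hedged sketch of a standard FOUC fix: an inline <head> script that applies
// the theme class before first paint. THEME_KEY, the class names, and the
// localStorage mirror are assumptions, not part of the kit's output.
const THEME_KEY = "theme";

function themeInitScript(): string {
  // The returned string would be embedded as <script>{...}</script> in the
  // document head, ahead of any rendered content.
  return (
    `(function(){try{` +
    `var t=localStorage.getItem(${JSON.stringify(THEME_KEY)});` +
    `if(!t||t==="system"){t=matchMedia("(prefers-color-scheme: dark)").matches?"dark":"light";}` +
    `document.documentElement.classList.add(t);` +
    `}catch(e){}})();`
  );
}
```

Because the script runs synchronously before hydration, the server-rendered HTML never paints with the wrong theme.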
### Step 5: Apply Review Feedback and Ship

```shell
/sk:cook "apply the 2 review fixes: prevent flash of unstyled content on initial render, add loading state to the theme toggle during the PATCH request"
```

**What happens:** The agent applies each fix from the review report, then re-runs the test suite to confirm no regressions.
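The loading-state fix boils down to one pattern: set the busy flag before the request and clear it in `finally` so it never sticks after a failure. In this hedged sketch, `saveTheme` and `setLoading` are hypothetical callbacks (e.g. a fetch wrapper and a React state setter), not kit APIs.

```typescript
// Illustrative loading-state handling: the toggle is marked busy for the
// whole lifetime of the PATCH request, including the failure path.
async function toggleTheme(
  next: "light" | "dark",
  saveTheme: (theme: string) => Promise<void>,
  setLoading: (busy: boolean) => void
): Promise<void> {
  setLoading(true); // disable the toggle / show a spinner
  try {
    await saveTheme(next); // e.g. PATCH /api/preferences
  } finally {
    setLoading(false); // always clears, even if the request rejects
  }
}
```

The `finally` block is the important part: a loading flag cleared only on success leaves the UI permanently disabled after a failed request.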
## Complete Example: Client Wants Dark Mode Toggle for Their Dashboard

**Scenario:** The client’s design team sends a Figma link and a ticket: “Users have requested dark mode. Toggle in the top navbar, persists to their profile, defaults to system preference.” The feature is due in staging by Thursday. It’s Tuesday.

**Tuesday morning — full feature cycle:**
```shell
# Step 1: Understand what's already there before writing anything
/sk:scout "what theming or CSS variable infrastructure already exists in this project? Any partial dark mode work? Check the design system, globals.css, and any theme context."

# Step 2: Plan with that context in mind
/sk:plan "dark mode toggle: DB persistence to user_preferences, CSS variable-based theming, system preference default, toggle in DashboardLayout navbar, smooth transition on switch"

# Step 3: Implement per the plan
/sk:cook "implement dark mode per plans/phase-dark-mode.md — follow the plan exactly, don't add anything not in the plan"

# Step 4: Test thoroughly — client will QA this directly
/sk:test "full test coverage for dark mode: API, hook, component, SSR flash prevention, accessibility"

# Step 5: Review before the PR
/sk:code-review "review dark mode implementation — focus on edge cases: users with no preference set, users switching from system to explicit, the SSR hydration path"

# Step 6: Commit and push
/sk:git cm
```
**Result:** The feature is in staging Wednesday afternoon, one day ahead of the deadline. The client reviews it Thursday morning and approves. Manual approach estimate: 2–3 days.

## Time Comparison
| Phase | Manual | With Sun Agent Kit |
|---|---|---|
| Codebase analysis before starting | 1–2 hours | minutes |
| Implementation plan | 30–60 min | minutes |
| Core implementation | 2–6 hours | 20–60 min |
| Writing tests | 1–2 hours | minutes |
| Code review + fix cycle | 1–2 hours | minutes |
| **Total** | **4–16 hours** | **30–120 minutes** |
## Best Practices

### 1. Always run `/sk:scout` before `/sk:plan` for unfamiliar areas ✅

The plan is only as good as the context. If you’re adding to a module you haven’t touched, scout it first. A 5-minute scout prevents an hour of rework when the plan doesn’t account for existing infrastructure.
### 2. Keep the cook prompt scoped to the plan ✅

Tell `/sk:cook` to follow the plan file. If you describe the feature verbally instead, it may make different architectural decisions than the plan prescribed — leading to drift between the plan and the code.
### 3. Fix review feedback before opening the PR ✅

The reviewer agent’s output is a pre-PR gate, not optional commentary. Fix the flagged issues even if they seem minor — “non-blocking” issues that reach a client review look like incomplete work.
### 4. Letting `/sk:cook` add unrequested extras ❌

If the plan says “add a theme toggle” and cook produces a full theming system with 6 new files, run `/sk:cook "implement only what is in the plan, nothing extra"`. Scope creep in implementation creates scope creep in review and testing.
### 5. Skipping `/sk:test` after the review fixes ❌

Review fixes change code. Changed code can break tests. Always run `/sk:test` after applying review feedback, even if the fixes look trivial.
## Troubleshooting

**Problem:** `/sk:cook` diverges from the plan and adds extra scope

**Solution:** Reference the plan file explicitly and be specific: `/sk:cook "implement only the steps in plans/phase-dark-mode.md step 2 — nothing else"`.

**Problem:** `/sk:test` writes tests that mock everything and test nothing meaningful

**Solution:** Guide the test agent: `/sk:test "write integration tests that test the actual database write and read for theme preference — no mocking the DB layer"`.

**Problem:** `/sk:code-review` flags an issue but doesn’t suggest how to fix it

**Solution:** Ask directly: `/sk:ask "how should I fix the FOUC issue flagged in plans/reports/review-dark-mode.md — what's the standard pattern for this in Next.js?"`.

**Problem:** The feature works locally but breaks in the CI build

**Solution:** Run `/sk:fix "CI build failing after dark mode feature — error: [paste error]"`. The agent handles environment differences between local and CI.
## Next Steps
- **Fix Bugs Systematically** — for when a feature introduces regressions that need structured investigation
- **Build a REST API** — when the feature requires a new API layer to be built first
- **Implement Authentication** — for features that depend on user identity and authorization
**Key takeaway:** Features don’t slip deadlines because engineers are slow — they slip because unplanned implementations hit unexpected complexity. `/sk:plan` eliminates surprises before a single line is written.