Subagents, Skills, and Hooks: Structuring Claude Code for a Real Engineering Team
Claude Code becomes dramatically more useful when you configure it properly for your team. Here's how I use subagents, skills, and hooks to turn it from a single-player tool into a team-level force multiplier.
Most teams using Claude Code are using maybe 30% of what it can do. They open it in a project, type a task, get a result, close it. That works, and it is better than not using it, but it misses the features that turn the tool from a single-player assistant into a team-level multiplier. The three I install first on every engagement are subagents, skills, and hooks. This article is the operational guide to all three: what they are, when I reach for them, and the specific patterns that compound.
If you have read A Fractional CTO's Claude Code Playbook, this is the layer that goes on top. The playbook is how to drive the tool well. This article is how to configure it so that every member of your team drives it well without having to re-learn the same lessons.
The three features, briefly
Subagents are delegated Claude instances that Claude Code can spawn to handle part of a task in isolation. The parent Claude gives the subagent a scoped job, the subagent works on it with its own context window, and the result comes back to the parent. Subagents are the way you handle tasks that would otherwise blow out the main context.
Skills are reusable, packaged sets of instructions that Claude can load when a task matches the skill's description. A skill is a folder with a SKILL.md file that tells Claude what to do for that kind of task, plus any scripts or assets the skill needs. Skills are the way you codify your team's specific practices into something Claude will follow consistently.
Hooks are scripts that run at specific points in Claude Code's lifecycle — before a tool call, after a tool call, when a session starts, when a session ends. Hooks are the way you enforce policies (block destructive commands), gather telemetry (log every tool use), or automate routine steps (run the linter after every edit).
Each of these is valuable on its own. The combination is where the leverage lives.
Subagents: when and how
The main reason to use a subagent is context management. Claude Code has a generous context window, but any non-trivial task can burn through it if the model has to read dozens of files, run multiple exploratory commands, and track the results. Subagents let the parent delegate "go figure out X" to a child, get back a summary, and preserve the parent's context for the main task.
Concrete patterns I use constantly:
Codebase exploration. "Subagent: find every place in the codebase where we handle authentication and give me a short summary of each." The subagent does the grep, reads the files, and returns a half-page summary. The parent continues the main task with the summary in context instead of the twenty files.
Test diagnosis. "Subagent: this test is failing. Figure out why and report back." The subagent runs the test, reads the stack trace, reads the relevant files, and returns an analysis. The parent then decides what to do without having to hold the debug trail in context.
Parallel work. When a task naturally splits into independent parts, spawn multiple subagents in parallel. "Subagent A: update the backend route. Subagent B: update the frontend call. Subagent C: write the integration test." Each gets its own context, they run in parallel, and the parent coordinates.
Verification. After the main Claude makes a change, spawn a subagent to check it. "Subagent: review this diff against the test suite and report any issues." This is a cheap way to add a second pass without the parent having to switch mental modes.
The rule of thumb: if a task would require the main Claude to read more than five or six files just to understand the context, it is probably a subagent job.
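Beyond ad hoc delegation, Claude Code lets you define named subagents as markdown files in the project's .claude/agents/ directory, so the same scoped roles are available to everyone. A minimal read-only explorer might look like this (the name, description, and tool list are illustrative, not a prescribed set):

```markdown
---
name: code-explorer
description: Read-only codebase exploration. Use for "find every place
  where X happens" questions that would otherwise flood the main context.
tools: Read, Grep, Glob
---

You are a read-only codebase explorer. Given a question about the
codebase, find the relevant files, read them, and return a summary of
no more than half a page: file paths, one-line descriptions of each,
and anything surprising. Do not edit files. Do not run shell commands.
```

Restricting the tool list is the point: an explorer that cannot write or run commands is safe to spawn liberally.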
Skills: codifying team practice
A skill is a folder with a SKILL.md file, and optionally additional assets (scripts, templates, reference docs). The SKILL.md tells Claude what the skill is for and how to execute it. When Claude encounters a task that matches the skill's description, it loads the skill and follows the instructions.
The value is consistency. Without skills, every engineer on the team has to individually tell Claude how to do team-specific things. With skills, the instructions are written once and followed everywhere.
Skills I install at nearly every engagement:
PR creation skill. A SKILL.md that encodes the team's PR conventions: title format, description template, required checklist items, labels to apply, reviewers to request. When an engineer says "make a PR for this," Claude loads the skill and produces a PR that matches the team's standards.
Code style skill. "When editing files in this codebase, follow these conventions: [list]." Naming rules, import ordering, error handling patterns, comment style. Saves every engineer from having to re-explain these to Claude and prevents drift across authors.
Deployment skill. A skill that walks Claude through the team's deployment process: check CI, tag the release, run the migration, monitor the dashboards. For routine deployments, the engineer says "deploy" and Claude handles the checklist with appropriate pauses for human confirmation.
Incident response skill. When Claude is helping on an incident, it follows a specific checklist: capture the current state, check the relevant dashboards, identify the last known good version, propose a rollback, do not modify state without explicit confirmation.
Onboarding skill. "When a new engineer asks about the codebase, walk them through the architecture in this specific order, point them at these specific files, explain the conventions in this specific way." Gives new hires a consistent tour regardless of which engineer is pairing with them.
A good skill is tight, specific, and matches a real recurring need. A bad skill is a dumping ground of general advice. If a skill is longer than a page or two, it probably needs to be split.
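As a sketch, the PR creation skill described above might live at .claude/skills/create-pr/SKILL.md. The conventions shown here are placeholders for your team's own, and the referenced template files are assumed to sit alongside the skill:

```markdown
---
name: create-pr
description: Create a pull request following the team's conventions.
  Use whenever the user asks to open, make, or create a PR.
---

# Creating a PR

1. Confirm the branch is pushed and the relevant tests pass locally.
2. Title format: `<type>(<scope>): <summary>`, e.g. `fix(auth): handle expired tokens`.
3. Fill the description from the template in ./pr-template.md, including
   the testing checklist. Never leave a checklist item unchecked silently.
4. Apply labels per ./labels.md and request review from the code owners
   of the touched directories.
```

Note the description field does double duty: it is how Claude decides the skill matches the task, which is why vague descriptions cause skills to never fire.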
Hooks: enforcing the non-negotiables
Hooks are scripts that Claude Code runs at specific lifecycle events. The main ones I use:
Pre-tool hook: block destructive commands. Before Claude runs a shell command, a hook inspects it. If it matches a destructive pattern (rm -rf, git reset --hard, DROP, DELETE FROM without a WHERE), the hook blocks it and forces Claude to ask for explicit confirmation. This has prevented real accidents on real client repos.
Pre-tool hook: require branch check. Before Claude writes to the filesystem, a hook verifies the current git branch is not main. If it is, the hook blocks the write and tells Claude to create a feature branch first. This makes the "never push to main" guardrail enforceable instead of aspirational.
Post-tool hook: run the linter. After Claude edits a file, a hook runs the project's linter on the changed file. If the linter fails, Claude sees the output and fixes the issue in the next turn. The team's style standards become something Claude enforces on itself.
Post-tool hook: run the tests. After Claude finishes a set of changes, a hook runs the relevant test suite. If tests fail, Claude sees them and iterates. This closes the loop before the human reviewer gets involved.
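Post-tool hooks like the two above are registered in the project's shared settings file. At the time of writing, Claude Code reads hook configuration from .claude/settings.json in roughly this shape; the matcher values are real event and tool names, but the script paths are illustrative:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/lint-changed.sh" }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": ".claude/hooks/run-tests.sh" }
        ]
      }
    ]
  }
}
```

Putting the fast linter on PostToolUse and the slower test suite on the end-of-turn Stop event is also how you keep hooks from making the tool sluggish.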
Session-start hook: load project context. When a new Claude Code session starts, a hook loads the relevant context files, pulls the latest from main, and summarizes recent changes. The engineer drops into a session that already knows the current state.
Session-end hook: log the work. When a session ends, a hook writes a summary of what happened — tools used, files changed, cost consumed — to a team log. Over time, this log is the data you use to improve your practices.
Hooks are the least glamorous of the three features and they are often the highest leverage. They turn policy into enforcement. A team rule that lives only in a doc is a suggestion; a team rule that runs as a hook is a guarantee.
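As a concrete sketch of the destructive-command guard, here is a minimal pre-tool hook script in Python. It assumes the documented hook contract at the time of writing (the event arrives as JSON on stdin; exiting with code 2 blocks the tool call and feeds stderr back to Claude), and the pattern list is a starting point to extend for your stack, not a complete blocklist:

```python
import json
import re
import sys

# Commands that should never run without explicit human confirmation.
# Deliberately conservative: false positives just prompt a question.
DESTRUCTIVE_PATTERNS = [
    r"\brm\b.*\s-\w*r\w*f",             # rm -rf and variants
    r"\brm\b.*\s-\w*f\w*r",             # rm -fr
    r"\bgit\s+reset\s+--hard\b",
    r"\bgit\s+push\s+.*--force\b",      # also catches --force-with-lease
    r"\bdrop\s+(table|database)\b",
    r"\bdelete\s+from\s+\w+\s*;?\s*$",  # DELETE FROM with no WHERE clause
]

def is_destructive(command: str) -> bool:
    """Return True if the shell command matches a destructive pattern."""
    return any(re.search(p, command, re.IGNORECASE) for p in DESTRUCTIVE_PATTERNS)

def main() -> int:
    """Read the hook event from stdin; return 2 to block the tool call."""
    try:
        event = json.load(sys.stdin)
    except ValueError:
        return 0  # malformed event: let it through rather than break the session
    command = event.get("tool_input", {}).get("command", "")
    if is_destructive(command):
        print(f"Blocked destructive command: {command!r}. "
              "Ask the human for explicit confirmation first.", file=sys.stderr)
        return 2  # blocking exit code: Claude sees the stderr message
    return 0

# When installed as a hook script, finish with: sys.exit(main())
```

Registered under the PreToolUse event with a Bash matcher in .claude/settings.json, this runs before every shell command Claude attempts, which is what makes the guardrail a guarantee rather than a suggestion.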
Putting them together
The combination of the three is where the compounding happens. A concrete example:
A team uses a deployment skill that walks Claude through the release process. The skill says "spawn a subagent to run the full test suite in parallel while Claude checks the dashboards." The session has hooks installed: pre-tool hooks block any destructive action without confirmation, post-tool hooks log every action to the audit trail, and a session-end hook posts a summary to the team's Slack channel.
The result: any engineer on the team can invoke the deployment skill and Claude produces a consistent, safe, auditable deployment. The skill encodes the team's knowledge, the subagent provides the parallelism, and the hooks provide the safety and the trail. No engineer had to remember how to do any of it — the tool remembered.
This is the level at which Claude Code becomes a team tool instead of a personal one.
The mistakes I see
Skipping this layer entirely. The most common mistake is using Claude Code without any team-level configuration. Every engineer drives it their own way. The team still gets individual productivity gains but leaves the compounding on the table.
Too many skills. I have seen teams install twenty skills, most of which Claude never loads because the descriptions are vague or overlapping. Better to have three good skills than twenty mediocre ones.
Hooks that do too much. A hook that runs a 90-second test suite after every file edit makes the tool unusably slow. Keep hooks fast. If something takes more than a few seconds, run it once at the end of the task instead of on every tool call.
Subagents for trivial work. Spawning a subagent for a task the main Claude could do in one turn is pure overhead. Subagents are for context management, not for cleverness.
No versioning of the configuration. The skills, hooks, and CLAUDE.md for a project should be checked into the repo and reviewed like any other code. Unversioned configuration is a team-level regression waiting to happen.
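In practice, checked-in configuration means a repo layout along these lines. The directory names follow Claude Code's conventions at the time of writing; the specific agent, skill, and hook files are examples:

```
repo/
├── CLAUDE.md                    # project context, reviewed like code
└── .claude/
    ├── settings.json            # shared hooks and permissions
    ├── settings.local.json      # personal overrides, gitignored
    ├── agents/
    │   └── code-explorer.md
    ├── skills/
    │   └── create-pr/
    │       └── SKILL.md
    └── hooks/
        └── block-destructive.py
```

Everything except the personal settings file goes through the same PR review as application code, which is what makes configuration changes visible and reversible.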
The weekly rhythm
I ask teams to do a fifteen-minute weekly review of their Claude Code configuration. Pick one skill that fired this week and see whether it did the right thing. Pick one hook and see whether it blocked anything legitimate. Add one new skill for a pattern that came up twice this week. Remove one skill that never fires. Over a few months, the configuration converges to something that reflects the team's actual practice, not a theoretical best practice.
Counterpoint: do not over-engineer the configuration
A warning. I have watched teams spend more time configuring Claude Code than using it. The skills and hooks layer is meant to encode practices you already have, not to create new complexity for its own sake. Start with two skills and three hooks. Add more only when a specific recurring friction justifies it.
Your next step
This week, install one skill and one hook. For the skill, pick the most frequently repeated instruction you give Claude in your project and encode it. For the hook, pick the most dangerous operation you want to prevent and block it. That is enough to start compounding.
Where I come in
Configuring Claude Code well at the team level — skills, hooks, subagent patterns, CLAUDE.md — is a common 1-week engagement at the start of an AI-native dev stack adoption. By the end, the team has a configuration that encodes their practices and compounds over time. Book a call if your team is using Claude Code but has not yet invested in the team-level configuration layer.
Related reading: A Fractional CTO's Claude Code Playbook · CLAUDE.md Is the New README · The 2026 AI-Native Dev Stack
Want Claude Code configured for your team? Book a call.
Get in touch →