Code Agent Skills

Reference for aitasks skills across supported code agents

aitasks provides code agent skills that automate the full task lifecycle. Claude Code is the source of truth (/aitask-*); Gemini CLI and OpenCode use the same slash-command style, while Codex CLI wrappers use $aitask-*.

Supported agents: Claude Code, Codex CLI, Gemini CLI, and OpenCode (availability depends on which wrappers are installed).

Start here: /aitask-pick is the hub skill — it drives the full pick → plan → implement → review → archive lifecycle. Read it first, then branch by use case: task creation with /aitask-explore, batch or remote runs with /aitask-pickrem or /aitask-pickweb, and review with /aitask-review and /aitask-qa.

Multi-agent support: Codex CLI and Gemini CLI wrappers are installed in .agents/skills/; OpenCode wrappers are installed in .opencode/skills/. Invoke skills with /aitask-pick, /aitask-create, etc. in Claude Code, Gemini CLI, and OpenCode, or with $aitask-pick, $aitask-create, etc. in Codex CLI. Run ait setup to install the wrappers detected for your agent.

Interactive Codex skills require plan mode, because request_user_input is only available there. OpenCode uses native skill and native ask, so this caveat does not apply there. However, if OpenCode is launched in plan mode, its read-only tool restriction may cause task locking to be skipped — see Known Issues.
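For instance, the same skill is invoked like this in each agent (illustrative transcript; the exact skill set depends on which wrappers ait setup installed):

```
# Claude Code, Gemini CLI, OpenCode — slash-command style
/aitask-pick
/aitask-create

# Codex CLI — dollar-prefixed wrappers
$aitask-pick
$aitask-create
```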

Run from the project root. aitasks expects to be invoked from the directory containing .git/ — the root of your project’s git repository. All skills use relative paths (e.g., ./.aitask-scripts/aitask_ls.sh) and expect to start there. Launching an agent from a subdirectory can break path-based permissions and wrapper assumptions, and in Claude Code it will also trigger repeated permission prompts. Always cd there before launching your agent.
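If your shell is currently somewhere inside the repository, the pre-launch step can be sketched like this (only `git rev-parse` is assumed; the agent launch line is illustrative):

```shell
# Jump to the repository root (the directory containing .git/) so relative
# paths like ./.aitask-scripts/aitask_ls.sh resolve correctly.
# Falls back to the current directory if we are not inside a git repo.
root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
cd "$root"
# Launch the agent of your choice from here, e.g.:
#   claude      # or: codex / gemini / opencode
```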

Skill Overview

Task Implementation

Core workflow skills for picking and implementing tasks.

| Skill | Description |
| --- | --- |
| /aitask-pick | The central skill — select and implement the next task (planning, branching, implementation, archival) |
| /aitask-pickrem | Autonomous remote variant of /aitask-pick — zero interactive prompts, profile-driven |
| /aitask-pickweb | Sandboxed variant for Claude Code Web — local metadata storage, requires follow-up with /aitask-web-merge |
| /aitask-web-merge | Merge completed Claude Web branches to main and archive task data |

Task Management

Create, organize, and wrap tasks.

| Skill | Description |
| --- | --- |
| /aitask-create | Create tasks interactively via code agent prompts |
| /aitask-explore | Explore the codebase interactively, then create a task from findings |
| /aitask-fold | Identify and merge related tasks into a single task |
| /aitask-revert | Revert changes associated with completed tasks — fully or partially |
| /aitask-wrap | Wrap uncommitted changes into an aitask with retroactive documentation |

Contributions

Import external work and contribute changes back.

| Skill | Description |
| --- | --- |
| /aitask-pr-import | Import a pull request as an aitask with AI-powered analysis and implementation plan |
| /aitask-contribute | Turn local changes into structured contribution issues for upstream repos |
| /aitask-contribution-review | Analyze contribution issues for duplicates and overlaps, then import as tasks |

Code Review

Review code and manage review guides.

| Skill | Description |
| --- | --- |
| /aitask-explain | Explain files: functionality, usage examples, and code evolution traced through aitasks |
| /aitask-qa | Run QA analysis on any task — discover tests, run them, identify gaps, and create follow-up test tasks |
| /aitask-review | Review code using configurable review guides, then create tasks from findings |
| /aitask-reviewguide-classify | Classify a review guide by assigning metadata and finding similar guides |
| /aitask-reviewguide-merge | Compare two similar review guides and merge, split, or keep separate |
| /aitask-reviewguide-import | Import external content as a review guide with proper metadata |

Configuration & Reporting

Settings, statistics, and model management.

| Skill | Description |
| --- | --- |
| /aitask-refresh-code-models | Research latest AI code agent models and update model configuration files |
| /aitask-add-model | Register a known code-agent model in models_.json and optionally promote it to default |
| /aitask-stats | View completion statistics |
| /aitask-changelog | Generate changelog entries from commits and plans |
| Verified Scores | How skill satisfaction ratings accumulate into verified model scores |

Command Reference


/aitask-pick

Select and implement the next task — the central development skill

/aitask-pickrem

Pick and implement a task in remote/non-interactive mode — zero prompts, profile-driven

/aitask-pickweb

Pick and implement a task on Claude Code Web — sandboxed skill with local metadata storage

/aitask-web-merge

Merge completed Claude Web branches to main and archive task data

/aitask-explore

Explore the codebase interactively, then create a task from findings

/aitask-pr-import

Analyze a pull request and create an aitask with implementation plan

/aitask-contribute

Turn local changes into structured contribution issues for the aitasks framework or the current project repo

/aitask-contribution-review

Analyze contribution issues for duplicates and overlaps, then import as grouped or single tasks

/aitask-fold

Identify and merge related tasks into a single task

/aitask-revert

Revert changes associated with completed tasks — fully or partially

/aitask-create

Create a new task file interactively via code agent prompts

/aitask-wrap

Wrap uncommitted changes into an aitask with retroactive documentation and traceability

/aitask-stats

View task completion statistics via a code agent

/aitask-explain

Explain files: functionality, usage examples, and code evolution traced through aitasks

/aitask-refresh-code-models

Research latest AI code agent models and update model configuration files

/aitask-add-model

Register a known code-agent model in models_.json and optionally promote it to default

/aitask-changelog

Generate a changelog entry from commits and archived plans

/aitask-review

Review code using configurable review guides, then create tasks from findings

/aitask-qa

Run QA analysis on any task — discover tests, run them, identify gaps, and create follow-up test tasks

/aitask-reviewguide-classify

Classify a review guide by assigning metadata and finding similar guides

/aitask-reviewguide-merge

Compare two similar review guides and merge, split, or keep separate

/aitask-reviewguide-import

Import external content as a review guide with proper metadata

Verified Scores

How skill satisfaction ratings accumulate into verified model scores