Code Review Intelligence
Every changeset submitted through dkod is automatically reviewed for bugs, security issues, architecture violations, and code quality — before it reaches your repository.
How It Works
dkod provides two tiers of code review that work together:
Local review runs automatically on every dk_submit. It analyzes the changeset using dkod's built-in AST analysis — checking for test gaps, unused code, naming conventions, security patterns, and architectural concerns. Results are returned inline with the submit response. No configuration needed.
Deep review runs asynchronously after submit when you configure an LLM API key. It sends the changeset to an AI model (Anthropic Claude or OpenRouter) for a thorough analysis covering logic errors, edge cases, performance implications, and design patterns. Results arrive via dk_watch events.
Review Scores
Every review produces a score from 1 to 5:
| Score | Meaning | Action |
|---|---|---|
| 5 | Excellent — no findings | Merge confidently |
| 4 | Good — minor suggestions only | Review optional |
| 3 | Acceptable — warnings present | Review recommended |
| 2 | Needs work — significant issues | Fix before merging |
| 1 | Critical — blocking problems | Must fix |
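In an automated pipeline, the table above reduces to a small gate. A minimal sketch in Python (`review_gate` is a hypothetical helper, not part of dkod):

```python
def review_gate(score: int, has_error_findings: bool = False) -> str:
    """Map a dkod review score (1-5) to a suggested action."""
    if has_error_findings or score <= 2:
        return "fix"      # 1-2: must fix before merging
    if score == 3:
        return "review"   # warnings present: human review recommended
    return "merge"        # 4-5: merge confidently
```

For example, `review_gate(3)` returns `"review"`, and a score of 4 with an error finding still returns `"fix"`.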
Inline Review on Submit
Every dk_submit response now includes a review_summary field with the local review results:
```json
{
  "status": "accepted",
  "changeset_id": "cs_abc123",
  "review_summary": {
    "tier": "local",
    "score": 4,
    "findings_count": 1,
    "top_severity": "warning"
  }
}
```
No separate tool call is needed for the summary. The full review data — with individual findings, file locations, and suggestions — is available via dk_review.
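An agent consuming the submit response can gate on the inline summary without a second tool call. A sketch in Python (the "needs attention" threshold is an assumption for illustration, not a dkod rule):

```python
import json

# Parse a dk_submit response like the one above and decide whether the
# changeset needs attention before merging.
response = json.loads("""{
  "status": "accepted",
  "changeset_id": "cs_abc123",
  "review_summary": {"tier": "local", "score": 4,
                     "findings_count": 1, "top_severity": "warning"}
}""")

summary = response["review_summary"]
needs_attention = summary["score"] < 3 or summary["top_severity"] == "error"
```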
Using dk_review
The dk_review MCP tool returns the complete review findings for any changeset.
Parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| session_id | string | No | Required when multiple sessions are active |
| changeset_id | string | No | Defaults to the current session's changeset |
Example response:
```
## LOCAL Review — Score: 3/5
- **WARNING** `TEST_GAP` rust/src/auth.rs (70% confidence)
  No test file covers rust/src/auth.rs.
  → Add or update tests alongside source changes.
- **WARNING** `NAMING_CONVENTION` rust/src/auth.rs:42 (85% confidence)
  Function `doAuth` doesn't follow snake_case convention.
  → Rename to `do_auth`.
## DEEP Review — Score: 5/5
No findings.
```
Each finding includes:
| Field | Description |
|---|---|
| Severity | error, warning, or info |
| Category | TEST_GAP, SECURITY, ARCHITECTURE, NAMING_CONVENTION, DEAD_CODE, COMPLEXITY, etc. |
| File and line | Where the issue was found |
| Confidence | How certain the reviewer is (percentage) |
| Message | What the issue is |
| Suggestion | How to fix it (prefixed with →) |
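A client-side representation of these fields might look like the following sketch (the `Finding` class and `blocking` helper are illustrative, not part of the dkod SDK):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One review finding, mirroring the fields in the table above."""
    severity: str     # "error", "warning", or "info"
    category: str     # e.g. "TEST_GAP", "SECURITY", "NAMING_CONVENTION"
    location: str     # file path, optionally with a :line suffix
    confidence: int   # reviewer certainty, as a percentage
    message: str
    suggestion: str

def blocking(findings: list[Finding]) -> list[Finding]:
    """Keep only the findings that should block a merge."""
    return [f for f in findings if f.severity == "error"]
```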
Setting Up Deep Review
Deep review requires an LLM API key. Configure it in the dkod dashboard:
- Go to app.dkod.io/settings
- Navigate to AI Code Review
- Add your API key for one or both providers:
  - Anthropic — uses Claude for deep analysis
  - OpenRouter — access to multiple models
Keys are encrypted at rest and never exposed via the API.
Once configured, deep review runs automatically after every dk_submit. Results arrive asynchronously.
Review Events via dk_watch
Both local and deep review completions are delivered as watch events. Subscribe with dk_watch to receive them in real time:
```json
{
  "event_type": "changeset.review.completed",
  "changeset_id": "cs_abc123",
  "data": {
    "tier": "deep",
    "score": 4,
    "findings_count": 2,
    "top_severity": "warning"
  }
}
```
Event types:
- changeset.review.completed — a review tier finished (local or deep)
The tier field distinguishes local from deep reviews. Use this to trigger follow-up actions — for example, an orchestrator agent can wait for the deep review before approving.
Dismissing Findings
Not every finding requires action. Dismiss false positives or accepted risks through the API:
```
POST /api/repos/{name}/changesets/{number}/review/dismiss/{finding_id}
```
Dismissed findings are excluded from the score recalculation and won't reappear on subsequent reviews of the same changeset.
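Filling in the path parameters is mechanical; a small helper sketch (the repo name and finding ID in the usage note are made up):

```python
def dismiss_path(repo: str, changeset_number: int, finding_id: str) -> str:
    """Build the dismissal endpoint path; POST it with your usual dkod auth."""
    return (f"/api/repos/{repo}/changesets/{changeset_number}"
            f"/review/dismiss/{finding_id}")
```

For example, `dismiss_path("myrepo", 42, "f_123")` yields `/api/repos/myrepo/changesets/42/review/dismiss/f_123`.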
Review in the Harness
The dkod harness integrates code review into the orchestrator's pipeline. Between dk_verify and dk_approve, the orchestrator calls dk_review to check the changeset quality:
- Score ≥ 3 and no error findings — the orchestrator proceeds to dk_approve
- Score < 3 or error findings present — the orchestrator re-dispatches the generator agent with the review findings, giving it specific instructions on what to fix
The evaluator agent also checks dk_review findings alongside dk_verify results, ensuring both structural correctness (tests pass, types check) and code quality (no security issues, proper patterns) before a changeset is approved.
This creates a fix loop: generate → submit → verify → review → fix → resubmit, until both verification and review pass.
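That loop can be sketched as plain control flow (all names below are stand-ins: `verify`, `review`, and `fix` are callables standing in for dk_verify, dk_review, and a generator re-dispatch; the real harness drives this through MCP tool calls):

```python
def fix_loop(changeset, verify, review, fix, max_rounds: int = 3) -> bool:
    """Schematic of the harness fix loop, not the actual implementation."""
    for _ in range(max_rounds):
        if not verify(changeset):                    # structural gate
            changeset = fix(changeset, "verification")
            continue
        result = review(changeset)                   # quality gate
        if result["score"] >= 3 and not result.get("errors"):
            return True                              # ready for dk_approve
        changeset = fix(changeset, "review")
    return False                                     # give up after max_rounds
```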
Insights Dashboard
The Insights page at app.dkod.io includes a Code Reviews tab with:
- Total reviews — count of local and deep reviews
- Average score — trending score over time
- Coverage — percentage of changesets reviewed
- Findings — total count with dismissed breakdown
- Reviews over time — time-series chart
- Average score over time — trend line
- Findings by severity over time — stacked area chart (error, warning, info)
- Top finding categories — ranked bar chart (Architecture, Security, Convention, etc.)
Review metrics are also available via the Insights API:
```
GET /api/insights
```
The response includes a review field with summary stats, time series data, and top finding categories.
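A consumer might pull the summary stats out of that field like the sketch below. Note the key names inside review are assumptions for illustration; only the metrics listed in this section are documented:

```python
# Hypothetical shape of the "review" field in the GET /api/insights response.
insights = {
    "review": {
        "total_reviews": 128,
        "average_score": 4.1,
        "coverage_pct": 92,
    }
}

review = insights["review"]
# An example health check an agent or dashboard script might apply.
healthy = review["average_score"] >= 3.5 and review["coverage_pct"] >= 80
```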
Next Steps
- Verification Pipeline — the quality gates that run before review
- Multi-Agent Workflows — how review fits into orchestrated pipelines
- Agent Skill — install the skill that enables parallel execution with review
- SDK Reference — full MCP tool reference including dk_review