Code Review Intelligence

Every changeset submitted through dkod is automatically reviewed for bugs, security issues, architecture violations, and code quality — before it reaches your repository.


How It Works

dkod provides two tiers of code review that work together:

Local review runs automatically on every dk_submit. It analyzes the changeset using dkod's built-in AST analysis — checking for test gaps, unused code, naming conventions, security patterns, and architectural concerns. Results are returned inline with the submit response. No configuration needed.

Deep review runs asynchronously after submit when you configure an LLM API key. It sends the changeset to an AI model (Anthropic Claude or OpenRouter) for a thorough analysis covering logic errors, edge cases, performance implications, and design patterns. Results arrive via dk_watch events.

Review Scores

Every review produces a score from 1 to 5:

| Score | Meaning | Action |
|-------|---------|--------|
| 5 | Excellent — no findings | Merge confidently |
| 4 | Good — minor suggestions only | Review optional |
| 3 | Acceptable — warnings present | Review recommended |
| 2 | Needs work — significant issues | Fix before merging |
| 1 | Critical — blocking problems | Must fix |
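The score-to-action mapping above can be sketched as a small helper. The function name `action_for_score` is illustrative, not part of the dkod API; the thresholds and wording come directly from the table:

```python
def action_for_score(score: int) -> str:
    """Return the recommended action for a 1-5 review score (per the table)."""
    actions = {
        5: "Merge confidently",
        4: "Review optional",
        3: "Review recommended",
        2: "Fix before merging",
        1: "Must fix",
    }
    if score not in actions:
        raise ValueError(f"score must be 1-5, got {score}")
    return actions[score]
```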

Inline Review on Submit

Every dk_submit response now includes a review_summary field with the local review results:

```json
{
  "status": "accepted",
  "changeset_id": "cs_abc123",
  "review_summary": {
    "tier": "local",
    "score": 4,
    "findings_count": 1,
    "top_severity": "warning"
  }
}
```

No separate tool call needed for the summary. The full review data — with individual findings, file locations, and suggestions — is available via dk_review.
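A minimal sketch of consuming that summary, assuming the response shape shown above. The `needs_attention` check is an illustrative policy, not a dkod convention:

```python
import json

# Parse a dk_submit response matching the documented example.
submit_response = json.loads("""
{
  "status": "accepted",
  "changeset_id": "cs_abc123",
  "review_summary": {
    "tier": "local",
    "score": 4,
    "findings_count": 1,
    "top_severity": "warning"
  }
}
""")

summary = submit_response["review_summary"]
# Example policy: flag changesets that score below 3 or contain errors.
needs_attention = summary["score"] < 3 or summary["top_severity"] == "error"
```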

Using dk_review

The dk_review MCP tool returns the complete review findings for any changeset.

Parameters:

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| session_id | string | No | Required when multiple sessions are active |
| changeset_id | string | No | Defaults to the current session's changeset |

Example response:

```
## LOCAL Review — Score: 3/5

- **WARNING** `TEST_GAP` rust/src/auth.rs (70% confidence)
  No test file covers rust/src/auth.rs.
  → Add or update tests alongside source changes.

- **WARNING** `NAMING_CONVENTION` rust/src/auth.rs:42 (85% confidence)
  Function `doAuth` doesn't follow snake_case convention.
  → Rename to `do_auth`.

## DEEP Review — Score: 5/5

No findings.
```

Each finding includes:

| Field | Description |
|-------|-------------|
| Severity | `error`, `warning`, or `info` |
| Category | `TEST_GAP`, `SECURITY`, `ARCHITECTURE`, `NAMING_CONVENTION`, `DEAD_CODE`, `COMPLEXITY`, etc. |
| File and line | Where the issue was found |
| Confidence | How certain the reviewer is (percentage) |
| Message | What the issue is |
| Suggestion | How to fix it (prefixed with →) |
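The fields above can be modeled as a small data structure. The `Finding` class and its field names are a sketch mirroring the table, not a type exposed by dkod; the sample values reproduce the example response:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    severity: str      # "error", "warning", or "info"
    category: str      # e.g. "TEST_GAP", "SECURITY"
    location: str      # file and optional line, e.g. "rust/src/auth.rs:42"
    confidence: int    # reviewer certainty, as a percentage
    message: str       # what the issue is
    suggestion: str    # how to fix it

findings = [
    Finding("warning", "TEST_GAP", "rust/src/auth.rs", 70,
            "No test file covers rust/src/auth.rs.",
            "Add or update tests alongside source changes."),
    Finding("warning", "NAMING_CONVENTION", "rust/src/auth.rs:42", 85,
            "Function `doAuth` doesn't follow snake_case convention.",
            "Rename to `do_auth`."),
]

# Filter by severity, e.g. to block on errors only.
errors = [f for f in findings if f.severity == "error"]
```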

Setting Up Deep Review

Deep review requires an LLM API key. Configure it in the dkod dashboard:

  1. Go to app.dkod.io/settings
  2. Navigate to AI Code Review
  3. Add your API key for one or both providers:
    • Anthropic — uses Claude for deep analysis
    • OpenRouter — access to multiple models

Keys are encrypted at rest and never exposed via the API.

Once configured, deep review runs automatically after every dk_submit. Results arrive asynchronously.

Review Events via dk_watch

Both local and deep review completions are delivered as watch events. Subscribe with dk_watch to receive them in real time:

```json
{
  "event_type": "changeset.review.completed",
  "changeset_id": "cs_abc123",
  "data": {
    "tier": "deep",
    "score": 4,
    "findings_count": 2,
    "top_severity": "warning"
  }
}
```

Event types:

  • changeset.review.completed — a review tier finished (local or deep)

The tier field distinguishes local from deep reviews. Use this to trigger follow-up actions — for example, an orchestrator agent can wait for the deep review before approving.
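A hypothetical event handler following that pattern. The event shape matches the example above; the handler itself, its return values, and the score-3 approval threshold are illustrative assumptions, not dkod behavior:

```python
def handle_review_event(event: dict) -> str:
    """Decide a follow-up action from a changeset.review.completed event."""
    if event["event_type"] != "changeset.review.completed":
        return "ignored"
    data = event["data"]
    if data["tier"] == "deep":
        # Deep review finished: safe to decide on approval.
        if data["score"] >= 3 and data["top_severity"] != "error":
            return "approve"
        return "fix"
    # Local review only: keep waiting for the deep results.
    return "wait"

event = {
    "event_type": "changeset.review.completed",
    "changeset_id": "cs_abc123",
    "data": {"tier": "deep", "score": 4, "findings_count": 2,
             "top_severity": "warning"},
}
```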

Dismissing Findings

Not every finding requires action. Dismiss false positives or accepted risks through the API:

POST /api/repos/{name}/changesets/{number}/review/dismiss/{finding_id}

Dismissed findings are excluded from the score recalculation and won't reappear on subsequent reviews of the same changeset.
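Building the dismiss URL from the endpoint above can be sketched as follows. The base host is an assumption (the API host is not stated here), and the parameter values are placeholders; no request is sent:

```python
def dismiss_url(repo: str, number: int, finding_id: str,
                base: str = "https://app.dkod.io") -> str:
    """Build the dismiss endpoint URL for one finding (base host assumed)."""
    return f"{base}/api/repos/{repo}/changesets/{number}/review/dismiss/{finding_id}"
```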

Review in the Harness

The dkod harness integrates code review into the orchestrator's pipeline. Between dk_verify and dk_approve, the orchestrator calls dk_review to check the changeset quality:

  • Score ≥ 3 and no error findings — the orchestrator proceeds to dk_approve
  • Score < 3 or error findings present — the orchestrator re-dispatches the generator agent with the review findings, giving it specific instructions on what to fix

The evaluator agent also checks dk_review findings alongside dk_verify results, ensuring both structural correctness (tests pass, types check) and code quality (no security issues, proper patterns) before a changeset is approved.

This creates a fix loop: generate → submit → verify → review → fix → resubmit, until both verification and review pass.
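The fix loop's control flow can be sketched with stub steps standing in for the real dk_* tools. Only the loop structure is illustrative; the score-3 threshold follows the harness rule above, and the error-findings check is folded into the review stub for brevity:

```python
def fix_loop(generate, submit, verify, review, max_rounds: int = 3) -> dict:
    """Repeat generate/submit until verification and review both pass."""
    for round_num in range(1, max_rounds + 1):
        changeset = submit(generate())
        if verify(changeset) and review(changeset)["score"] >= 3:
            return {"approved": True, "rounds": round_num}
    return {"approved": False, "rounds": max_rounds}

# Example run with stubs: verification passes, review improves on round 2.
scores = iter([2, 4])
result = fix_loop(
    generate=lambda: "patch",
    submit=lambda patch: {"changeset_id": "cs_abc123"},
    verify=lambda cs: True,
    review=lambda cs: {"score": next(scores)},
)
```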

Insights Dashboard

The Insights page at app.dkod.io includes a Code Reviews tab with:

  • Total reviews — count of local and deep reviews
  • Average score — trending score over time
  • Coverage — percentage of changesets reviewed
  • Findings — total count with dismissed breakdown
  • Reviews over time — time-series chart
  • Average score over time — trend line
  • Findings by severity over time — stacked area chart (error, warning, info)
  • Top finding categories — ranked bar chart (Architecture, Security, Convention, etc.)

Review metrics are also available via the Insights API:

GET /api/insights

The response includes a review field with summary stats, time series data, and top finding categories.

Next Steps