Link to Report: https://dora.dev/research/2025/dora-report/#download-the-2025-dora-report
AI as Development Partner
- AI models read repositories, reason over architecture, propose code changes, and validate outputs.
- Integration with CI/CD and static analysis tools enables end-to-end automation.
- Practitioners should focus on orchestration: linking the retrieval, validation, and governance layers (a minimal sketch follows below).
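A minimal sketch of what that orchestration could look like; every interface here (`retriever`, `model`, `validators`, `governance`) is a hypothetical placeholder, not a specific library:

```js
// Hypothetical orchestration pipeline: retrieve context, generate a diff,
// validate it, then apply governance policy before handing it to a human.
async function proposeChange(task, { retriever, model, validators, governance }) {
  // Retrieval layer: ground the request in repository context.
  const context = await retriever.search(task.description, { topK: 8 });

  // Generation: ask the model for a candidate change.
  const diff = await model.generateDiff({ task, context });

  // Validation layer: tests, linters, static analysis.
  for (const validate of validators) {
    const result = await validate(diff);
    if (!result.ok) return { status: 'rejected', reason: result.reason };
  }

  // Governance layer: policy checks (secrets, licences, unsafe commands).
  const policy = await governance.review(diff);
  if (!policy.approved) return { status: 'blocked', reason: policy.reason };

  return { status: 'ready-for-human-review', diff };
}
```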
Core Architecture Stack
| Layer | Purpose | Tips |
|---|---|---|
| Retrieval-Augmented Generation (RAG) | Ground AI responses in repository context | Index embeddings for code, docs, and tests |
| Toolformer Integration (language models teach themselves to use tools) | Allow models to invoke build, test, or lint tools | Define clear tool APIs and enforce structured outputs |
| Guardrails | Enforce safety, compliance, and output quality | Block secrets and unsafe commands automatically |
| Feedback Loop | Learn from developer edits | Collect feedback directly in the IDE or review flow |
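A rough sketch of how the tool-integration and guardrail layers could fit together. The tool registry, output schema, and block patterns are illustrative assumptions, not a prescribed API:

```js
// Hypothetical tool registry: each tool declares a structured output contract.
const tools = {
  runTests: {
    description: 'Run the project test suite',
    outputSchema: { passed: 'number', failed: 'number', log: 'string' },
    execute: async () => ({ passed: 0, failed: 0, log: '' }), // placeholder implementation
  },
};

// Guardrail: block obvious secrets and unsafe shell commands before any tool runs.
const SECRET_PATTERN = /AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----/;
const UNSAFE_COMMANDS = [/rm\s+-rf\s+\//, /curl .*\|\s*sh/];

function guard(toolName, args) {
  const serialized = JSON.stringify(args);
  if (SECRET_PATTERN.test(serialized)) {
    throw new Error(`Blocked ${toolName}: arguments appear to contain a secret`);
  }
  if (UNSAFE_COMMANDS.some((p) => p.test(serialized))) {
    throw new Error(`Blocked ${toolName}: unsafe shell command detected`);
  }
}

async function invokeTool(toolName, args = {}) {
  guard(toolName, args);                              // guardrail layer
  const tool = tools[toolName];
  if (!tool) throw new Error(`Unknown tool: ${toolName}`);
  return tool.execute(args);                          // tool-integration layer
}
```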
Measuring Real Impact
Stop chasing accuracy scores. Evaluate:
- Context relevance: how well the code fits the repository's existing style
- Semantic fidelity: how closely the logic matches the intended behaviour
- Maintenance impact: whether AI output reduces long-term complexity
Combine tests, static analysis, and labeled feedback for continuous benchmarking.
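One way to fold those signals into a single benchmark score; the weights and input fields below are illustrative assumptions, not a published metric:

```js
// Hypothetical composite score built from tests, static analysis, and labelled feedback.
function scoreSuggestion({ testsPassed, totalTests, lintIssues, reviewerRating }) {
  const semanticFidelity = totalTests > 0 ? testsPassed / totalTests : 0; // behaviour vs. intent
  const contextRelevance = Math.max(0, 1 - lintIssues / 10);              // static-analysis proxy for repo fit
  const maintenanceImpact = reviewerRating / 5;                           // labelled human feedback, 1-5 scale

  return 0.4 * semanticFidelity + 0.3 * contextRelevance + 0.3 * maintenanceImpact;
}

// Example: 18/20 tests pass, 2 lint findings, reviewers rate maintainability 4/5 -> ~0.84.
scoreSuggestion({ testsPassed: 18, totalTests: 20, lintIssues: 2, reviewerRating: 4 });
```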
Governance Is Non-Optional
Treat models like any other part of the software supply chain:
- Maintain model cards with provenance and known issues.
- Run red-team tests for data leaks and injection.
- Enforce sandboxed execution for generated code.
- Require human review for AI-authored diffs.
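A sketch of what the first and last items might look like in code; the model-card fields and merge-gate flags are assumptions, not a standard schema:

```js
// Hypothetical model card tracked like any other supply-chain artifact.
const modelCard = {
  name: 'internal-code-assistant',
  version: '2025.03',
  provenance: {
    baseModel: 'vendor-model-x',          // placeholder identifier
    fineTunedOn: ['internal-repos'],
    licence: 'proprietary',
  },
  knownIssues: [
    'Occasionally suggests deprecated internal APIs',
    'Weak on concurrency-heavy modules',
  ],
  redTeam: { lastRun: '2025-09-01', findings: 0 },
};

// Hypothetical merge gate: AI-authored diffs need a sandbox run and a human approval.
function canMerge(diff) {
  if (diff.aiAuthored && !diff.sandboxRunPassed) return false;
  if (diff.aiAuthored && diff.humanApprovals < 1) return false;
  return true;
}
```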
Team Maturity Levels
| Level | Description |
|---|---|
| Copilot | Manual AI suggestions |
| Autonomous | Context-aware code proposals |
| Collaborative | Multi-agent workflows (plan, code, test) |
Track efficiency via Mean Time to Correct Suggestion (MTCS): how quickly the AI improves a rejected suggestion in response to developer feedback.
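A toy MTCS calculation, assuming a timestamp-ordered event log of rejected and accepted suggestions (the event names and shape are hypothetical, not a standard telemetry format):

```js
// Mean time (ms) from a rejected AI suggestion to the next accepted one.
// Assumes `events` is sorted by timestamp; event types are hypothetical.
function meanTimeToCorrectSuggestion(events) {
  const durations = [];
  let rejectedAt = null;
  for (const e of events) {
    if (e.type === 'suggestion_rejected' && rejectedAt === null) rejectedAt = e.timestamp;
    if (e.type === 'suggestion_accepted' && rejectedAt !== null) {
      durations.push(e.timestamp - rejectedAt);
      rejectedAt = null;
    }
  }
  if (durations.length === 0) return null;
  return durations.reduce((a, b) => a + b, 0) / durations.length;
}
```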
Example
This is an example rule for a claude.md file:
- **Exceptions**: Never throw exceptions that are caught locally; use control flow instead.
Bad example:
```js
function getUser(id) {
  try {
    if (!id) throw new Error('Missing ID');
    return db.findUser(id);
  } catch (e) {
    return null;
  }
}
```
The exception is thrown and caught in the same function, which adds needless overhead and obscures the control flow.
Good example:
```js
function getUser(id) {
  if (!id) return null;
  return db.findUser(id);
}
```
This simplifies the logic, avoids unnecessary try/catch overhead, and makes the failure path explicit.
When try/catch is correct
```js
// External system error
try {
  const data = fs.readFileSync('config.json');
  return JSON.parse(data);
} catch (err) {
  log.error('Failed to read config:', err);
  return defaultConfig;
}

// API call failure
try {
  const result = await externalApi.fetch();
  return result.data;
} catch (err) {
  retryRequest(err);
}
```
Here try/catch handles I/O or network errors, not logic flow.