Key Takeaway: AI-Assisted Software Development Report 2025

The 2025 State of AI-Assisted Software Development Report highlights how large models now operate across the entire engineering lifecycle: design, implementation, testing, and maintenance.

Link to Report: https://dora.dev/research/2025/dora-report/#download-the-2025-dora-report

AI as Development Partner

  • AI models read repositories, reason over architecture, propose code changes, and validate outputs.
  • Integration with CI/CD and static analysis tools enables end-to-end automation.
  • Practitioners should focus on orchestration: linking retrieval, validation, and governance layers.
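
The orchestration idea above can be sketched as a small pipeline that chains the layers as composable async steps. This is a minimal illustration, not an API from the report; every function name here is a placeholder.

```javascript
// Hypothetical orchestration sketch: each layer (retrieve, generate,
// validate, govern) is an async step that enriches a shared context.
async function orchestrate(task, steps) {
  let context = { task };
  for (const step of steps) {
    context = await step(context); // each layer adds to the context
  }
  return context;
}

// Illustrative placeholder steps:
const retrieve = async (ctx) => ({ ...ctx, snippets: ['/* repo code */'] });
const validate = async (ctx) => ({ ...ctx, validated: true });
```

A real pipeline would slot RAG retrieval, tool invocation, and guardrail checks into the `steps` array.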

Core Architecture Stack

| Layer | Purpose | Tips |
|---|---|---|
| Retrieval-Augmented Generation (RAG) | Ground AI responses in repository context | Index embeddings for code, docs, and tests. |
| Toolformer Integration ("Language Models Can Teach Themselves to Use Tools") | Allow models to invoke build, test, or lint tools | Define clear tool APIs and enforce structured outputs. |
| Guardrails | Enforce safety, compliance, and output quality | Block secrets and unsafe commands automatically. |
| Feedback Loop | Learn from developer edits | Collect feedback directly in the IDE or review flow. |
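
As a concrete illustration of the guardrails layer, here is a minimal sketch of an output check that blocks obvious secrets and unsafe shell commands. The patterns are illustrative examples, not a complete or prescribed rule set.

```javascript
// Hypothetical guardrail: scan AI output for secret-like strings and
// dangerous commands before it reaches the repository or a shell.
const SECRET_PATTERNS = [
  /AKIA[0-9A-Z]{16}/,                                  // AWS access key id shape
  /-----BEGIN (RSA )?PRIVATE KEY-----/,                // PEM private key header
  /(api[_-]?key|token)\s*[:=]\s*['"][^'"]{16,}['"]/i,  // inline credential
];

const UNSAFE_COMMANDS = [
  /rm\s+-rf\s+\//,      // recursive delete from root
  /curl[^|]*\|\s*sh/,   // pipe-to-shell install
];

function checkOutput(text) {
  const violations = [];
  for (const p of SECRET_PATTERNS) {
    if (p.test(text)) violations.push(`secret: ${p}`);
  }
  for (const p of UNSAFE_COMMANDS) {
    if (p.test(text)) violations.push(`unsafe command: ${p}`);
  }
  return { allowed: violations.length === 0, violations };
}
```

A production guardrail would combine such patterns with entropy-based secret scanning and policy checks, but the shape (scan, collect violations, allow/deny) stays the same.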

Measuring Real Impact

Stop chasing accuracy scores. Evaluate:

  • Context relevance: How code fits the repo style
  • Semantic fidelity: How logic aligns with intended behaviour
  • Maintenance impact: How AI output reduces long-term complexity

Combine tests, static analysis, and labeled feedback for continuous benchmarking.
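
One way to combine those signals is a single weighted benchmark score. The weights, field names, and rating scale below are assumptions for illustration, not values from the report.

```javascript
// Illustrative scoring sketch: blend test results, static-analysis
// findings, and labeled reviewer feedback into one benchmark number.
function benchmarkScore({ testsPassed, testsTotal, lintIssues, reviewerRating }) {
  const testScore = testsTotal > 0 ? testsPassed / testsTotal : 0;
  const lintScore = 1 / (1 + lintIssues);   // fewer findings -> higher score
  const feedbackScore = reviewerRating / 5; // assumes a 1-5 rating scale
  return 0.5 * testScore + 0.2 * lintScore + 0.3 * feedbackScore;
}
```

Tracking this score over time per model or prompt configuration gives the continuous benchmark the section describes.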


Governance Is Non-Optional

Treat models like any other part of the software supply chain:

  • Maintain model cards with provenance and known issues.
  • Run red-team tests for data leaks and injection.
  • Enforce sandboxed execution for generated code.
  • Require human review for AI-authored diffs.
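
A model card for this kind of supply-chain tracking could look like the following sketch. All field names and values are hypothetical; adapt them to your organization's inventory format.

```json
{
  "model": "example-code-assistant-v1",
  "provenance": {
    "base_model": "vendor/base-llm",
    "fine_tuned_on": "internal PR dataset",
    "last_evaluated": "2025-09-01"
  },
  "known_issues": [
    "May suggest deprecated APIs for frameworks updated after the training cutoff",
    "Susceptible to prompt injection via repository comments"
  ],
  "controls": {
    "sandboxed_execution": true,
    "human_review_required": true
  }
}
```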

Team Maturity Levels

| Level | Description |
|---|---|
| Copilot | Manual AI suggestions |
| Autonomous | Context-aware code proposals |
| Collaborative | Multi-agent workflows (plan, code, test) |

Track efficiency via Mean Time to Correct Suggestion (MTCS), which measures how quickly the AI improves from feedback.
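
MTCS can be computed directly from feedback events. The event shape below (millisecond timestamps for when a suggestion was made and when it was corrected or accepted) is an assumption for illustration.

```javascript
// Hypothetical MTCS calculation: average elapsed time between an AI
// suggestion and the feedback event that resolves it. Events without
// a correctedAt timestamp are still pending and are excluded.
function meanTimeToCorrectSuggestion(events) {
  const corrected = events.filter((e) => e.correctedAt != null);
  if (corrected.length === 0) return null;
  const totalMs = corrected.reduce(
    (sum, e) => sum + (e.correctedAt - e.suggestedAt),
    0
  );
  return totalMs / corrected.length;
}
```

A falling MTCS over successive releases indicates the feedback loop is actually improving suggestion quality.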


Example

This is an example of a claude.md rule:

- **Exceptions**: Never throw exceptions that are caught locally—use control flow instead

Bad example:

```javascript
function getUser(id) {
  try {
    if (!id) throw new Error('Missing ID');
    return db.findUser(id);
  } catch (e) {
    return null;
  }
}
```

The exception is thrown and caught in the same function, wasting performance and obscuring control flow.

Good example:

```javascript
function getUser(id) {
  if (!id) return null;
  return db.findUser(id);
}
```

Simplifies logic, avoids unnecessary try/catch overhead, and makes failure paths explicit.

When try/catch is correct

```javascript
// External system error
try {
  const data = fs.readFileSync('config.json');
  return JSON.parse(data);
} catch (err) {
  log.error('Failed to read config:', err);
  return defaultConfig;
}

// API call failure
try {
  const result = await externalApi.fetch();
  return result.data;
} catch (err) {
  retryRequest(err);
}
```

Here try/catch handles I/O or network errors, not logic flow.
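
The `retryRequest` helper in the API example above is assumed, not defined in this document. One plausible shape is retry with exponential backoff, sketched here as an illustration:

```javascript
// Hypothetical retry helper: retry an async call with exponential
// backoff, rethrowing the error once all attempts are exhausted.
async function withRetry(fn, { attempts = 3, baseDelayMs = 100 } = {}) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts - 1) throw err;  // exhausted: surface the error
      const delay = baseDelayMs * 2 ** i; // 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Note the same rule applies here: the try/catch responds to genuine I/O or network failures, and the final rethrow keeps the failure path explicit for the caller.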