Our AI-Augmented Development Methodology
Spec-Driven Design, Test-Driven Design, Human-in-the-Loop
Eastgate Software - German Engineering Standards. Enterprise-Grade Results.
Most teams prompt AI and hope for the best. We use a structured methodology where specifications become executable artifacts that AI agents build from - driving implementation, testing, and documentation from a single source of truth. Three pillars: Spec-Driven Design encodes requirements before code. Test-Driven Design validates with concrete examples. Human-in-the-Loop ensures AI augments judgment without replacing it.
Introduction
Why Do Most AI-Assisted Teams Still Struggle with Quality?
AI coding assistants changed how fast we write code. They did not change how hard it is to write the right code. The bottleneck was never typing speed - it was the gap between what the business needs and what the developer understands.
When requirements live in chat history, AI produces unpredictable results. No audit trail, no shared understanding, no way to verify intent. Teams iterate faster but rework more - because the input was never precise enough.
Our methodology introduces a lightweight specification layer between intent and implementation. Requirements are encoded as structured, behavior-first specifications with concrete examples before any code is written. The methodology draws on Specification by Example (Gojko Adzic, Manning 2011), the open-source OpenSpec framework (Fission AI, 33K+ stars), and our experience delivering systems for Siemens Mobility, FinTech platforms, and enterprise SaaS.
Part I
What Are the Three Pillars?
Not sequential phases - they operate in parallel across the entire lifecycle.
Spec-Driven Design (SDD)
Requirements encoded as structured, behavior-first specifications before code is written. AI builds from specs - not ad-hoc prompts.
Test-Driven Design (TDD)
Every requirement is illustrated with concrete examples that define 'done' before implementation starts. Tests generated from specs, validated continuously.
Human-in-the-Loop (HITL)
AI accelerates every phase, but humans own decisions. Specs are collaboratively written. AI proposes; humans approve at every gate.
The common thread: SDD encodes intent. TDD validates it with examples. HITL ensures humans own decisions. Together, they close the gap between requirements and shipped software.
Part II
How Does the Spec-Driven Workflow Operate?
A propose-apply-archive pattern adapted from OpenSpec. Each change is an isolated workspace. Specs describe observable behavior using GIVEN/WHEN/THEN scenarios - not implementation details.
| Step | Name | What Happens | Artifact |
|---|---|---|---|
| 1 | Propose | Define intent, scope, and approach. AI generates a structured proposal. | proposal.md |
| 2 | Specify | Encode requirements as GIVEN/WHEN/THEN scenarios. What the system should do, not how. | specs/ (delta) |
| 3 | Design | AI generates technical strategy from approved specs. Architecture decisions documented. | design.md |
| 4 | Decompose | Break design into ordered task list. Each task sized for a single focused session. | tasks.md |
| 5 | Implement | AI agents build from task list, checking off items. Specs drive generation. | Code + tests |
| 6 | Verify | Validate against specs: completeness, correctness, coherence. | Verification report |
| 7 | Archive | Merge delta specs into main tree. Living documentation updated automatically. | Updated specs/ |
Fluid, not rigid: No enforced phase gates. Fast-forward through all artifacts for clear requirements. Step through one at a time for exploratory work. Changes expressed as deltas (ADDED/MODIFIED/REMOVED) - not full rewrites. On archive, deltas merge into the main spec tree automatically.
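As an illustration of step 2's artifact, a delta spec might look like the sketch below. The headings follow OpenSpec's delta convention (ADDED/MODIFIED/REMOVED sections with requirement and scenario blocks); the feature and its scenarios are hypothetical:

```markdown
## ADDED Requirements

### Requirement: Cart discount
The system SHALL apply a 10% discount to carts totaling more than EUR 100.

#### Scenario: Discount above threshold
- GIVEN a cart totaling EUR 200
- WHEN the customer proceeds to checkout
- THEN the order total is EUR 180

#### Scenario: No discount at or below threshold
- GIVEN a cart totaling EUR 80
- WHEN the customer proceeds to checkout
- THEN the order total is unchanged
```

On archive (step 7), a delta like this merges into the main spec tree, so the living documentation always reflects shipped behavior.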
Part III
How Does Spec-Driven Compare to Prompt-Driven Development?
| Dimension | Prompt-Driven | Spec-Driven |
|---|---|---|
| Input to AI | Chat message | Structured requirements with scenarios |
| Repeatability | Different output each time | Same spec, consistent results |
| Auditability | Lost in chat history | Version-controlled alongside code |
| Collaboration | One person's interpretation | Team-reviewed, stakeholder-approved |
| Testing | Tests written after code | Tests derived from spec before code |
| Documentation | Written manually after the fact | Generated automatically from specs |
| Change mgmt | Re-explain full context every time | Delta specs show only what changes |
The shift: Prompt-driven treats AI as a conversational partner. Spec-driven treats AI as a builder reading blueprints. The blueprint can be reviewed, versioned, shared, and executed - the conversation cannot.
Part IV
What Separates Teams That Ship Predictably from Those That Rework?
| Anti-pattern | Best practice |
|---|---|
| Prompt AI and hope for the best | Encode requirements as structured specs before code |
| Requirements in Jira tickets and Slack | Specs in version control alongside code |
| Tests written after implementation | Acceptance criteria defined before code |
| Business users review after delivery | Stakeholders collaborate on specs before implementation |
| Documentation written manually after release | Living documentation from executable specs |
| Full spec ceremony for every change | Right-size: full SDD for features; lightweight for fixes |
Part V
What Does the Practical Toolstack Look Like?
The methodology is tool-agnostic. Here is the stack we use in practice across six lifecycle phases.
| Phase | Tool | Role |
|---|---|---|
| Specification | OpenSpec | SDD framework for AI coding assistants |
| | Kiro | AWS spec-first IDE with Claude |
| | Claude Code | Terminal-native agentic coding |
| Development | Claude Code | Agentic multi-file coding |
| | Cursor | IDE with AI editing and composer |
| | GitHub Copilot | Inline completion and chat |
| Testing | Playwright | E2E browser automation |
| | Vitest / Jest | Unit and integration tests |
| | SonarQube | Static analysis and quality gates |
| Code Review | CodeRabbit | AI-powered contextual PR review |
| | Copilot Review | AI analysis in GitHub PRs |
| | ESLint / Prettier | Style and formatting enforcement |
| CI/CD | GitHub Actions | Pipeline orchestration |
| | ArgoCD | GitOps continuous delivery |
| | Docker / K8s | Container orchestration |
| Monitoring | Datadog | Full-stack APM |
| | Sentry | Error tracking |
| | Grafana | Dashboards and alerting |
Tool-agnostic by design: OpenSpec supports 20+ AI coding tools. The value is in the structured specification process - not in any single tool.
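As a sketch of how the testing and review gates above can be wired into CI, here is an illustrative GitHub Actions workflow. The job, step names, and npm scripts are assumptions for a typical Node.js project, not a prescribed configuration:

```yaml
# Illustrative CI pipeline: every pull request must pass linting,
# unit/integration tests, and spec-derived E2E scenarios before merge.
name: verify
on: [pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint        # ESLint / Prettier enforcement
      - run: npm test            # Vitest unit and integration suites
      - run: npx playwright test # E2E scenarios derived from specs
```

Because specs live in the same repository, the same pipeline that gates code also gates specification changes.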
FAQ
Common Questions
How is this different from traditional Agile?
Agile ceremonies still apply. The difference is that AI has a precise, auditable input (structured specs with GIVEN/WHEN/THEN criteria) instead of ambiguous user stories. Teams collaborate on specifications before implementation, reducing rework caused by misunderstanding.
Does this slow down development?
Upfront, writing specs takes more time than jumping into code. But the net effect is faster delivery because AI-generated code matches intent on the first pass. For small bug fixes, we skip the full spec process - the overhead only pays off for features and greenfield projects.
What is OpenSpec?
OpenSpec is an open-source SDD framework (MIT license, 33,000+ GitHub stars) for AI coding assistants. It is tool-agnostic (works with Claude Code, Cursor, Copilot, and 20+ others), brownfield-first, and iterative. Its propose-apply-archive workflow aligns with how we structure client engagements.
How does human-in-the-loop work in practice?
Every critical transition has an approval gate. A developer and stakeholder collaborate on a proposal. AI generates the spec draft. The team reviews. AI implements from the approved spec. A human verifies. Nothing ships without sign-off. Human corrections are captured in spec history, so AI improves over time.
Can this work with our existing workflow?
Yes. It layers on top of your existing Git workflow, CI/CD, and project management. Specs are stored alongside code. Most teams adopt incrementally: start with one feature, measure results, then expand.
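For example, an OpenSpec-style repository keeps change workspaces and the main spec tree next to the application code. The layout below follows OpenSpec's directory convention; the change ID is a hypothetical placeholder:

```
repo/
├── src/                        # application code
├── openspec/
│   ├── specs/                  # main spec tree (living documentation)
│   └── changes/
│       └── add-cart-discount/  # one isolated change workspace
│           ├── proposal.md
│           ├── design.md
│           ├── tasks.md
│           └── specs/          # delta specs (ADDED/MODIFIED/REMOVED)
└── .github/workflows/          # CI pipelines
```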
Read the Full White Paper
Detailed framework, implementation methodology, and actionable insights - available instantly with your business email.
About Eastgate Software
Eastgate Software is a strategic engineering partner headquartered in Hanoi, Vietnam, with offices in Aachen, Germany and Tokyo, Japan. With 200+ engineers, 93% team retention, and 12+ years of delivery excellence, we build mission-critical systems for clients including Siemens Mobility, Yunex Traffic, and Autobahn.
Our AI-augmented delivery methodology combines German engineering discipline with Vietnamese engineering talent to deliver enterprise-grade results across Intelligent Transportation, FinTech, Retail, and Manufacturing.
Contact: [email protected] | (+84) 246.276.3566 | eastgate-software.com
Ready to Transform Your Engineering Process?
See how our AI-augmented methodology can accelerate your delivery. Start with a 2-week pilot on a real project.
200+ Engineers - AI-augmented delivery
93% Retention - Partners, not vendors
12+ Years - Enterprise delivery