SW Development Methodology & Vibe Coding: From Traditional Methods to AI-Assisted Development


Software development methodologies have evolved over decades. From Waterfall in the 1970s to the Agile Manifesto in 2001, and now to AI-assisted development (Vibe Coding) post-2025 — each era has brought continuous innovation in building better software. This guide explores the core principles of traditional methodologies and practical strategies for integrating AI across the entire development lifecycle.


1. Traditional SW Development Methodologies

1.1 Waterfall

Waterfall is a sequential development methodology proposed by Winston Royce in 1970. Each phase must be completed before the next begins.

Phase Flow:

Requirements → System Design → Implementation → Integration/Testing → Operations/Maintenance

Advantages:

  • Clear phases and deliverables
  • Thorough documentation enables easier maintenance
  • Suitable for fixed scope and budget management
  • Still used in regulated industries (medical, aerospace)

Disadvantages:

  • Very rigid when requirements change
  • Testing concentrated at the end means defects discovered late
  • Customer feedback incorporated too late
  • High rate of failed or "challenged" large projects (roughly 70% combined, per the Standish CHAOS Report)

When Waterfall is appropriate:

  • Short-term projects with fully confirmed requirements
  • Domains with strong regulatory/compliance requirements
  • Fixed-scope contract-based outsourcing projects

1.2 V-Model

The V-Model is a variation of Waterfall that explicitly maps a corresponding test phase to each development phase.

Requirements Analysis ─────────── Acceptance Testing
  System Design ─────────────── System Testing
    Architecture Design ───────── Integration Testing
      Detailed Design ───────────── Unit Testing
              Implementation (Coding)

The core of V-Model is "Verification and Validation." The left descending side represents development activities; the right ascending side represents testing activities. Widely used in embedded systems and medical device software development.

1.3 Spiral Model

The Spiral model proposed by Barry Boehm in 1986 is an iterative methodology centered on risk analysis.

4 Quadrant Iteration:

  1. Identify objectives, alternatives, and constraints
  2. Evaluate and resolve risks
  3. Develop and verify
  4. Plan the next phase

Each spiral produces more concrete deliverables than the previous. Suitable for high-risk large government/defense projects, but complex and costly.


2. Agile Methodologies

2.1 The Birth of the Agile Manifesto

In February 2001, 17 software developers gathered at Snowbird, Utah, and wrote the Manifesto for Agile Software Development.

4 Core Values:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

Key Principles from the 12:

  • Continuous delivery of valuable software for customer satisfaction
  • Deliver working software frequently (2 weeks to 2 months)
  • Continuous attention to technical excellence and good design
  • Simplicity — the art of maximizing the amount of work not done

2.2 Scrum Framework

Scrum is the most widely used Agile framework, developed by Ken Schwaber and Jeff Sutherland in the early 1990s.

Three Pillars of Scrum:

  • Transparency: All processes and work visible to everyone
  • Inspection: Regularly review progress
  • Adaptation: Immediately adjust when problems are found

Scrum Team Composition:

  • Product Owner (PO): Manages backlog, maximizes business value
  • Scrum Master: Team coach, removes impediments, guards the process
  • Development Team: Self-organizing, 3 to 9 developers

Scrum Ceremonies:

Event                | Cadence      | Purpose                 | Time-box
---------------------|--------------|-------------------------|------------
Sprint               | 1 to 4 weeks | Development cycle       |
Sprint Planning      | Sprint start | Select goal and work    | 8 hrs/month
Daily Scrum          | Daily        | Sync, identify blockers | 15 minutes
Sprint Review        | Sprint end   | Demo completed work     | 4 hrs/month
Sprint Retrospective | Sprint end   | Process improvement     | 3 hrs/month

Scrum Artifacts:

  • Product Backlog: Prioritized list of all requirements (PO owns)
  • Sprint Backlog: List of work to complete in current Sprint
  • Increment: Potentially releasable product at Sprint end

Definition of Done Example:

  • Code review complete
  • Unit tests passing (80%+ coverage)
  • Integration tests passing
  • Documentation updated
  • Staging deployment successful

2.3 Kanban

Kanban is a visual workflow management methodology originating from Toyota's production system.

Core Principles:

  1. Visualize current work
  2. Limit Work in Progress (WIP)
  3. Manage flow
  4. Make process policies explicit
  5. Implement feedback loops
  6. Improve collaboratively

Kanban Board Example:

Backlog | Ready | In Progress (WIP: 3) | Review | Done
--------|-------|----------------------|--------|------
Task 5  |Task 4 |  Task 3              |Task 2  |Task 1
Task 6  |       |  Task 7              |        |
        |       |  Task 8              |        |
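The WIP limit on the "In Progress" column above is what makes Kanban pull-based: a task may only move into a column with free capacity. A minimal hand-rolled sketch of that rule (an illustration, not any real tool's API):

```typescript
interface Column {
  wipLimit?: number // undefined = unlimited (e.g. Backlog, Done)
  tasks: string[]
}

// Move a task between columns, refusing the pull when it would exceed
// the target column's WIP limit (Kanban principle 2).
function moveTask(board: Map<string, Column>, task: string, from: string, to: string): boolean {
  const src = board.get(from)
  const dst = board.get(to)
  if (!src || !dst || !src.tasks.includes(task)) return false
  if (dst.wipLimit !== undefined && dst.tasks.length >= dst.wipLimit) {
    return false // limit reached: finish existing work before starting new work
  }
  src.tasks = src.tasks.filter((t) => t !== task)
  dst.tasks.push(task)
  return true
}
```

On the board shown above, pulling Task 4 into "In Progress" would be refused until one of Tasks 3, 7, or 8 moves on to Review.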

Scrum vs Kanban:

Item     | Scrum                 | Kanban
---------|-----------------------|------------------------
Cadence  | Fixed Sprint          | Continuous flow
Changes  | Limited within Sprint | Anytime
Roles    | Clearly defined roles | No prescribed roles
Metrics  | Velocity              | Cycle time, throughput
Best for | New feature dev       | Maintenance, ops

2.4 SAFe (Scaled Agile Framework)

SAFe is a framework for applying Agile in large organizations.

4 Levels:

  1. Team Level: Scrum/Kanban teams (5 to 11 people)
  2. Program Level: Agile Release Train (ART, 50 to 125 people)
  3. Large Solution Level: Coordinating multiple ARTs
  4. Portfolio Level: Strategy and investment decisions

Program Increment (PI) Planning:

  • 8 to 12 week cadence
  • 2-day event with the entire ART
  • Visualize goals and dependencies for the next PI

3. Requirements Phase with Vibe Coding

3.1 User Stories + AI

Traditional User Story format:

As a [role], I want [feature], so that [benefit].

Using AI for User Story creation:

Prompt Example:

Write User Stories from the following business requirement:
"Customers of an online shop want product recommendations based on purchase history"

Format:
- User Story (As/I want/So that)
- Acceptance Criteria (Given/When/Then)
- Story Points estimation
- 5 potential edge cases

AI-Generated User Story Example:

User Story: Personalized Product Recommendations
As a repeat customer,
I want to see product recommendations based on my purchase history on the home screen,
So that I can quickly discover products I am interested in.

Acceptance Criteria:
- Given a customer with 3+ purchases
- When they access the home screen
- Then the top 10 recommendations based on purchase history are displayed
- And each recommendation shows "why it was recommended"
- And a "not interested" button hides that recommendation

3.2 Event Storming with Claude

Event Storming is a domain exploration workshop technique developed by Alberto Brandolini. You can run digital Event Storming sessions with Claude.

Workshop Flow:

Step 1: Identify Domain Events (orange sticky notes)
Ask Claude: "List all domain events in the e-commerce order domain"

Step 2: Identify Commands (blue sticky notes)
"What commands trigger each event?"

Step 3: Identify Actors (yellow sticky notes)
"Who (user/system) executes each command?"

Step 4: Identify Aggregates (large yellow sticky notes)
"What aggregates group related events and commands?"

Step 5: Define Bounded Contexts
"Group aggregates into logical boundaries"

Claude Prompt Example:

Run an Event Storming for the order processing domain.
Output format:

Domain Events (past tense):
- OrderPlaced
- PaymentProcessed
...

For each event:
1. Triggering command
2. Preconditions
3. Postconditions
4. Related aggregate
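The sticky-note vocabulary from the workshop steps can be captured in a small data model for a digital session (all names below are illustrative, not a prescribed schema):

```typescript
// Illustrative model of an Event Storming board: each domain event
// records the command that triggers it and the aggregate that owns it.
interface DomainEvent {
  name: string      // orange sticky, past tense
  command: string   // blue sticky
  aggregate: string // large yellow sticky
}

const orderDomain: DomainEvent[] = [
  { name: 'OrderPlaced', command: 'PlaceOrder', aggregate: 'Order' },
  { name: 'PaymentProcessed', command: 'ProcessPayment', aggregate: 'Payment' },
  { name: 'OrderShipped', command: 'ShipOrder', aggregate: 'Order' },
]

// Step 4 in code: group events by the aggregate that owns them.
function eventsByAggregate(events: DomainEvent[]): Map<string, string[]> {
  const groups = new Map<string, string[]>()
  for (const e of events) {
    const list = groups.get(e.aggregate) ?? []
    list.push(e.name)
    groups.set(e.aggregate, list)
  }
  return groups
}
```

Asking Claude to emit its Event Storming output in a structured form like this makes it easy to diff sessions and feed the result into later design steps.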

3.3 Providing Context with CLAUDE.md

CLAUDE.md is the context file Claude reads when first encountering a project. A well-written CLAUDE.md from the requirements phase greatly improves AI-assisted development quality.

CLAUDE.md Example:

# Project Context

## Project Overview

E-commerce platform backend service (Node.js + TypeScript)

## Tech Stack

- Runtime: Node.js 20, TypeScript 5.3
- Framework: NestJS 10
- Database: PostgreSQL 16 (TypeORM), Redis 7
- Message Queue: RabbitMQ
- Testing: Jest, Supertest

## Architecture Decisions

- DDD (Domain-Driven Design)
- CQRS pattern
- Event Sourcing (Order aggregate)

## Coding Conventions

- Prefer functional programming
- Use immutable data structures
- Error handling: Result type pattern (neverthrow library)

## Forbidden Patterns

- No 'any' type usage
- Use winston logger instead of console.log
- No direct DB queries (use Repository pattern)

## Test Requirements

- Unit test coverage 90%+
- Integration tests required for all public APIs
- E2E tests: core payment flow

## Current Sprint Context

Sprint 3: Recommendation system implementation

- User Story: US-123 Purchase history-based recommendations
- Tech debt: Legacy ProductService needs refactoring
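The Result-type error-handling rule in the conventions above can be illustrated with a hand-rolled equivalent of what the neverthrow library provides; `parseQuantity` is a hypothetical example, not project code:

```typescript
// Minimal hand-rolled Result type (neverthrow offers a richer version).
type Result<T, E> = { ok: true; value: T } | { ok: false; error: E }

const ok = <T>(value: T): Result<T, never> => ({ ok: true, value })
const err = <E>(error: E): Result<never, E> => ({ ok: false, error })

// Hypothetical parser that returns failures as values instead of throwing.
function parseQuantity(input: string): Result<number, string> {
  const n = Number(input)
  if (!Number.isInteger(n) || n <= 0) return err(`invalid quantity: ${input}`)
  return ok(n)
}
```

Callers cannot reach `value` without first checking `ok`, which is what makes the failure path explicit compared with thrown exceptions, and why stating the convention in CLAUDE.md keeps AI-generated code consistent with it.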

4. Design Phase: AI-Assisted Design

4.1 Architecture Decision Records (ADRs)

ADRs document important architectural decisions and their rationale. Writing ADRs with AI enables more systematic decision-making.

ADR Template:

# ADR-001: Database Selection

## Status

Approved (2026-03-17)

## Context

A database is needed for the recommendation system.
Requirements:

- Handle 10,000 read requests per second
- Graph queries between products
- Real-time updates

## Decision

Select PostgreSQL + pgvector extension

## Alternatives

1. MongoDB: Flexible schema but weak at complex relationship queries
2. Neo4j: Graph DB but team lacks experience, operational complexity
3. Redis + PostgreSQL: caching layer boosts read performance but adds operational complexity

## Consequences

- Positive: pgvector enables vector similarity search
- Positive: Leverages team's PostgreSQL experience
- Negative: Performance limits with very large graph queries
- Negative: Learning required for pgvector setup and indexing

Writing ADRs with Claude:

Write an ADR for our message queue selection.
Requirements:
- Process 50,000 messages per second
- At-least-once delivery guarantee
- Message reprocessing capability
- Team size: 5 people

Candidates: Kafka, RabbitMQ, AWS SQS
Recommend the optimal choice for our situation with pros/cons of each.

4.2 AI-Assisted Design Review

Review the following class diagram:
[paste diagram]

Check for:

1. SOLID principle violations
2. Circular dependencies
3. Interface segregation opportunities
4. Aggregate boundary appropriateness from DDD perspective
5. Improvement suggestions

5. Implementation Phase: Vibe Coding in Practice

5.1 AI-Assisted TDD (Test-Driven Development)

Traditional TDD: Red → Green → Refactor

AI-Assisted TDD Flow:

1. Explain requirements to Claude
2. Claude generates failing test code
3. Developer reviews and adjusts tests
4. Claude generates minimal implementation to pass tests
5. Developer and Claude refactor together

Example — Recommendation Service TDD:

// Failing tests generated by Claude (AI-assisted TDD, step 2).
// Assumes mockPurchaseRepository, mockProductRepository, service, and
// generateMockProducts are set up in a shared beforeEach (omitted here).
describe('RecommendationService', () => {
  describe('getPersonalizedRecommendations', () => {
    it('should return top 10 recommendations based on purchase history', async () => {
      // Arrange
      const userId = 'user-123'
      const purchaseHistory = [
        { productId: 'prod-1', category: 'Electronics', purchasedAt: new Date() },
        { productId: 'prod-2', category: 'Electronics', purchasedAt: new Date() },
      ]
      mockPurchaseRepository.findByUserId.mockResolvedValue(purchaseHistory)
      mockProductRepository.findSimilar.mockResolvedValue(generateMockProducts(15))

      // Act
      const result = await service.getPersonalizedRecommendations(userId)

      // Assert
      expect(result).toHaveLength(10)
      expect(result[0]).toMatchObject({
        productId: expect.any(String),
        score: expect.any(Number),
        reason: expect.any(String),
      })
      expect(mockPurchaseRepository.findByUserId).toHaveBeenCalledWith(userId)
    })

    it('should return popular products when no purchase history exists', async () => {
      mockPurchaseRepository.findByUserId.mockResolvedValue([])
      mockProductRepository.findPopular.mockResolvedValue(generateMockProducts(10))

      const result = await service.getPersonalizedRecommendations('new-user')

      expect(result).toHaveLength(10)
      expect(mockProductRepository.findPopular).toHaveBeenCalled()
    })

    it('should exclude already purchased products from recommendations', async () => {
      const purchasedProductId = 'prod-already-bought'
      mockPurchaseRepository.findByUserId.mockResolvedValue([
        { productId: purchasedProductId, category: 'Books' },
      ])
      const recommendations = generateMockProducts(12)
      recommendations[0].productId = purchasedProductId
      mockProductRepository.findSimilar.mockResolvedValue(recommendations)

      const result = await service.getPersonalizedRecommendations('user-123')

      expect(result.map((r) => r.productId)).not.toContain(purchasedProductId)
    })
  })
})
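Step 4 of the flow, the minimal implementation that makes these tests pass, might then look like the sketch below. The repository interfaces are assumptions inferred from the mocks, not a real API:

```typescript
interface Purchase { productId: string; category: string }
interface Recommendation { productId: string; score: number; reason: string }

// Minimal implementation driven out by the tests above. The repository
// shapes mirror the mocks (findByUserId / findSimilar / findPopular).
class RecommendationService {
  constructor(
    private purchases: { findByUserId(userId: string): Promise<Purchase[]> },
    private products: {
      findSimilar(categories: string[]): Promise<Recommendation[]>
      findPopular(): Promise<Recommendation[]>
    }
  ) {}

  async getPersonalizedRecommendations(userId: string): Promise<Recommendation[]> {
    const history = await this.purchases.findByUserId(userId)
    if (history.length === 0) {
      // Cold start: fall back to popular products
      return (await this.products.findPopular()).slice(0, 10)
    }
    const purchased = new Set(history.map((p) => p.productId))
    const categories = [...new Set(history.map((p) => p.category))]
    const candidates = await this.products.findSimilar(categories)
    // Exclude already-purchased products, then take the top 10
    return candidates.filter((c) => !purchased.has(c.productId)).slice(0, 10)
  }
}
```

The developer's job in step 5 is to push back on shortcuts like this one (e.g. real scoring instead of pass-through `score`) while keeping the tests green.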

5.2 Pair Programming with AI

Effective AI pair programming patterns:

Pattern 1: Driver-Navigator

  • Developer as driver (writes code)
  • Claude as navigator (provides direction, reviews)

Pattern 2: Ping-Pong TDD

  • Claude writes tests
  • Developer implements
  • Claude suggests refactoring
  • Developer accepts/rejects

Pattern 3: Exploratory Programming

  • Claude generates rapid prototype
  • Developer experiments and learns
  • Together refine to production quality

5.3 Code Review with Claude

Review the following PR:
[paste code]

Review criteria:

1. Bugs and edge cases
2. Performance issues (N+1 queries, memory leaks)
3. Security vulnerabilities (SQL injection, XSS, auth/authz)
4. Coding conventions (per CLAUDE.md)
5. Test coverage
6. Readability and maintainability

For each issue:

- Severity: Critical/Major/Minor
- Improved code example
- Explanation of why

6. Testing Phase: AI-Powered Test Automation

6.1 AI-Generated Test Cases

Automated Equivalence Partitioning and Boundary Value Analysis:

Generate comprehensive test cases for the following function:

function calculateShippingFee(weight: number, distance: number, isPremium: boolean): number

Business rules:

- Weight 0-1kg: base fee $3
- Weight 1-5kg: add $0.50/kg
- Weight over 5kg: 20% surcharge on total
- Distance over 100km: add $1
- Premium member: 10% discount
- Free shipping: premium + under 10kg

When generating test cases:

- Apply boundary value analysis
- Identify equivalence partition classes
- Include negatives, zero, extreme values
- Include compound condition tests
- Write in Jest format
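The rules as written leave room for interpretation (for example, whether the premium free-shipping rule overrides the other charges). One possible reading, useful as an oracle when reviewing the AI-generated test cases:

```typescript
// One interpretation of the business rules above. Assumptions: the
// premium free-shipping rule takes precedence over all other charges,
// and the $0.50/kg rate applies only to the 1-5 kg range.
function calculateShippingFee(weight: number, distance: number, isPremium: boolean): number {
  if (weight <= 0 || distance < 0) throw new RangeError('invalid input')
  if (isPremium && weight < 10) return 0 // free shipping: premium + under 10 kg
  let fee = 3 // base fee covers the first 1 kg
  if (weight > 1) fee += 0.5 * (Math.min(weight, 5) - 1) // $0.50/kg for 1-5 kg
  if (weight > 5) fee *= 1.2 // 20% surcharge on the total over 5 kg
  if (distance > 100) fee += 1
  if (isPremium) fee *= 0.9 // 10% premium discount
  return Math.round(fee * 100) / 100
}
```

Whichever interpretation the team settles on, pinning it down in code (or in CLAUDE.md) before asking AI for test cases prevents the tests from silently encoding a different reading.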

6.2 Mutation Testing

Mutation testing measures the quality of your tests by injecting intentional bugs (mutants) into the code and checking whether your tests catch them.

Stryker.js Configuration:

{
  "mutate": ["src/**/*.ts", "!src/**/*.spec.ts"],
  "testRunner": "jest",
  "reporters": ["html", "clear-text", "progress"],
  "thresholds": {
    "high": 80,
    "low": 60,
    "break": 50
  }
}

Analyzing Mutation Results with Claude:

Analyze the following mutation testing results and add tests to kill surviving mutants:

Surviving mutants:
1. Line 45: Changed '>' to '>=' — tests did not detect
2. Line 72: Changed '&&' to '||' — tests did not detect

Current test code:
[paste test code]

6.3 Property-Based Testing

import fc from 'fast-check'
// calculateShippingFee is the module under test (import path illustrative)
import { calculateShippingFee } from './shipping-fee'

describe('ShippingFee property-based tests', () => {
  it('premium members always pay less than or equal to regular members', () => {
    fc.assert(
      fc.property(
        fc.double({ min: 0.1, max: 100 }), // fc.float requires exact 32-bit bounds; 0.1 is not one
        fc.integer({ min: 1, max: 1000 }),
        (weight, distance) => {
          const regularFee = calculateShippingFee(weight, distance, false)
          const premiumFee = calculateShippingFee(weight, distance, true)
          return premiumFee <= regularFee
        }
      )
    )
  })

  it('shipping fee is always non-negative', () => {
    fc.assert(
      fc.property(
        fc.float({ min: 0, max: 1000 }),
        fc.integer({ min: 0, max: 10000 }),
        fc.boolean(),
        (weight, distance, isPremium) => {
          return calculateShippingFee(weight, distance, isPremium) >= 0
        }
      )
    )
  })
})

7. Complete Guide to Writing CLAUDE.md

A well-crafted CLAUDE.md is the key factor determining the quality of AI-assisted development.

# ProjectName — Claude Context File

## Project Overview

- Purpose: [one sentence]
- Stage: [MVP/Beta/Production]
- Team size: [n people]
- Domain: [e-commerce/fintech/healthcare etc.]

## Tech Stack

### Backend

- Language: TypeScript 5.3 (strict mode)
- Runtime: Node.js 20 LTS
- Framework: NestJS 10
- ORM: TypeORM 0.3
- Database: PostgreSQL 16

### Frontend

- Framework: Next.js 14 (App Router)
- Styling: Tailwind CSS 3.4
- State: Zustand 4

### Infrastructure

- Container: Docker + Kubernetes
- CI/CD: GitHub Actions
- Cloud: AWS (EKS, RDS, ElastiCache)

## Architecture

- Pattern: DDD + CQRS + Event Sourcing
- Service separation: Modular monolith
- API: REST + WebSocket (real-time notifications)

## Coding Rules

### Must Follow

- TypeScript strict mode: no 'any' type ever
- Function length: keep under 30 lines
- Nesting depth: 3 levels max
- Error handling: Result pattern (neverthrow)
- Logging: winston logger (no console.log)

### Naming Conventions

- Classes: PascalCase
- Functions/variables: camelCase
- Constants: UPPER_SNAKE_CASE
- Files: kebab-case.ts
- Test files: *.spec.ts

## Test Strategy

- Unit tests: Domain logic (90% coverage)
- Integration tests: API endpoints (80% coverage)
- E2E: 3 core business flows
- Mutation testing: mutant survival rate under 20%

## Current Sprint Context

Sprint 3 (2026-03-10 to 2026-03-24)
Goal: Recommendation system v1 launch

In progress:

- US-123: Purchase history recommendation API
- US-124: Real-time recommendation updates via WebSocket

Technical debt:

- Legacy ProductService refactoring needed
- UserRepository test coverage improvement needed

## Forbidden Patterns

- Direct SQL queries (use Repository pattern)
- Synchronous file I/O (fs.readFileSync etc.)
- Hardcoded secrets (use environment variables)
- Indiscriminate try-catch (specify error types)

8. Practical Example: Next.js App Vibe Coding Workflow

8.1 Project Initialization

# Step 1: Create project
npx create-next-app@latest my-shop --typescript --tailwind --app

# Step 2: Create CLAUDE.md (first Claude prompt)

CLAUDE.md Initial Setup Prompt:

Starting a Next.js 14 App Router-based e-commerce project.
Generate CLAUDE.md with these requirements:

- Product listing, detail, cart, checkout features
- TypeScript strict mode
- Tailwind CSS + shadcn/ui
- Zustand state management
- React Query v5 for server state
- Prisma + PostgreSQL
- NextAuth.js authentication
- Vitest + Testing Library

Team conventions:
- Components: atomic design with shadcn/ui base
- Server components by default, minimize client components
- Error boundary + Suspense patterns

8.2 Feature Development Flow

Step 1: Define User Story

US-001: Add Product to Cart
As a shopper
I want to add a product to my cart from the product detail page
So that I can purchase multiple items at once later

Acceptance Criteria:
- Given I am on a product detail page
- When I click "Add to Cart"
- Then the product with selected options and quantity is added to cart
- And the cart icon updates with the new count
- And if already in cart, quantity is incremented

Step 2: Request Implementation

Implement the above User Story.

Technical requirements:
- Add cartSlice to Zustand store
- Server Action for cart server sync
- Apply Optimistic Update
- Real-time inventory check (useQuery)
- Error handling: out of stock, network error
- Animation: cart icon bounce effect

File structure:
- store/cartStore.ts
- components/product/AddToCartButton.tsx
- app/actions/cart.ts
- hooks/useCart.ts
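The core behavior in the acceptance criteria (incrementing quantity when the product is already in the cart) is framework-independent and can be sketched as a pure function before wiring it into the hypothetical `cartStore`:

```typescript
interface CartItem {
  productId: string
  quantity: number
}

// US-001 rule: adding a product already in the cart increments its
// quantity instead of creating a duplicate line item.
function addToCart(items: CartItem[], productId: string, quantity = 1): CartItem[] {
  const existing = items.find((item) => item.productId === productId)
  if (existing) {
    return items.map((item) =>
      item.productId === productId
        ? { ...item, quantity: item.quantity + quantity }
        : item
    )
  }
  return [...items, { productId, quantity }]
}
```

Returning a new array rather than mutating keeps the function compatible with Zustand's immutable-update style, and a pure function like this is trivially unit-testable in Step 3.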

Step 3: Write Tests

Write tests for the above implementation:
1. cartStore unit tests (Vitest)
2. AddToCartButton component tests (Testing Library)
3. cart Server Action integration tests
4. Property-based tests (inventory constraints)

Quiz

Q1: What is the fundamental difference between Waterfall and Agile?

Answer: Waterfall phases proceed sequentially making changes difficult, while Agile uses short iteration cycles (Sprints) to flexibly respond to changes.

Explanation: In Waterfall, the design phase cannot begin until the requirements phase is fully complete, meaning changes to requirements during design incur enormous costs. In contrast, Agile delivers working software every 2-4 week Sprint, enabling rapid incorporation of customer feedback. A core Agile value is "responding to change over following a plan."

Q2: What are the three Scrum roles and their responsibilities?

Answer: Product Owner (manages backlog priorities), Scrum Master (team coach and impediment remover), Development Team (self-organizing development).

Explanation: The Product Owner manages the Product Backlog and determines priorities to maximize business value. The Scrum Master coaches the team in Scrum practices and removes organizational impediments. The Development Team self-organizes to distribute and execute work within a Sprint. Critically, the Scrum Master is a servant leader, not the team's manager.

Q3: What are the 5 key elements to include in CLAUDE.md?

Answer: Tech stack, architecture decisions, coding conventions, current Sprint context, and forbidden patterns.

Explanation: CLAUDE.md helps AI understand project context. The tech stack ensures AI generates code in the correct language/framework. Architecture decisions maintain consistent design. Coding conventions produce code matching team style. Current Sprint context clarifies what to focus on. Forbidden patterns avoid known anti-patterns.

Q4: What does the "V shape" in V-Model represent?

Answer: The left descending side shows development phases (requirements to implementation), the right ascending side shows test phases corresponding to each development phase, representing the Verification and Validation relationship.

Explanation: The core of V-Model is that each development phase has a corresponding test phase. Requirements analysis corresponds to acceptance testing, system design to system testing, and detailed design to unit testing. This enables test planning from early in development and is widely used in embedded systems and medical device software.

Q5: When is property-based testing more advantageous than regular unit testing?

Answer: When validating business invariants, when there are many edge cases, and when testing functions with mathematical properties (sorting, inverse functions, etc.).

Explanation: Regular unit testing only tests cases the developer thought of. Property-based testing lets you define a property like "premium members always pay less than regular members," then automatically generates thousands of random inputs to verify the property always holds. It is very effective at discovering edge cases developers might miss, implemented with libraries like fast-check (JavaScript) or Hypothesis (Python).

Q6: What are the main Event Storming components and their color codes?

Answer: Orange (domain events), blue (commands), small yellow (actors), large yellow (aggregates), pink (hotspots/problems), green (external systems).

Explanation: Event Storming is a workshop where domain experts and developers explore the domain together. Domain events (orange) represent past facts like "OrderPlaced" or "PaymentProcessed." Commands (blue) are actions that trigger events. Aggregates (large yellow) are logical units grouping related events and commands. Using Claude enables rapid digital Event Storming sessions.


Summary

The choice of SW development methodology is a critical factor in project success. Traditional methodologies (Waterfall, V-Model) provide value in domains with clear requirements and minimal changes, while Agile (Scrum, Kanban) suits modern product development requiring high adaptability and rapid feedback.

Vibe Coding is not a replacement for specific methodologies, but a development paradigm that uses AI as a powerful partner at every phase. By providing clear context through CLAUDE.md and strategically integrating AI from User Story writing to code generation and test automation, you can dramatically improve development productivity.

Next Steps:

  • Apply the CLAUDE.md template to your current project
  • Introduce AI-assisted User Story writing in Sprint Planning
  • Start measuring test quality with mutation testing