Event Sourcing and CQRS Pattern Implementation Guide: From Design to Operations

Why Event Sourcing and CQRS Are Discussed Together

Event Sourcing and CQRS (Command Query Responsibility Segregation) are independent patterns, but in production environments they are almost always used together. When you record the complete history of state changes in an event stream with Event Sourcing, the current state can only be restored through event replay, causing read performance to drop sharply. Separating read-only projections (Read Models) with CQRS solves this problem.

This article goes beyond conceptual explanations. It covers building an event store on EventStoreDB, implementing command handlers and projections in TypeScript and Python, snapshot strategies, event schema versioning, and operational disaster recovery procedures. It also reflects the 2025 rebranding of EventStoreDB to Kurrent and the current gRPC client API.

Overall Architecture

The complete flow of an Event Sourcing + CQRS system works as follows:

  1. The client sends a Command
  2. The command handler validates domain invariants
  3. Upon passing validation, it creates domain events and appends them to the event store
  4. The event store pushes events to subscribers
  5. Projection handlers receive events and update the Read Model
  6. Query handlers retrieve data from the Read Model and return it to the client

Core principle: The write path (Command Path) and read path (Query Path) are completely separated. The write side only writes to the event store, and the read side only queries from projected read models.
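The six steps above can be sketched end to end with in-memory stand-ins for the event store and read model. All names here are illustrative, not part of any library:

```typescript
// Minimal in-memory sketch of the command/query split.
// Hypothetical names; a real system would use an event store client.

type Event = { type: 'OrderCreated'; orderId: string; totalAmount: number }

const eventStore: Event[] = [] // write side: append-only log
const readModel = new Map<string, { totalAmount: number }>() // read side: projection

// Steps 1-3. Command handler: validate invariants, then append an event
function handleCreateOrder(orderId: string, totalAmount: number): void {
  if (totalAmount <= 0) throw new Error('Order total must be positive')
  eventStore.push({ type: 'OrderCreated', orderId, totalAmount })
}

// Steps 4-5. Projection handler: consume events, update the read model
function project(event: Event): void {
  if (event.type === 'OrderCreated') {
    readModel.set(event.orderId, { totalAmount: event.totalAmount })
  }
}

// Step 6. Query handler: read only from the read model
function getOrder(orderId: string) {
  return readModel.get(orderId)
}

handleCreateOrder('order-1', 42_000)
eventStore.forEach(project) // in production this is an async subscription
```

Note that the command handler never reads the read model and the query handler never touches the event store; that is the separation the rest of this article builds on.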

Event Store Solution Comparison

Here is a comparison of production-ready event store solutions. Choosing the right one depends on your technology stack and operational environment.

| Solution | Language Ecosystem | Storage Method | Built-in Projections | License | Operational Complexity |
| --- | --- | --- | --- | --- | --- |
| EventStoreDB (Kurrent) | Language-agnostic (gRPC) | Dedicated file system | Yes (JS-based) | BSL (free for commercial use) | Medium |
| Axon Server | JVM (Java/Kotlin) | Dedicated storage | No (handled by the Framework) | Open Source + Enterprise | Medium |
| Marten | .NET (C#) | PostgreSQL | Yes (C#-based) | MIT | Low |
| EventSourcingDB | Language-agnostic (gRPC) | Dedicated engine | No | Open Source | Low |
| PostgreSQL + DIY | Language-agnostic | RDBMS | No (build your own) | Open Source | High |
| MongoDB + DIY | Language-agnostic | Document DB | No (build your own) | SSPL | High |

EventStoreDB is a dedicated event store designed by Greg Young, providing native support for stream-based append-only storage, built-in projections, catch-up subscriptions, and other features essential for event sourcing. This article uses EventStoreDB as the basis for implementation examples.

Domain Event Design

In an event sourcing system, domain events are the most important contract of the system. Once stored, events are never modified or deleted. Therefore, careful event schema design is essential.

Event Design Principles

  • Name events using past-tense verbs: OrderCreated, PaymentProcessed, ItemShipped
  • Capture business intent: RoomBooked rather than ReservationStatusChanged
  • Events must be self-contained: it should be possible to fully understand what happened from a single event
  • Use only simple types: string, number, boolean, arrays. Do not embed Value Objects directly in events
  • Include a version field: essential for schema evolution

TypeScript Event Definitions

// events/order-events.ts

export interface BaseEvent {
  eventType: string
  eventVersion: number
  aggregateId: string
  timestamp: string
  metadata: {
    correlationId: string
    causationId: string
    userId: string
  }
}

export interface OrderCreated extends BaseEvent {
  eventType: 'OrderCreated'
  eventVersion: 1
  data: {
    orderId: string
    customerId: string
    items: Array<{
      productId: string
      productName: string
      quantity: number
      unitPrice: number
    }>
    totalAmount: number
    currency: string
    shippingAddress: {
      street: string
      city: string
      zipCode: string
      country: string
    }
  }
}

export interface OrderPaymentConfirmed extends BaseEvent {
  eventType: 'OrderPaymentConfirmed'
  eventVersion: 1
  data: {
    orderId: string
    paymentId: string
    amount: number
    method: 'CREDIT_CARD' | 'BANK_TRANSFER' | 'WALLET'
    confirmedAt: string
  }
}

export interface OrderCancelled extends BaseEvent {
  eventType: 'OrderCancelled'
  eventVersion: 1
  data: {
    orderId: string
    reason: string
    cancelledBy: string
    refundAmount: number
  }
}

export type OrderEvent = OrderCreated | OrderPaymentConfirmed | OrderCancelled

Always include correlationId and causationId in the metadata. In distributed systems where a single user request generates multiple events, troubleshooting becomes impossible without these two fields.
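One common convention, sketched below with hypothetical helper names, is that the correlationId stays constant across the whole request chain, while each new message's causationId is the id of the message that directly triggered it:

```typescript
// Hypothetical metadata helpers illustrating the usual convention:
// correlationId identifies the whole chain, causationId the direct trigger.

interface MessageMeta {
  messageId: string
  correlationId: string
  causationId: string
  userId: string
}

let seq = 0
const newId = () => `msg-${++seq}` // stand-in for a UUID generator

// Metadata for the first message in a chain (e.g. the initial command)
function initialMeta(userId: string): MessageMeta {
  const id = newId()
  return { messageId: id, correlationId: id, causationId: id, userId }
}

// Metadata for a message caused by an earlier one
function causedBy(parent: MessageMeta): MessageMeta {
  return {
    messageId: newId(),
    correlationId: parent.correlationId, // unchanged across the chain
    causationId: parent.messageId,       // points at the direct trigger
    userId: parent.userId,
  }
}

const command = initialMeta('user-7')
const event = causedBy(command)
const followUp = causedBy(event)
```

With this convention, filtering logs by correlationId reconstructs the whole request, and following causationId links reconstructs the exact cause-and-effect chain.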

Command Handler and Aggregate Implementation

The command handler is responsible for the write path of an event sourcing system. The Aggregate is the boundary that enforces domain invariants, restoring the current state by replaying events.

TypeScript Aggregate Implementation

// aggregates/order-aggregate.ts

import { EventStoreDBClient, jsonEvent, FORWARDS, START } from '@eventstore/db-client'
import type { OrderEvent, OrderCreated, OrderPaymentConfirmed } from '../events/order-events'

interface OrderState {
  orderId: string
  status: 'CREATED' | 'PAID' | 'SHIPPED' | 'CANCELLED'
  customerId: string
  totalAmount: number
  items: Array<{ productId: string; quantity: number; unitPrice: number }>
  version: number
}

class OrderAggregate {
  private state: OrderState
  private pendingEvents: OrderEvent[] = []

  constructor() {
    this.state = {
      orderId: '',
      status: 'CREATED',
      customerId: '',
      totalAmount: 0,
      items: [],
      version: -1,
    }
  }

  // Restore state through event replay
  static async load(client: EventStoreDBClient, orderId: string): Promise<OrderAggregate> {
    const aggregate = new OrderAggregate()
    const streamName = `order-${orderId}`

    const events = client.readStream(streamName, {
      direction: FORWARDS,
      fromRevision: START,
    })

    for await (const resolvedEvent of events) {
      const recorded = resolvedEvent.event
      if (recorded) {
        // The stored payload holds only the `data` field (see save() below),
        // so rebuild the domain event from the recorded type, data, and metadata
        aggregate.apply(
          { eventType: recorded.type, data: recorded.data, metadata: recorded.metadata } as unknown as OrderEvent,
          false
        )
        aggregate.state.version = Number(recorded.revision)
      }
    }

    return aggregate
  }

  // Command: Create order
  createOrder(command: {
    orderId: string
    customerId: string
    items: Array<{ productId: string; quantity: number; unitPrice: number }>
  }): void {
    // Invariant validation
    if (this.state.orderId !== '') {
      throw new Error(`Order ${command.orderId} already exists`)
    }
    if (command.items.length === 0) {
      throw new Error('Order must contain at least one item')
    }

    const totalAmount = command.items.reduce((sum, item) => sum + item.quantity * item.unitPrice, 0)

    const event: OrderCreated = {
      eventType: 'OrderCreated',
      eventVersion: 1,
      aggregateId: command.orderId,
      timestamp: new Date().toISOString(),
      metadata: { correlationId: '', causationId: '', userId: '' },
      data: {
        orderId: command.orderId,
        customerId: command.customerId,
        items: command.items.map((i) => ({
          ...i,
          productName: '', // Populated on query
        })),
        totalAmount,
        currency: 'KRW',
        shippingAddress: { street: '', city: '', zipCode: '', country: 'KR' },
      },
    }

    this.apply(event, true)
  }

  // Command: Confirm payment
  confirmPayment(
    paymentId: string,
    amount: number,
    method: 'CREDIT_CARD' | 'BANK_TRANSFER' | 'WALLET'
  ): void {
    if (this.state.status !== 'CREATED') {
      throw new Error(`Cannot confirm payment for order in status: ${this.state.status}`)
    }
    if (amount !== this.state.totalAmount) {
      throw new Error(
        `Payment amount ${amount} does not match order total ${this.state.totalAmount}`
      )
    }

    const event: OrderPaymentConfirmed = {
      eventType: 'OrderPaymentConfirmed',
      eventVersion: 1,
      aggregateId: this.state.orderId,
      timestamp: new Date().toISOString(),
      metadata: { correlationId: '', causationId: '', userId: '' },
      data: {
        orderId: this.state.orderId,
        paymentId,
        amount,
        method,
        confirmedAt: new Date().toISOString(),
      },
    }

    this.apply(event, true)
  }

  // Apply event (state transition)
  private apply(event: OrderEvent, isNew: boolean): void {
    switch (event.eventType) {
      case 'OrderCreated':
        this.state.orderId = event.data.orderId
        this.state.customerId = event.data.customerId
        this.state.totalAmount = event.data.totalAmount
        this.state.items = event.data.items
        this.state.status = 'CREATED'
        break
      case 'OrderPaymentConfirmed':
        this.state.status = 'PAID'
        break
      case 'OrderCancelled':
        this.state.status = 'CANCELLED'
        break
    }

    if (isNew) {
      this.pendingEvents.push(event)
    }
  }

  // Save to event store
  async save(client: EventStoreDBClient): Promise<void> {
    if (this.pendingEvents.length === 0) return

    const streamName = `order-${this.state.orderId}`
    const events = this.pendingEvents.map((e) =>
      jsonEvent({ type: e.eventType, data: e.data, metadata: e.metadata })
    )

    await client.appendToStream(streamName, events, {
      expectedRevision: this.state.version === -1 ? 'no_stream' : BigInt(this.state.version),
    })

    this.pendingEvents = []
  }
}

The expectedRevision parameter is the core of optimistic concurrency control. When two commands execute simultaneously against the same Aggregate, the first save succeeds and the second fails with a WrongExpectedVersionError. Retry logic must then reload the stream and re-execute the command against the updated state.
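A retry loop around load-decide-save might look like the following sketch. The error name matches @eventstore/db-client's WrongExpectedVersionError, but the loop itself is generic and the helper name is illustrative:

```typescript
// Illustrative retry wrapper for optimistic-concurrency conflicts.
// `attempt` must reload the aggregate and re-run the command on each call.

class WrongExpectedVersionError extends Error {}

function withConflictRetry<T>(attempt: () => T, maxRetries = 3): T {
  for (let tries = 0; ; tries++) {
    try {
      return attempt() // reload aggregate + execute command + save
    } catch (err) {
      if (!(err instanceof WrongExpectedVersionError) || tries >= maxRetries) {
        throw err // non-concurrency errors and exhausted retries propagate
      }
      // loop: the next attempt re-reads the stream, so the command is
      // re-validated against the state written by the competing writer
    }
  }
}

// Simulated competing writer: first attempt conflicts, second succeeds
let calls = 0
const result = withConflictRetry(() => {
  calls++
  if (calls === 1) throw new WrongExpectedVersionError('stream moved')
  return 'saved'
})
```

The key point is that the retry re-runs the whole load-decide-save cycle, not just the append: the invariants must be re-checked against the state the competing writer produced.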

Projection Design and Implementation

Projections are the process of building read models from event streams. In an event sourcing system, projections realize the core advantage of being able to freely create views in any desired shape.

Projection Pattern Classification

  • Inline Projection: Updates the read model simultaneously with event storage. Guarantees strong consistency but degrades write performance.
  • Async Projection: Updates the read model asynchronously through subscriptions. Provides eventual consistency but offers good write performance. Most production systems adopt this approach.
  • Live Projection: Replays events for every request. Always guarantees the latest state, but performance drops sharply when there are many events.

Python Async Projection Implementation

The following is an example of an async projection implementation using EventStoreDB's Persistent Subscription in Python.

# projections/order_summary_projection.py

import asyncio
import json
from datetime import datetime
from esdbclient import EventStoreDBClient, NewEvent, StreamState
from dataclasses import dataclass, asdict
from typing import Optional
import asyncpg

@dataclass
class OrderSummaryReadModel:
    order_id: str
    customer_id: str
    status: str
    total_amount: float
    item_count: int
    created_at: str
    updated_at: str
    payment_method: Optional[str] = None
    cancelled_reason: Optional[str] = None


class OrderSummaryProjection:
    def __init__(self, esdb_client: EventStoreDBClient, pg_pool: asyncpg.Pool):
        self.esdb = esdb_client
        self.pg_pool = pg_pool
        self._checkpoint_interval = 100
        self._processed_count = 0

    async def start(self):
        """Start projection with catch-up subscription"""
        last_position = await self._load_checkpoint()

        subscription = self.esdb.subscribe_to_all(
            commit_position=last_position,
            filter_include=[r"order-.*"],  # regex matched against stream names
            filter_by_stream_name=True,    # filter on stream name, not event type
        )

        # Note: esdbclient's subscription is a blocking iterator; in a fully
        # async service, run it in a worker thread or use the async client.
        for event in subscription:
            try:
                await self._handle_event(event)
                self._processed_count += 1

                # Periodically save checkpoint
                if self._processed_count % self._checkpoint_interval == 0:
                    await self._save_checkpoint(event.commit_position)

            except Exception as e:
                print(f"Projection error at position {event.commit_position}: {e}")
                # Re-raise without advancing the checkpoint so the failed event
                # is retried on restart; repeated failures should go to a DLQ
                raise

    async def _handle_event(self, event):
        """Route to handler by event type"""
        handler_map = {
            'OrderCreated': self._on_order_created,
            'OrderPaymentConfirmed': self._on_payment_confirmed,
            'OrderCancelled': self._on_order_cancelled,
        }

        handler = handler_map.get(event.type)
        if handler:
            data = json.loads(event.data)
            await handler(data)

    async def _on_order_created(self, data: dict):
        """Handle order created event - INSERT into read model"""
        async with self.pg_pool.acquire() as conn:
            await conn.execute("""
                INSERT INTO order_summary (
                    order_id, customer_id, status, total_amount,
                    item_count, created_at, updated_at
                ) VALUES ($1, $2, $3, $4, $5, $6, $7)
                ON CONFLICT (order_id) DO UPDATE SET
                    status = EXCLUDED.status,
                    updated_at = EXCLUDED.updated_at
            """,
                data['orderId'],
                data['customerId'],
                'CREATED',
                data['totalAmount'],
                len(data['items']),
                datetime.fromisoformat(data.get('createdAt', datetime.now().isoformat())),
                datetime.now(),
            )

    async def _on_payment_confirmed(self, data: dict):
        """Handle payment confirmed event - UPDATE read model"""
        async with self.pg_pool.acquire() as conn:
            await conn.execute("""
                UPDATE order_summary
                SET status = 'PAID',
                    payment_method = $2,
                    updated_at = $3
                WHERE order_id = $1
            """, data['orderId'], data['method'], datetime.now())

    async def _on_order_cancelled(self, data: dict):
        """Handle order cancelled event"""
        async with self.pg_pool.acquire() as conn:
            await conn.execute("""
                UPDATE order_summary
                SET status = 'CANCELLED',
                    cancelled_reason = $2,
                    updated_at = $3
                WHERE order_id = $1
            """, data['orderId'], data['reason'], datetime.now())

    async def _load_checkpoint(self) -> Optional[int]:
        """Load last processed position"""
        async with self.pg_pool.acquire() as conn:
            row = await conn.fetchrow(
                "SELECT position FROM projection_checkpoints WHERE name = $1",
                'order_summary'
            )
            return row['position'] if row else None

    async def _save_checkpoint(self, position: int):
        """Save processed position"""
        async with self.pg_pool.acquire() as conn:
            await conn.execute("""
                INSERT INTO projection_checkpoints (name, position, updated_at)
                VALUES ($1, $2, NOW())
                ON CONFLICT (name) DO UPDATE SET
                    position = EXCLUDED.position,
                    updated_at = NOW()
            """, 'order_summary', position)

The most important aspect of projection implementation is idempotency. The ON CONFLICT ... DO UPDATE clause in the code above is critical. When a projection fails midway and restarts, it may receive events that have already been processed. Without idempotent handling, data becomes corrupted.
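Idempotency is easy to verify with a pure in-memory model of the upsert: applying the same event twice must leave the read model in exactly the same state. An illustrative sketch, not tied to any database:

```typescript
// Idempotent projection sketch: an upsert keyed by orderId, so replaying an
// already-processed event converges to the same row instead of corrupting it.

interface OrderRow { orderId: string; status: string; totalAmount: number }

const table = new Map<string, OrderRow>() // stands in for the order_summary table

function onOrderCreated(e: { orderId: string; totalAmount: number }): void {
  // Equivalent of INSERT ... ON CONFLICT (order_id) DO UPDATE
  table.set(e.orderId, { orderId: e.orderId, status: 'CREATED', totalAmount: e.totalAmount })
}

const event = { orderId: 'order-9', totalAmount: 5000 }
onOrderCreated(event)
const afterFirst = JSON.stringify(table.get('order-9'))
onOrderCreated(event) // replayed after a projection restart
const afterSecond = JSON.stringify(table.get('order-9'))
```

A non-idempotent handler (for example, one that appends rows or increments counters unconditionally) would fail this property and silently corrupt the read model on every replay.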

Read Model Table Schema

-- Read model table used by projections
CREATE TABLE order_summary (
    order_id        VARCHAR(36) PRIMARY KEY,
    customer_id     VARCHAR(36) NOT NULL,
    status          VARCHAR(20) NOT NULL,
    total_amount    DECIMAL(15,2) NOT NULL,
    item_count      INTEGER NOT NULL,
    payment_method  VARCHAR(20),
    cancelled_reason TEXT,
    created_at      TIMESTAMP NOT NULL,
    updated_at      TIMESTAMP NOT NULL
);

-- Index optimized for customer-based order queries
CREATE INDEX idx_order_summary_customer ON order_summary(customer_id, status);

-- Index for status-based filtering
CREATE INDEX idx_order_summary_status ON order_summary(status, created_at DESC);

-- Projection checkpoint table
CREATE TABLE projection_checkpoints (
    name        VARCHAR(100) PRIMARY KEY,
    position    BIGINT NOT NULL,
    updated_at  TIMESTAMP NOT NULL
);

Read models are freely designed according to projection requirements. There is no need for normalization. You can run multiple projections simultaneously for customer dashboards, admin statistics, search engine indexing, and more. When a new view is needed, simply add a new projection and replay events from the beginning.
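Adding a view is just adding another consumer of the same events. A sketch of a second projection (hypothetical names) that aggregates per-customer statistics by folding over the event history:

```typescript
// A second projection over the same event stream: per-customer statistics.
// New views never touch the write side; they only fold over events.

interface OrderCreatedEvt { customerId: string; totalAmount: number }

function customerStats(
  events: OrderCreatedEvt[]
): Map<string, { orders: number; revenue: number }> {
  const stats = new Map<string, { orders: number; revenue: number }>()
  for (const e of events) {
    const s = stats.get(e.customerId) ?? { orders: 0, revenue: 0 }
    s.orders += 1
    s.revenue += e.totalAmount
    stats.set(e.customerId, s)
  }
  return stats
}

// Replaying history from the beginning builds the new view retroactively
const history: OrderCreatedEvt[] = [
  { customerId: 'c1', totalAmount: 100 },
  { customerId: 'c2', totalAmount: 250 },
  { customerId: 'c1', totalAmount: 50 },
]
const stats = customerStats(history)
```

Because the fold runs over the full history, the new view is immediately correct for all past orders, something a CRUD system could only achieve if it had kept an audit log from day one.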

Snapshot Strategy

When an Aggregate accumulates thousands or tens of thousands of events, replaying all events each time becomes inefficient. Snapshots save the Aggregate state at specific points to reduce the replay range.

Snapshot Application Criteria

Introducing snapshots too early only increases complexity. Use the following criteria for judgment:

  • Consider introducing snapshots when the average number of events per Aggregate exceeds 50
  • Snapshots are needed when event replay time exceeds 100ms
  • Snapshots are typically created every 100 to 500 events

TypeScript Snapshot Implementation

// snapshots/snapshot-store.ts

import { EventStoreDBClient, jsonEvent, BACKWARDS, END } from '@eventstore/db-client'
interface Snapshot<T> {
  aggregateId: string
  aggregateType: string
  version: number // Aggregate version at snapshot time
  schemaVersion: number // Snapshot schema version (for upgrade detection)
  state: T
  createdAt: string
}

class SnapshotStore {
  private client: EventStoreDBClient
  private snapshotInterval: number

  constructor(client: EventStoreDBClient, snapshotInterval = 200) {
    this.client = client
    this.snapshotInterval = snapshotInterval
  }

  async saveSnapshot<T>(snapshot: Snapshot<T>): Promise<void> {
    const streamName = `snapshot-${snapshot.aggregateType}-${snapshot.aggregateId}`
    const event = jsonEvent({
      type: 'Snapshot',
      data: snapshot,
    })

    await this.client.appendToStream(streamName, [event])
    // Trim the snapshot stream via maxCount: keep only the 3 most recent
    // snapshots, leaving older ones available for rollback
    await this.client.setStreamMetadata(streamName, {
      maxCount: 3,
    })
  }

  async loadSnapshot<T>(
    aggregateType: string,
    aggregateId: string,
    currentSchemaVersion: number
  ): Promise<Snapshot<T> | null> {
    const streamName = `snapshot-${aggregateType}-${aggregateId}`

    try {
      const events = this.client.readStream(streamName, {
        direction: BACKWARDS,
        fromRevision: END,
        maxCount: 1,
      })

      for await (const resolved of events) {
        const snapshot = resolved.event!.data as Snapshot<T>

        // Ignore snapshot if schema version differs (full event replay)
        if (snapshot.schemaVersion !== currentSchemaVersion) {
          console.warn(
            `Snapshot schema mismatch for ${aggregateId}: ` +
              `expected ${currentSchemaVersion}, got ${snapshot.schemaVersion}. ` +
              `Replaying all events.`
          )
          return null
        }

        return snapshot
      }
    } catch (error) {
      // Return null if snapshot stream doesn't exist
      return null
    }

    return null
  }

  shouldTakeSnapshot(currentVersion: number, lastSnapshotVersion: number): boolean {
    return currentVersion - lastSnapshotVersion >= this.snapshotInterval
  }
}

Pay attention to the snapshot's schemaVersion field. When the Aggregate structure changes, existing snapshots are no longer valid. On a schemaVersion mismatch, the system ignores the snapshot, replays all events, and then writes a fresh snapshot. This is a key advantage of event sourcing: events are immutable, so the state can always be rebuilt from them.

Note: Snapshots are a performance optimization technique, not a required component. The system must function correctly without snapshots. Implement automatic fallback to full event replay when snapshots are corrupted or invalidated.
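Putting the pieces together, snapshot-assisted loading reduces to: start from the snapshot state if one is valid, then replay only the events after the snapshot's version. A pure sketch of that flow, with illustrative types in place of real store reads:

```typescript
// Sketch of snapshot-assisted loading: fold events on top of snapshot state.
// Illustrative types; real code reads only the stream tail from the store.

interface Snap { version: number; state: { count: number } }
interface Evt { revision: number }

function loadState(snapshot: Snap | null, allEvents: Evt[]): { count: number; replayed: number } {
  const base = snapshot ? { ...snapshot.state } : { count: 0 }
  // Replay only events newer than the snapshot (full replay when no snapshot)
  const tail = allEvents.filter((e) => e.revision > (snapshot?.version ?? -1))
  for (const _ of tail) base.count += 1 // stand-in for aggregate.apply()
  return { count: base.count, replayed: tail.length }
}

const events: Evt[] = Array.from({ length: 500 }, (_, i) => ({ revision: i }))
const withSnap = loadState({ version: 399, state: { count: 400 } }, events)
const withoutSnap = loadState(null, events) // automatic fallback path
```

Both paths must converge to the same final state; the snapshot only changes how many events are replayed, never the result.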

Event Versioning

As production systems evolve, event schemas also change. Since stored events are never modified in event sourcing, a strategy for safely handling schema changes is needed.

Versioning Strategy Comparison

| Strategy | Description | Suitable Scenario | Risk |
| --- | --- | --- | --- |
| Weak Schema | Use default values for new fields, compatible with existing events | When only field additions are needed | Low |
| Upcasting | Transform during event deserialization via middleware | Field name changes, type changes | Medium |
| New Event Type | Define a completely new event type | When the event meaning itself changes | Low |
| Copy-Replace Stream | Clone the stream with the new schema and replace it | Large-scale schema migration | High |

Upcasting Implementation Example

The most practical strategy is Upcasting. It places middleware that converts previous versions to the current version during the event deserialization process.

// versioning/event-upcaster.ts

type Upcaster = (event: any) => any

// Register upcasters per version
const upcasters: Map<string, Map<number, Upcaster>> = new Map()

// OrderCreated v1 -> v2: Changed shippingAddress to structured object
upcasters.set(
  'OrderCreated',
  new Map([
    [
      1,
      (event: any) => {
        // When shippingAddress was a single string in v1
        const address =
          typeof event.data.shippingAddress === 'string'
            ? {
                street: event.data.shippingAddress,
                city: 'UNKNOWN',
                zipCode: 'UNKNOWN',
                country: 'KR',
              }
            : event.data.shippingAddress

        return {
          ...event,
          eventVersion: 2,
          data: {
            ...event.data,
            shippingAddress: address,
            // Apply default value for fields added in v2
            orderSource: event.data.orderSource ?? 'WEB',
          },
        }
      },
    ],
  ])
)

function upcastEvent(event: any): any {
  const eventUpcasters = upcasters.get(event.eventType)
  if (!eventUpcasters) return event

  let current = event
  const targetVersion = Math.max(...Array.from(eventUpcasters.keys())) + 1

  // Apply sequentially from current version to latest version
  for (let v = current.eventVersion; v < targetVersion; v++) {
    const upcaster = eventUpcasters.get(v)
    if (upcaster) {
      current = upcaster(current)
    }
  }

  return current
}

// Usage example
// const rawEvent = loadFromEventStore();
// const currentEvent = upcastEvent(rawEvent);
// aggregate.apply(currentEvent);

The most important principle in event versioning is to never modify existing events. Data stored in the event store is immutable. Instead, transform at read time (Upcasting) or define new event types.

Disaster Recovery Procedures

Disaster recovery in event sourcing systems differs from traditional systems. Since the event store is the Single Source of Truth, projections can be rebuilt at any time.

Recovering from Projection Corruption

This procedure is used when a projection contains incorrect data or when there was a bug in the projection logic.

  1. Stop the projection service: Stop the subscription for the affected projection
  2. DROP or TRUNCATE the read model table: Remove existing incorrect data
  3. Reset the checkpoint: Delete the position for the affected projection from the projection_checkpoints table
  4. Restart the projection service: Rebuild the read model by replaying all events from the beginning of the event store
  5. Verify rebuild completion: Confirm that the projection has caught up to the current position

Warning: If there are hundreds of millions of events, rebuilding may take several hours. Design a projection architecture capable of parallel processing in advance, or prepare to rebuild by partition.
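Parallel rebuilds usually shard by stream id so that each worker owns a disjoint set of aggregates and per-stream event ordering is preserved. A minimal hash-partitioning sketch (illustrative, not a library API):

```typescript
// Deterministic stream-to-partition assignment for parallel projection rebuilds.
// All events of one aggregate land in the same partition, preserving per-stream order.

function partitionFor(streamId: string, partitionCount: number): number {
  let hash = 0
  for (const ch of streamId) {
    hash = (hash * 31 + ch.charCodeAt(0)) | 0 // simple deterministic string hash
  }
  return Math.abs(hash) % partitionCount
}

const partitions = 4
const p1 = partitionFor('order-123', partitions)
const p2 = partitionFor('order-123', partitions)
const p3 = partitionFor('order-456', partitions)
```

Each rebuild worker then subscribes to the full stream but processes only events whose stream hashes into its partition, which multiplies rebuild throughput without reordering any single aggregate's history.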

Event Store Cluster Failure

EventStoreDB uses a leader-follower architecture. When the leader node goes down, one of the followers is automatically promoted to leader. Points to note during this process:

  • Write failure handling: Retry writes that fail during leader transitions. Idempotency is guaranteed by expectedRevision
  • Subscription reconnection: Persistent Subscriptions reconnect automatically, but Catch-up Subscriptions must be manually reconnected from the last checkpoint
  • Split-brain prevention: Operate a minimum 3-node cluster to enable quorum-based leader election

Common Failure Scenarios and Responses

| Failure Scenario | Cause | Response |
| --- | --- | --- |
| Projection data inconsistency | Projection logic bug | Rebuild the projection (full event replay) |
| Aggregate load failure | Event stream corruption | Snapshot fallback then partial replay; check cluster replicas |
| Concurrent write conflict | Simultaneous Aggregate modification | Catch WrongExpectedVersionError and retry |
| Subscription lag (Consumer Lag) | Insufficient projection processing speed | Partitioning or horizontal scaling of projection instances |
| Event store disk full | Event growth | Apply archiving policies; set stream maxAge/maxCount |
| Snapshot schema mismatch | Deployment after Aggregate structure change | Automatic fallback to full event replay |

Operational Checklist

Items that must be verified before production deployment.

Design Phase Checklist

  • Does the event schema include an eventVersion field?
  • Does the event metadata include correlationId and causationId?
  • Is the Aggregate's invariant validation logic complete?
  • Are error responses clear when commands fail?
  • Is CQRS truly needed for this domain (reconfirm it's not simple CRUD)?

Implementation Phase Checklist

  • Are projection handlers idempotent?
  • Is optimistic concurrency control implemented (expectedRevision)?
  • Are event serialization/deserialization tests written?
  • Does the upcaster correctly convert previous-version events?
  • Does loading fall back to full event replay when snapshot schema versions mismatch?

Operations Phase Checklist

  • Is the event store cluster configured with 3 or more nodes?
  • Is projection consumer lag monitoring set up?
  • Are event store disk usage alerts configured?
  • Is the projection rebuild procedure documented?
  • Are snapshot creation frequency and retention policies configured?
  • Is a Dead Letter Queue (DLQ) configured (for isolating unprocessable events)?
  • Are manual event correction scripts prepared for failure scenarios?

Common Mistakes and Anti-patterns

Here is a summary of frequently occurring mistakes in practice. Most causes of event sourcing project failures are found here.

Anti-pattern 1: Storing the Entire Current State in Events

// BAD: Putting the entire state in an event is no different from CRUD
interface OrderUpdated {
  eventType: 'OrderUpdated'
  data: {
    order: Order // Entire Order object
  }
}

// GOOD: Record only what changed as events
interface OrderItemAdded {
  eventType: 'OrderItemAdded'
  data: {
    orderId: string
    productId: string
    quantity: number
    unitPrice: number
  }
}

Anti-pattern 2: Calling External Services from Projections

If a projection handler calls external APIs, side effects occur during event replay. Projections should update the read model using only event data, like pure functions. Logic requiring external service calls should be separated into dedicated Policy handlers or Sagas.
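The distinction can be sketched as two separate consumers of the same event: the projection is a pure fold over events, while the policy owns the side effect and is disabled during replays. All names here are hypothetical:

```typescript
// Projection vs policy: same event, two consumers with different contracts.

interface PaymentConfirmed { orderId: string }

const readModel = new Map<string, string>() // projection target
const sentEmails: string[] = []             // stand-in for an external email API

// Projection: pure with respect to the outside world; safe to replay
function projectPayment(e: PaymentConfirmed): void {
  readModel.set(e.orderId, 'PAID')
}

// Policy: owns the side effect; skipped when replaying history
function paymentPolicy(e: PaymentConfirmed, isReplay: boolean): void {
  if (isReplay) return // never re-send emails during a rebuild
  sentEmails.push(`receipt for ${e.orderId}`)
}

const evt: PaymentConfirmed = { orderId: 'order-1' }
projectPayment(evt); paymentPolicy(evt, false) // live processing
projectPayment(evt); paymentPolicy(evt, true)  // replay during rebuild
```

The projection can be replayed any number of times without harm; the policy runs at most once per live event, which is exactly why the two must not be mixed in one handler.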

Anti-pattern 3: Applying Event Sourcing to Every Domain

Event sourcing should be applied to domains where state change history has business value. Applying event sourcing to simple CRUD domains like user settings or code tables only increases complexity with no benefit. The correct strategy is to selectively apply it only to the system's Core Domain.

Anti-pattern 4: Accumulating 100,000+ Events in a Stream

When a single Aggregate stream accumulates over 100,000 events, loading time exceeds several seconds. Snapshots can mitigate this, but the root cause is poorly designed Aggregate boundaries. Consider splitting the Aggregate into smaller units or reviewing the periodic event generation pattern.

Event Sourcing vs Traditional CRUD Decision Criteria

Finally, here is a comparison table for reference when deciding whether to adopt event sourcing.

| Criteria | Traditional CRUD | Event Sourcing |
| --- | --- | --- |
| Current state querying | Simple (direct read) | Complex (event replay or projection) |
| Change history tracking | Requires separate audit logs | Automatically captured (events are the history) |
| Schema changes | ALTER TABLE migration | Event versioning + Upcasting |
| Debugging | Only current state available | Full history replay for root cause analysis |
| Data consistency | ACID transactions | Aggregate-level consistency + eventual consistency |
| Storage space | Only current state stored | All events accumulated (higher storage cost) |
| Read performance | Sufficient with index optimization | Projection design required |
| Initial development speed | Fast | Slow (learning curve, infrastructure setup) |
| Complex domain adaptability | Limited as domain complexity grows | Extensible through event-centric modeling |
| Team skill requirements | General | DDD and event modeling experience required |

Event sourcing is an excellent choice for domains where change history itself has business value, such as financial transactions, logistics tracking, medical records, and collaboration tools. For other cases, it is more practical to maintain traditional CRUD and add audit logs only where needed.

Summary

Event sourcing and CQRS are powerful but complex patterns. To successfully adopt them, remember the following:

  • Invest the most time in event schema design. Events persist forever.
  • Projections must be implemented idempotently. Event replay must be safe for operations to be possible.
  • Snapshots are a performance optimization tool, not a required architectural component. Introduce them when needed.
  • Establish event versioning strategies early in the project. Adding them later is painful.
  • Do not apply to every domain. Apply selectively to core domains only.

References