GitOps in Practice: ArgoCD vs FluxCD Architecture Comparison and Production Deployment Strategies


Introduction

In modern cloud-native environments, methods for managing Kubernetes deployments broadly fall into push-based and pull-based approaches. Traditional CI/CD pipelines use the push approach, running kubectl apply or helm upgrade to push changes into the cluster after builds complete. This approach has fundamental limitations: it requires granting cluster access credentials to CI servers, and it is difficult to detect drift caused by manual changes (kubectl edit, console modifications).

GitOps is an operational paradigm that addresses these problems by treating the Git repository as the Single Source of Truth, where an agent inside the cluster continuously compares the desired state in Git with the actual state and automatically synchronizes (Reconciliation) them. The core principles defined by the OpenGitOps project are:

  1. Declarative: The desired state of the system is described declaratively
  2. Versioned and Immutable: All change history remains in Git, enabling audit trails
  3. Pulled Automatically: Agents automatically apply the desired state to the cluster
  4. Continuously Reconciled: Drift is automatically detected and corrected

This article provides an in-depth comparison of ArgoCD and FluxCD architectures, the two primary tools for implementing GitOps, along with production deployment strategies, secret management, CI/CD integration, failure cases and recovery, and operational checklists.

ArgoCD Architecture Deep Dive

Core Components

ArgoCD is a CNCF Graduated project and a Kubernetes-native GitOps continuous deployment tool. It provides a web UI, CLI, and gRPC/REST API by default.

# ArgoCD Application resource example
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/k8s-manifests.git
    targetRevision: main
    path: apps/my-app/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
      allowEmpty: false
    syncOptions:
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
      - PruneLast=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m

The major components of ArgoCD are:

  • API Server: Provides gRPC/REST APIs used by the web UI, CLI, and CI/CD systems
  • Repo Server: Clones Git repositories and generates manifests. Supports Helm, Kustomize, Jsonnet, and more
  • Application Controller: Watches Application resources and runs reconciliation loops
  • Redis: Caches generated manifests and application state
  • Dex: Handles SSO (Single Sign-On) authentication
  • Notifications Controller: Sends sync status notifications via Slack, email, etc.

App of Apps Pattern

In large-scale environments, the App of Apps pattern manages multiple Applications through a single parent Application.

# Parent Application (root-app)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/k8s-manifests.git
    targetRevision: main
    path: apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

By placing Application YAML files for each service inside the apps directory, root-app automatically creates and manages them.
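For example, a minimal child Application for an illustrative payments service (the repository URL and paths follow the examples above; the service name is an assumption) could be placed at apps/payments.yaml, so that onboarding a new service becomes a single-file PR:

```yaml
# apps/payments.yaml -- discovered and managed by root-app
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/k8s-manifests.git
    targetRevision: main
    path: apps/payments/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```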

ApplicationSet Controller

ApplicationSet dynamically creates multiple Applications from a single template. It is extremely useful in multi-cluster, multi-tenant, and monorepo environments.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-apps
  namespace: argocd
spec:
  generators:
    - clusters:
        selector:
          matchLabels:
            environment: production
  template:
    metadata:
      name: 'app-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/org/k8s-manifests.git
        targetRevision: main
        path: 'apps/{{metadata.labels.region}}/production'
      destination:
        server: '{{server}}'
        namespace: production
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

This ApplicationSet automatically creates Applications for all clusters with the environment: production label.
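For monorepo environments, a Git directory generator serves a similar purpose: one Application per directory. A sketch, assuming one directory per service under apps/:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: monorepo-apps
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/org/k8s-manifests.git
        revision: main
        directories:
          - path: apps/*
  template:
    metadata:
      # path.basename resolves to the directory name, e.g. "payments"
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/org/k8s-manifests.git
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
```

Adding or deleting a directory in Git then creates or removes the corresponding Application automatically.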

Sync Waves and Hooks

ArgoCD's Sync Waves provide precise control over resource deployment order.

# Wave 0: Create Namespace and RBAC first
apiVersion: v1
kind: Namespace
metadata:
  name: production
  annotations:
    argocd.argoproj.io/sync-wave: '0'

---
# Wave 1: ConfigMap and Secret
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
  annotations:
    argocd.argoproj.io/sync-wave: '1'
data:
  DATABASE_HOST: 'postgres.production.svc'
  LOG_LEVEL: 'info'

---
# Wave 2: Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
  namespace: production
  annotations:
    argocd.argoproj.io/sync-wave: '2'
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      containers:
        - name: api-server
          image: myregistry/api-server:v2.1.0
          ports:
            - containerPort: 8080

---
# PreSync Hook: DB Migration
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
  namespace: production
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: myregistry/db-migrate:v2.1.0
          command: ['./migrate', 'up']
      restartPolicy: Never

FluxCD Architecture Deep Dive

Core Components

FluxCD is a CNCF Graduated project and a Kubernetes-native GitOps toolkit. It consists of several independent controllers.

  • Source Controller: Manages sources including Git repositories, Helm repositories, OCI artifacts, and S3 buckets
  • Kustomize Controller: Watches Kustomization resources and applies manifests to the cluster
  • Helm Controller: Watches HelmRelease resources and manages Helm charts
  • Notification Controller: Handles event notifications and external webhook reception
  • Image Reflector/Automation Controllers: Detect new images in container registries and automatically update Git

GitRepository and Kustomization

FluxCD's basic workflow defines sources via GitRepository and describes how to apply them via Kustomization.

# Source definition
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: k8s-manifests
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/org/k8s-manifests.git
  ref:
    branch: main
  secretRef:
    name: git-credentials
  ignore: |
    # Exclude unnecessary files
    docs/
    README.md

---
# Application definition
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: production-apps
  namespace: flux-system
spec:
  interval: 5m
  retryInterval: 2m
  timeout: 3m
  sourceRef:
    kind: GitRepository
    name: k8s-manifests
  path: ./apps/production
  prune: true
  force: false
  targetNamespace: production
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: api-server
      namespace: production
  postBuild:
    substituteFrom:
      - kind: ConfigMap
        name: cluster-vars

HelmRelease Controller

FluxCD's Helm Controller declaratively manages the lifecycle of Helm charts through the HelmRelease CRD.

apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: bitnami
  namespace: flux-system
spec:
  interval: 30m
  url: https://charts.bitnami.com/bitnami

---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: postgresql
  namespace: production
spec:
  interval: 10m
  chart:
    spec:
      chart: postgresql
      version: '15.x'
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: flux-system
  values:
    primary:
      persistence:
        size: 50Gi
      resources:
        requests:
          memory: 512Mi
          cpu: 250m
    metrics:
      enabled: true
  upgrade:
    remediation:
      retries: 3
      remediateLastFailure: true
  rollback:
    cleanupOnFail: true
    timeout: 5m

FluxCD Dependency Management

Dependencies between Kustomization resources can be declared to control deployment order.

apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: infrastructure
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: k8s-manifests
  path: ./infrastructure
  prune: true

---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: applications
  namespace: flux-system
spec:
  interval: 10m
  dependsOn:
    - name: infrastructure
  sourceRef:
    kind: GitRepository
    name: k8s-manifests
  path: ./applications
  prune: true

The applications Kustomization is only applied after the infrastructure Kustomization has been successfully applied.

ArgoCD vs FluxCD Comparison

| Feature | ArgoCD | FluxCD |
| --- | --- | --- |
| CNCF Status | Graduated | Graduated |
| Web UI | Yes (built-in, feature-rich) | No (separate Weave GitOps UI available) |
| CLI | argocd CLI | flux CLI |
| Architecture | Monolithic (multiple components deployed as one unit) | Microservices (independent controllers) |
| CRDs | Application, ApplicationSet, AppProject | GitRepository, Kustomization, HelmRelease, etc. |
| Multi-cluster | Hub-spoke model | Flux installed per cluster |
| Helm Support | Yes (direct source reference) | Yes (HelmRelease CRD) |
| Kustomize | Yes | Yes (native) |
| RBAC | Custom RBAC + SSO integration | Kubernetes-native RBAC |
| Notifications | Notifications Controller | Notification Controller |
| Image Auto-update | Image Updater (separate install) | Image Automation Controller |
| Drift Detection | Yes (real-time UI display) | Yes (event-based) |
| Sync Order | Sync Waves + Hooks | dependsOn dependencies |
| SSO Support | Yes (Dex built-in) | No (delegates to Kubernetes RBAC) |
| Multi-tenancy | AppProject-based | Namespace-based |
| Git Support | GitHub, GitLab, Bitbucket, etc. | GitHub, GitLab, Bitbucket, S3, OCI, etc. |
| Learning Curve | Medium (UI helps) | Higher (CLI-centric) |
| Resource Usage | Relatively higher | Relatively lower |

Deployment Strategies

Blue-Green Deployment (Argo Rollouts)

Using Argo Rollouts with ArgoCD enables declarative Blue-Green deployments.

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: api-server
  namespace: production
spec:
  replicas: 5
  strategy:
    blueGreen:
      activeService: api-server-active
      previewService: api-server-preview
      autoPromotionEnabled: false
      prePromotionAnalysis:
        templates:
          - templateName: success-rate
        args:
          - name: service-name
            value: api-server-preview
      scaleDownDelaySeconds: 30
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      containers:
        - name: api-server
          image: myregistry/api-server:v2.2.0
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5

---
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
  namespace: production
spec:
  args:
    - name: service-name
  metrics:
    - name: success-rate
      interval: 30s
      count: 5
      successCondition: result[0] >= 0.95
      provider:
        prometheus:
          address: http://prometheus.monitoring:9090
          query: |
            sum(rate(http_requests_total{service="{{args.service-name}}",status=~"2.."}[5m])) /
            sum(rate(http_requests_total{service="{{args.service-name}}"}[5m]))

Canary Deployment (Flagger + FluxCD)

In FluxCD environments, Flagger automates canary deployments.

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: api-server
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server
  service:
    port: 8080
    targetPort: 8080
  analysis:
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 10
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
      - name: request-duration
        thresholdRange:
          max: 500
        interval: 1m
    webhooks:
      - name: load-test
        type: rollout
        url: http://flagger-loadtester.production/
        metadata:
          cmd: 'hey -z 1m -q 10 -c 2 http://api-server-canary.production:8080/'

Secret Management

SOPS (Secrets OPerationS)

SOPS is a tool that encrypts secrets at the file level for safe storage in Git. FluxCD natively supports SOPS.

# .sops.yaml (placed at repository root)
creation_rules:
  - path_regex: .*\.enc\.yaml$
    encrypted_regex: '^(data|stringData)$'
    age: age1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Encrypt secrets in place before committing
sops --encrypt --in-place secrets/production/db-credentials.enc.yaml

# Enable SOPS decryption in FluxCD Kustomization
flux create kustomization production-secrets \
  --source=GitRepository/k8s-manifests \
  --path="./secrets/production" \
  --prune=true \
  --interval=10m \
  --decryption-provider=sops \
  --decryption-secret=sops-age

Sealed Secrets

Sealed Secrets uses asymmetric encryption to safely store secrets in Git.

# Create SealedSecret
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password=supersecret \
  --dry-run=client -o yaml | \
  kubeseal --controller-name=sealed-secrets \
  --controller-namespace=kube-system \
  --format=yaml > sealed-db-credentials.yaml

# Resulting SealedSecret (safe to commit to Git)
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: production
spec:
  encryptedData:
    username: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq...
    password: AgBy3i4OJSWK+PiTySYZZA9rO43cGDEq...
  template:
    type: Opaque
    metadata:
      name: db-credentials
      namespace: production

Vault Integration (ArgoCD Vault Plugin)

Integrate HashiCorp Vault with ArgoCD for dynamic secret injection.

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: production
  annotations:
    avp.kubernetes.io/path: 'secret/data/production/database'
type: Opaque
stringData:
  username: <username>
  password: <password>

In the manifest above, <username> and <password> are placeholders that the ArgoCD Vault Plugin replaces with actual secret values retrieved from Vault.

CI/CD Integration Patterns

GitHub Actions + ArgoCD

# .github/workflows/deploy.yml
name: Build and Deploy
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and Push Image
        run: |
          docker build -t myregistry/api-server:$GITHUB_SHA .
          docker push myregistry/api-server:$GITHUB_SHA

      - name: Update Kubernetes Manifests
        run: |
          git clone https://github.com/org/k8s-manifests.git
          cd k8s-manifests
          kustomize edit set image myregistry/api-server=myregistry/api-server:$GITHUB_SHA
          git add .
          git commit -m "chore: update api-server image to $GITHUB_SHA"
          git push

In this pattern, CI (GitHub Actions) is only responsible for building images and updating the manifest repository; the actual deployment is performed automatically by ArgoCD when it detects the Git change.
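On the FluxCD side, the Image Reflector/Automation controllers mentioned earlier can replace the manifest-update step entirely by committing new image tags to Git. A sketch, where the registry path, semver range, and commit author are illustrative:

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: api-server
  namespace: flux-system
spec:
  image: myregistry/api-server
  interval: 5m
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: api-server
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: api-server
  policy:
    semver:
      range: '>=2.0.0'
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageUpdateAutomation
metadata:
  name: api-server
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: k8s-manifests
  git:
    commit:
      author:
        name: fluxcdbot
        email: fluxcdbot@example.com
      messageTemplate: 'chore: update images'
    push:
      branch: main
  update:
    path: ./apps
    strategy: Setters
```

The Setters strategy rewrites image fields annotated with a marker comment in the manifests, e.g. `# {"$imagepolicy": "flux-system:api-server"}`.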

Failure Cases and Recovery Procedures

Case 1: Sync Loop (Infinite Synchronization Loop)

Situation: An ArgoCD Application transitioned to OutOfSync immediately after Sync completion, entering an infinite sync loop. The root cause was an Admission Webhook injecting default values into resources, creating differences from the Git manifests.

Symptoms:

  • Application status alternating between Synced and OutOfSync
  • ArgoCD API Server CPU usage spike
  • Repo Server memory usage increase

Recovery Procedure:

# 1. Pause auto-sync
argocd app set my-app --sync-policy none

# 2. Analyze drift cause
argocd app diff my-app --local ./manifests/

# 3. Add ignoreDifferences to the Application spec (in Git) so that
#    webhook-injected fields are excluded from the diff:
#    spec:
#      ignoreDifferences:
#        - group: apps
#          kind: Deployment
#          jsonPointers:
#            - /spec/template/metadata/annotations

# 4. Explicitly add webhook-injected defaults to manifests

# 5. Re-enable auto-sync
argocd app set my-app --sync-policy automated

Case 2: Drift Detection Failure

Situation: FluxCD Kustomization reported a success state, but manually changed resources in the actual cluster were not restored. The force option was disabled, and manual changes occurred in fields not managed by FluxCD.

Symptoms:

  • Kustomization status is Ready
  • Actual resource configuration differs from Git
  • Differences visible via kubectl diff

Recovery Procedure:

# 1. Trigger forced FluxCD reconciliation
flux reconcile kustomization production-apps --with-source

# 2. Enable the force option (recreates resources that cannot be patched in place)
flux create kustomization production-apps \
  --source=GitRepository/k8s-manifests \
  --path="./apps/production" \
  --prune=true \
  --force=true \
  --interval=5m

# 3. Add OPA policies to prevent manual changes
# (restrict kubectl edit, kubectl patch, etc.)
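The policy step above can also be implemented with Kyverno instead of raw OPA. A rough sketch of a ClusterPolicy that denies direct changes to workloads in production from anyone except the Flux controllers; the service account names match a default Flux installation, but the matched kinds and namespaces are assumptions to adapt:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-manual-changes
spec:
  validationFailureAction: Enforce
  background: false
  rules:
    - name: flux-only-updates
      match:
        any:
          - resources:
              kinds: [Deployment, StatefulSet]
              namespaces: [production]
              operations: [UPDATE, DELETE]
      exclude:
        any:
          - subjects:
              - kind: ServiceAccount
                name: kustomize-controller
                namespace: flux-system
              - kind: ServiceAccount
                name: helm-controller
                namespace: flux-system
      validate:
        message: 'Manual changes are blocked; submit a PR to the GitOps repository.'
        # An empty deny blocks every request matched by this rule
        deny: {}
```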

Case 3: Helm Release Rollback Failure

Situation: A FluxCD HelmRelease upgrade failed but auto-rollback did not trigger. The remediateLastFailure option was set to false in the remediation configuration.

Symptoms:

  • HelmRelease status is Failed
  • Previous release version is also corrupted
  • Failed releases accumulating in Helm history

Recovery Procedure:

# 1. Check HelmRelease status
flux get helmrelease -n production

# 2. Check Helm release history
helm history postgresql -n production

# 3. Manual rollback
helm rollback postgresql 3 -n production

# 4. Add auto-remediation to HelmRelease configuration
flux create helmrelease postgresql \
  --source=HelmRepository/bitnami \
  --chart=postgresql \
  --chart-version="15.x" \
  --target-namespace=production \
  --values=values/postgresql.yaml \
  --remediation-retries=3

Production Deployment Checklist

Pre-Deployment

  • Design Git repository structure (monorepo vs polyrepo decision)
  • Establish branch strategy (main -> staging -> production)
  • Design RBAC policies (which teams manage which Applications/Kustomizations)
  • Choose secret management method (SOPS, Sealed Secrets, or Vault)

ArgoCD Configuration

  • Deploy in HA (High Availability) mode (minimum 3 replicas)
  • Configure Redis Sentinel or Redis Cluster
  • Set up SSO via Dex or OIDC
  • Restrict source repositories and target clusters per AppProject
  • Clarify management scope with Resource Exclusions/Inclusions
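A restrictive AppProject covering the repository/cluster items above might look like this (the team name and namespace pattern are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-payments
  namespace: argocd
spec:
  description: Payments team applications
  # Only this manifest repository may be used as a source
  sourceRepos:
    - https://github.com/org/k8s-manifests.git
  # Only team-owned namespaces on the in-cluster destination
  destinations:
    - server: https://kubernetes.default.svc
      namespace: 'payments-*'
  # No cluster-scoped resources allowed for this team
  clusterResourceWhitelist: []
  namespaceResourceBlacklist:
    - group: ''
      kind: ResourceQuota
```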

FluxCD Configuration

  • Install independent FluxCD on each cluster
  • Optimize Source Controller Git polling interval (default 1 minute)
  • Specify health check target resources in Kustomizations
  • Configure Slack/Teams notifications via Notification Controller
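The Slack notification item can be configured roughly as follows; the API version and secret name may vary with your Flux release, and the secret is assumed to hold the webhook address:

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Provider
metadata:
  name: slack
  namespace: flux-system
spec:
  type: slack
  channel: deployments
  secretRef:
    name: slack-webhook-url
---
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: production-alerts
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: error
  eventSources:
    - kind: Kustomization
      name: '*'
    - kind: HelmRelease
      name: '*'
```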

Operational Notes

  • Minimize write access to Git repositories (PR-based changes)
  • Require PR review before production deployments
  • Tune sync intervals and timeouts to match workload characteristics
  • Ensure ordering with Sync Waves or dependsOn for large manifest changes
  • Pin Helm chart versions instead of using version ranges
  • Document Git revert processes for rollbacks

Monitoring

  • ArgoCD: Collect Application sync status metrics (argocd_app_info)
  • FluxCD: Collect Kustomization/HelmRelease status metrics
  • Configure automatic alerts on sync failures (PagerDuty, OpsGenie integration)
  • Generate periodic drift audit reports
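For the ArgoCD metric above, a sketch of an alerting rule assuming the Prometheus Operator's PrometheusRule CRD is installed (namespace and thresholds are illustrative):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: argocd-alerts
  namespace: monitoring
spec:
  groups:
    - name: argocd
      rules:
        - alert: ArgoCDAppOutOfSync
          # argocd_app_info carries a sync_status label per Application
          expr: argocd_app_info{sync_status!="Synced"} == 1
          for: 15m
          labels:
            severity: warning
          annotations:
            summary: 'Application {{ $labels.name }} has been OutOfSync for 15 minutes'
```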

Conclusion

GitOps has become the standard for Kubernetes deployment management. Both ArgoCD and FluxCD, as CNCF Graduated projects, have mature ecosystems, and either tool can faithfully implement the core principles of GitOps.

ArgoCD excels with its rich web UI, multi-cluster management through ApplicationSet, and integration with Argo Rollouts. FluxCD excels with its Kubernetes-native microservice architecture, lightweight resource usage, and native SOPS support.

What matters more than the choice of tool is the establishment of a GitOps culture. Building processes where all changes go through Git, constructing guardrails to prevent manual changes, and establishing automation systems that continuously detect and correct drift are what truly complete GitOps.
