Authors
- Youngju Kim (@fjvbn20031)
- 1. ArgoCD Architecture Overview
- 2. API Server Internals
- 3. Repository Server Internals
- 4. Application Controller Internals
- 5. Sync Operation Detailed Analysis
- 6. Redis Internals
- 7. Dex Internals
- 8. AppProject and Access Control
- 9. gRPC Communication Architecture
- 10. Summary
1. ArgoCD Architecture Overview
ArgoCD is a declarative continuous delivery tool for implementing GitOps on Kubernetes. As a CNCF Graduated project, it continuously synchronizes the desired state defined in Git repositories with the actual state of clusters using a pull-based deployment model.
Core Components
ArgoCD consists of five core components:
- argocd-server (API Server): Provides gRPC/REST API for Web UI, CLI, and CI/CD integrations
- argocd-repo-server (Repository Server): Handles Git cloning and manifest generation
- argocd-application-controller (Application Controller): Monitors application state and performs synchronization
- Redis: Stores manifest caches and session data
- Dex: Provides SSO (Single Sign-On) authentication
Inter-Component Communication
All internal components communicate via gRPC:
```
User (Web UI / CLI / CI)
        |
        v
  API Server (gRPC/REST)
        |
        +-------> Repository Server (gRPC)
        |
        +-------> Application Controller
        |
        +-------> Redis (TCP)
        |
        +-------> Dex (OIDC)
```
2. API Server Internals
Role and Responsibilities
The API Server serves as the front-end gateway for ArgoCD. It receives external requests and delegates them to internal components.
Dual gRPC and REST Interface
The API Server uses gRPC as its primary protocol and auto-generates REST APIs via grpc-gateway:
```go
// API Server gRPC service registration (simplified)
func (s *ArgoCDServer) Run(ctx context.Context, listeners ...net.Listener) {
    grpcServer := grpc.NewServer(grpcOpts...)
    application.RegisterApplicationServiceServer(grpcServer, s.appService)
    repository.RegisterRepositoryServiceServer(grpcServer, s.repoService)
    project.RegisterProjectServiceServer(grpcServer, s.projectService)
    session.RegisterSessionServiceServer(grpcServer, s.sessionService)
}
```
Key gRPC Services
| Service | Description |
|---|---|
| ApplicationService | Application CRUD, Sync, Rollback |
| RepositoryService | Git repository registration/management |
| ClusterService | Target cluster registration/management |
| ProjectService | AppProject CRUD |
| SessionService | Login/logout, token management |
| AccountService | Local user account management |
Authentication Flow
1. User sends login request
2. API Server delegates OIDC auth to Dex (for SSO)
3. Dex authenticates against external IdP (GitHub, LDAP, etc.)
4. On success, JWT token is issued
5. Subsequent requests include JWT in Bearer header
6. API Server validates JWT and performs RBAC authorization
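Step 5 of the flow can be sketched in Go. This is an illustrative client-side snippet, not ArgoCD code; the URL and token values are placeholders:

```go
package main

import (
	"fmt"
	"net/http"
)

// buildAuthenticatedRequest attaches an ArgoCD-issued JWT as a Bearer
// token, as in step 5 above. The URL and token are placeholders.
func buildAuthenticatedRequest(url, jwt string) (*http.Request, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+jwt)
	return req, nil
}

func main() {
	req, err := buildAuthenticatedRequest("https://argocd.example.com/api/v1/applications", "header.payload.signature")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("Authorization"))
}
```

The API Server then strips the `Bearer ` prefix, verifies the JWT signature and expiry, and passes the resolved subject and groups to the RBAC layer.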
RBAC Model
ArgoCD's RBAC is implemented using the Casbin library:
```
# Default policy format
p, SUBJECT, RESOURCE, ACTION, OBJECT, EFFECT

# Example: grant proj-admin all application permissions for my-project
p, role:proj-admin, applications, *, my-project/*, allow

# Group mapping
g, my-github-team, role:proj-admin
```
Policies are stored in the argocd-rbac-cm ConfigMap. Key resources and actions:
| Resource | Actions |
|---|---|
| applications | get, create, update, delete, sync, override, action |
| repositories | get, create, update, delete |
| clusters | get, create, update, delete |
| projects | get, create, update, delete |
| logs | get |
| exec | create |
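To make the policy semantics concrete, here is a toy re-implementation of the matching logic in Go. The `rule` struct and glob matcher are simplifications for illustration; the real Casbin enforcer also resolves `g` group mappings, built-in policies, and deny rules:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// rule models one "p, subject, resource, action, object, effect" line.
type rule struct{ sub, res, act, obj, eff string }

// parse splits a policy line on ", " into its fields (toy parser).
func parse(line string) rule {
	f := strings.Split(line, ", ")
	return rule{f[1], f[2], f[3], f[4], f[5]}
}

// enforce checks a request tuple against the rules: exact subject and
// resource match, "*" wildcard on action, and glob match on object.
func enforce(rules []rule, sub, res, act, obj string) bool {
	for _, r := range rules {
		if r.sub != sub || r.res != res {
			continue
		}
		if r.act != "*" && r.act != act {
			continue
		}
		if ok, _ := path.Match(r.obj, obj); ok && r.eff == "allow" {
			return true
		}
	}
	return false
}

func main() {
	rules := []rule{parse("p, role:proj-admin, applications, *, my-project/*, allow")}
	// prints true: object "my-project/guestbook" matches glob "my-project/*"
	fmt.Println(enforce(rules, "role:proj-admin", "applications", "sync", "my-project/guestbook"))
}
```

Note that `path.Match` treats `/` as a separator, so `my-project/*` matches `my-project/guestbook` but not `my-project/a/b`, which mirrors how project-scoped objects are addressed.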
3. Repository Server Internals
Role and Responsibilities
The Repository Server fetches source from Git repositories and generates final Kubernetes manifests.
Git Clone Mechanism
1. Application Controller requests manifest generation from repo-server
2. repo-server clones Git repository (shallow clone by default)
3. Checks out specified revision (branch, tag, commit)
4. Detects manifests in the path and renders with appropriate tool
5. Returns result to Application Controller and caches in Redis
Manifest Generation Pipeline
The Repository Server supports various tools:
```go
// Tool detection logic (simplified)
func detectTool(path string) string {
    if fileExists(path, "Chart.yaml") {
        return "helm"
    }
    if fileExists(path, "kustomization.yaml") ||
        fileExists(path, "kustomization.yml") ||
        fileExists(path, "Kustomization") {
        return "kustomize"
    }
    return "directory" // plain YAML
}
```
Rendering by tool type:
| Tool | Detection Condition | Rendering Method |
|---|---|---|
| Helm | Chart.yaml exists | Run helm template |
| Kustomize | kustomization.yaml exists | Run kustomize build |
| Directory | Default | Direct YAML file loading |
| Plugin | plugin.yaml exists | Delegate to CMP sidecar |
Helm Rendering Process
1. Parse Chart.yaml for chart metadata
2. Run helm dependency build if dependencies exist
3. Load values files (default values.yaml + user overrides)
4. Run helm template to generate final manifests
5. Parse generated YAML and validate
Kustomize Rendering Process
1. Parse kustomization.yaml
2. Resolve bases/resources references
3. Apply patches and overlays
4. Apply namePrefix, nameSuffix, labels, annotations transforms
5. Output final YAML
Config Management Plugin (CMP)
CMP is implemented using the sidecar pattern:
```yaml
# CMP configuration example - plugin.yaml
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: custom-jsonnet
spec:
  version: v1.0
  generate:
    command: [jsonnet]
    args: [main.jsonnet]
  discover:
    fileName: main.jsonnet
```
CMP sidecars are added to the repo-server Pod and communicate via Unix sockets.
Redis Caching Strategy
- Cache key: HASH(git-url + revision + path + tool + params)
- Cache TTL: Default 24 hours (ARGOCD_REPO_SERVER_CACHE_EXPIRATION)
- Cache invalidation: On Git revision change
Caching prevents repetitive manifest generation for the same revision, significantly reducing Repository Server CPU/memory usage.
4. Application Controller Internals
Role and Responsibilities
The Application Controller is the core engine of ArgoCD, monitoring Kubernetes resources and performing synchronization.
Informer-Based Watch Mechanism
The Application Controller uses Kubernetes Informer patterns to efficiently track resource state:
```go
// Informer-based resource watching (simplified)
func (c *ApplicationController) watchResources() {
    factory := informers.NewSharedInformerFactory(clientset, resyncPeriod)
    factory.Apps().V1().Deployments().Informer().AddEventHandler(
        cache.ResourceEventHandlerFuncs{
            AddFunc:    c.onResourceAdd,
            UpdateFunc: c.onResourceUpdate,
            DeleteFunc: c.onResourceDelete,
        },
    )
}
```
Reconciliation Loop
The core of the Application Controller is its Reconciliation Loop:
```
Loop:
  1. Query all Application resources
  2. For each Application:
     a. Request desired state (manifests) from Repository Server
     b. Query actual state from target cluster
     c. Compare desired vs actual state (Diff)
     d. Update Sync Status (Synced / OutOfSync)
     e. Update Health Status (Healthy / Degraded / Progressing, etc.)
     f. Execute auto-sync if enabled
  3. Wait until next cycle (default 180 seconds)
```
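The comparison at the heart of steps 2a–2d can be reduced to a toy sketch. Real ArgoCD diffs full Kubernetes manifests; here plain strings stand in for the desired and live states, and the `app`/`status` types are illustrative:

```go
package main

import "fmt"

// app pairs a desired state (from Git) with a live state (from the cluster).
type app struct {
	name    string
	desired string
	live    string
}

type status struct{ sync string }

// reconcile mirrors steps 2a-2d: compare desired vs live state and
// derive the sync status from the result of the diff.
func reconcile(a app) status {
	if a.desired == a.live {
		return status{sync: "Synced"}
	}
	return status{sync: "OutOfSync"}
}

func main() {
	apps := []app{
		{"guestbook", "v2", "v2"},
		{"billing", "v5", "v4"},
	}
	for _, a := range apps {
		fmt.Println(a.name, reconcile(a).sync)
	}
	// prints:
	// guestbook Synced
	// billing OutOfSync
}
```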
Diff Engine
ArgoCD's diff engine is based on Kubernetes 3-way merge:
Comparison targets:
- Last Applied Configuration (stored in annotation)
- Live State (actual cluster state)
- Desired State (manifests from Git)
Diff process:
1. Normalize Live State and Desired State
2. Remove defaults (fields auto-added by Kubernetes)
3. Exclude ignored fields (resourceVersion, generation, etc.)
4. Perform structural comparison of remaining fields
Normalization
Kubernetes auto-adds various defaults when storing resources. ArgoCD normalizes these to prevent unnecessary diffs:
```go
// Normalization example
func normalizeResource(resource *unstructured.Unstructured) {
    // Remove server-managed fields from metadata
    removeFields(resource, "metadata.managedFields")
    removeFields(resource, "metadata.resourceVersion")
    removeFields(resource, "metadata.uid")
    removeFields(resource, "metadata.generation")
    removeFields(resource, "metadata.creationTimestamp")
    // Remove status field (not declarative state)
    removeFields(resource, "status")
}
```
Health Assessment
The Application Controller evaluates the health status of each resource:
| Status | Description |
|---|---|
| Healthy | Resource is operating normally |
| Progressing | Resource has not reached desired state yet (rollout in progress) |
| Degraded | Resource is in an unhealthy state (CrashLoopBackOff, etc.) |
| Suspended | Resource is paused |
| Missing | Resource does not exist in the cluster |
| Unknown | Status cannot be determined |
Custom Health Check (Lua)
ArgoCD supports custom Health Checks via Lua scripts:
```lua
-- Custom health check defined in argocd-cm ConfigMap
hs = {}
if obj.status ~= nil then
  if obj.status.conditions ~= nil then
    for _, condition in ipairs(obj.status.conditions) do
      if condition.type == "Ready" and condition.status == "True" then
        hs.status = "Healthy"
        hs.message = condition.message
        return hs
      end
    end
  end
end
hs.status = "Progressing"
hs.message = "Waiting for resource to become ready"
return hs
```
5. Sync Operation Detailed Analysis
Sync Execution Flow
1. User or Auto-Sync triggers Sync
2. Application Controller requests latest manifests from Repository Server
3. Start sync operation after manifest generation completes
4. Execute PreSync hooks (if any)
5. Synchronize main resources (equivalent to kubectl apply)
6. Execute PostSync hooks (if any)
7. Verify resource state via Health Check
8. Update Sync Status
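Steps 4–6 depend on grouping resources into hook phases. The sketch below shows the grouping idea only; `syncResource` and `splitPhases` are illustrative names, and real ArgoCD also supports SyncFail hooks and per-phase ordering:

```go
package main

import "fmt"

// syncResource carries the value of the argocd.argoproj.io/hook
// annotation ("" when the resource is not a hook).
type syncResource struct {
	name string
	hook string
}

// splitPhases groups resources into the execution order described
// above: PreSync hooks run first, then the main apply, then PostSync.
func splitPhases(resources []syncResource) map[string][]string {
	phases := map[string][]string{}
	for _, r := range resources {
		phase := "Sync"
		if r.hook == "PreSync" || r.hook == "PostSync" {
			phase = r.hook
		}
		phases[phase] = append(phases[phase], r.name)
	}
	return phases
}

func main() {
	phases := splitPhases([]syncResource{
		{"db-migrate", "PreSync"},
		{"web-deploy", ""},
		{"smoke-test", "PostSync"},
	})
	fmt.Println(phases["PreSync"], phases["Sync"], phases["PostSync"])
	// prints [db-migrate] [web-deploy] [smoke-test]
}
```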
Resource Ordering
ArgoCD controls resource application order through Sync Waves and Phases:
```yaml
# Sync Wave example
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: '-1' # Lower numbers execute first
```
Default application order:
- Namespace
- NetworkPolicy
- ResourceQuota
- LimitRange
- ServiceAccount
- Role, ClusterRole
- RoleBinding, ClusterRoleBinding
- ConfigMap, Secret
- Service
- Deployment, StatefulSet, DaemonSet
- Ingress
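Wave ordering amounts to a stable sort on the annotation value. This is a minimal sketch of that idea; `obj`, `wave`, and `orderByWave` are illustrative names, and real ArgoCD additionally orders resources within a wave by the kind priority listed above:

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
)

type obj struct {
	name string
	anns map[string]string
}

// wave reads argocd.argoproj.io/sync-wave, defaulting to 0 when the
// annotation is absent or unparsable, as ArgoCD does.
func wave(o obj) int {
	if v, ok := o.anns["argocd.argoproj.io/sync-wave"]; ok {
		if n, err := strconv.Atoi(v); err == nil {
			return n
		}
	}
	return 0
}

// orderByWave applies lower-wave-first ordering with a stable sort so
// resources in the same wave keep their original relative order.
func orderByWave(objs []obj) []obj {
	sort.SliceStable(objs, func(i, j int) bool { return wave(objs[i]) < wave(objs[j]) })
	return objs
}

func main() {
	objs := orderByWave([]obj{
		{"web-deployment", map[string]string{}},
		{"app-namespace", map[string]string{"argocd.argoproj.io/sync-wave": "-1"}},
		{"post-job", map[string]string{"argocd.argoproj.io/sync-wave": "1"}},
	})
	for _, o := range objs {
		fmt.Println(o.name)
	}
	// prints app-namespace, web-deployment, post-job (waves -1, 0, 1)
}
```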
Pruning
Prune deletes resources from the cluster that have been removed from Git:
Prune target identification:
1. Query list of resources in cluster
2. Filter resources with ArgoCD tracking label/annotation
3. Identify resources not present in current Git manifests
4. Delete according to cascade or foreground deletion policy
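At its core, step 3 is a set difference between tracked and desired resources. A minimal sketch, with resources reduced to `Kind/name` strings and `pruneTargets` as a hypothetical helper:

```go
package main

import "fmt"

// pruneTargets mirrors steps 1-3 above: anything tracked by ArgoCD in
// the cluster but absent from the current Git manifests is a candidate
// for deletion.
func pruneTargets(tracked, desired []string) []string {
	want := map[string]bool{}
	for _, d := range desired {
		want[d] = true
	}
	var prune []string
	for _, t := range tracked {
		if !want[t] {
			prune = append(prune, t)
		}
	}
	return prune
}

func main() {
	fmt.Println(pruneTargets(
		[]string{"Deployment/web", "Service/web", "ConfigMap/old"},
		[]string{"Deployment/web", "Service/web"},
	))
	// prints [ConfigMap/old]
}
```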
6. Redis Internals
Role
Redis serves two primary functions in ArgoCD:
Cache Store:
- Manifest caches from Repository Server
- Cluster state cache
- Application state information cache
Session Store:
- User login sessions
- JWT token-related data
Cache Key Structure
app|resources|CLUSTER_URL|NAMESPACE: resource tree cache
git|manifest|REPO_URL|REVISION|PATH: manifest cache
repo|connection|REPO_URL: connection state cache
7. Dex Internals
Role
Dex is an OIDC (OpenID Connect) Identity Provider that provides SSO capabilities to ArgoCD.
Supported Connectors
| Connector | Description |
|---|---|
| LDAP | Active Directory and LDAP server integration |
| SAML | SAML 2.0 protocol support |
| GitHub | GitHub OAuth authentication |
| GitLab | GitLab OAuth authentication |
| Google | Google OIDC authentication |
| OIDC | Generic OIDC provider integration |
Detailed Authentication Flow
1. User clicks SSO login in ArgoCD UI
2. API Server redirects to Dex authorization endpoint
3. Dex delegates auth to configured connector (e.g., GitHub)
4. User authenticates with external IdP
5. External IdP redirects to Dex callback URL
6. Dex generates ID Token (includes user info and groups)
7. API Server validates ID Token and issues ArgoCD JWT
8. JWT includes user group information (used for RBAC)
8. AppProject and Access Control
AppProject Resource Structure
AppProject is the logical isolation unit in ArgoCD:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-alpha
  namespace: argocd
spec:
  description: 'Team Alpha project'
  # Allowed source repositories
  sourceRepos:
    - 'https://github.com/org/team-alpha-*'
  # Allowed target clusters/namespaces
  destinations:
    - server: 'https://kubernetes.default.svc'
      namespace: 'team-alpha-*'
  # Allowed cluster-scoped resources (restricted)
  clusterResourceWhitelist:
    - group: ''
      kind: Namespace
  # Denied namespace-scoped resources
  namespaceResourceBlacklist:
    - group: ''
      kind: ResourceQuota
    - group: ''
      kind: LimitRange
  # Role definitions
  roles:
    - name: developer
      description: 'Development team role'
      policies:
        - p, proj:team-alpha:developer, applications, get, team-alpha/*, allow
        - p, proj:team-alpha:developer, applications, sync, team-alpha/*, allow
      groups:
        - team-alpha-devs
```
Restriction Fields
| Field | Description |
|---|---|
| sourceRepos | Allowed Git repository URL patterns |
| destinations | Allowed deployment clusters and namespaces |
| clusterResourceWhitelist | Allowed cluster-scoped resources |
| namespaceResourceBlacklist | Denied namespace-scoped resources |
| signatureKeys | Required GPG signature keys |
| syncWindows | Allowed/denied sync time windows |
9. gRPC Communication Architecture
Internal Communication Flow
```
Application Controller
  --gRPC--> Repository Server (manifest generation requests)
  --gRPC--> API Server (status updates)

API Server
  --gRPC--> Repository Server (repository validation)
  --HTTP--> Dex (authentication)
  --TCP---> Redis (cache/sessions)

Repository Server
  --TCP---> Redis (manifest cache)
  --HTTPS-> Git Server (repository clone)
```
TLS Configuration
Inter-component gRPC communication is protected by TLS by default:
- API Server generates and manages TLS certificates
- Each component loads certificates from argocd-server-tls Secret
- mTLS (mutual TLS) can be optionally enabled
10. Summary
ArgoCD architecture summarized:
- API Server: External interface + authentication/authorization gateway
- Repository Server: Single responsibility for manifest generation (Helm, Kustomize, Plain YAML, CMP)
- Application Controller: Core engine for state monitoring + sync execution
- Redis: Cache layer for performance optimization
- Dex: Authentication broker for SSO integration
These components work together via gRPC to form a stable and scalable GitOps platform. In the next post, we will dive deeper into the Sync engine internals.