Vibe Coding Team Collaboration Guide: CI/CD, Code Quality & Conflict-Free Contribution
Author: Youngju Kim (@fjvbn20031)
AI-powered Vibe Coding dramatically improves individual productivity, but introduces new challenges in team settings. AI-generated code may conflict with teammates' code, inconsistent styles can mix together, and CI/CD pipelines may struggle to keep pace with the speed of AI code generation. This guide comprehensively covers Git strategies, CI/CD pipelines, code quality management, and Skills file usage to help Vibe Coding teams collaborate efficiently.
1. Git Branching Strategy: Trunk-Based Development vs GitFlow
1.1 Why GitFlow Doesn't Fit Vibe Coding Teams
GitFlow uses a complex branch structure including release branches, develop branches, feature branches, and hotfix branches. This structure is inefficient in AI-assisted development.
GitFlow problems in Vibe Coding environments:
- The faster AI generates code, the longer feature branches live, causing merge hell
- Merge conflicts when integrating develop into main cause loss of AI context
- High overhead managing release branches
1.2 Trunk-Based Development (TBD)
TBD is a strategy where all developers commit to the main branch in short cycles (at least once per day). It is ideal for Vibe Coding teams.
Core TBD principles:
- Branch lifespan: 2 days maximum (use feature flags to merge directly to main)
- Frequent integration: Push to main 2-3 times per day
- Full automation: CI must pass for every merge
- Small PRs: Under 400 lines recommended
TBD Branch Strategy:
main (always releasable)
├── feature/us-123-recommendation-api (lifespan: 1-2 days)
├── feature/us-124-websocket-updates (lifespan: 1-2 days)
└── fix/recommendation-null-check (lifespan: 4 hours)
Hiding unfinished code with feature flags:
// src/config/feature-flags.ts
export const FEATURE_FLAGS = {
  RECOMMENDATION_V2: process.env.FEATURE_RECOMMENDATION_V2 === 'true',
  WEBSOCKET_REAL_TIME: process.env.FEATURE_WEBSOCKET === 'true',
} as const;

// Usage in component
function HomePage() {
  return (
    <div>
      {FEATURE_FLAGS.RECOMMENDATION_V2 && <RecommendationSection />}
      <ProductList />
    </div>
  );
}
1.3 Git Rules for Vibe Coding Teams
# Recommended workflow
git checkout -b feature/us-123-recommendation-api
# Develop with Claude (multiple small commits)
git commit -m "feat(recommendation): add purchase history query"
git commit -m "test(recommendation): add unit tests for recommendation service"
git commit -m "feat(recommendation): add recommendation API endpoint"
# Create PR → CI passes → Code review → Merge
2. Conventional Commits + Automated Changelog
2.1 Conventional Commits Specification
Conventional Commits is a specification that gives structural meaning to commit messages. It is especially important in AI-assisted development.
Format:
type(scope): description
[optional body]
[optional footer(s)]
Type List:
| Type | Meaning | Version Impact |
|---|---|---|
| feat | New feature | Minor version bump |
| fix | Bug fix | Patch version bump |
| docs | Documentation only | None |
| style | Formatting, semicolons | None |
| refactor | Refactoring | None |
| test | Add/modify tests | None |
| chore | Build config, packages | None |
| perf | Performance improvement | Patch version bump |
| BREAKING CHANGE (! after type, or footer) | Breaks backward compatibility | Major version bump |
Example:
feat(recommendation): add purchase history based product recommendations
- Implement collaborative filtering algorithm
- Add RecommendationService with top-10 product selection
- Exclude already purchased products from recommendations
Closes #123
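The header format above can also be checked mechanically, e.g. in a commit hook or CI step. A minimal sketch of such a check (it covers only the types in this guide's table, not the full Conventional Commits grammar):

```typescript
// Allowed types, mirroring the table above (sketch; the spec permits custom types).
const TYPES = ['feat', 'fix', 'docs', 'style', 'refactor', 'test', 'chore', 'perf'] as const;

// Matches: type(optional-scope)!?: description
const HEADER_RE = /^(\w+)(\(([^)]+)\))?(!)?: (.+)$/;

export function parseCommitHeader(header: string) {
  const m = HEADER_RE.exec(header);
  if (!m) return null;
  const [, type, , scope, bang, description] = m;
  if (!(TYPES as readonly string[]).includes(type)) return null;
  return { type, scope: scope ?? null, breaking: bang === '!', description };
}
```

For example, `parseCommitHeader('feat(recommendation): add purchase history query')` yields the type, scope, and description, while a free-form message yields `null` and can be rejected before the commit lands.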
2.2 Automated Changelog Generation
# Install conventional-changelog
npm install --save-dev @commitlint/cli @commitlint/config-conventional
npm install --save-dev conventional-changelog-cli
# .commitlintrc.json
{
  "extends": ["@commitlint/config-conventional"]
}

# package.json scripts
{
  "scripts": {
    "changelog": "conventional-changelog -p angular -i CHANGELOG.md -s",
    "release": "standard-version"
  }
}
Auto-generated CHANGELOG example:
# Changelog
## [1.3.0] - 2026-03-17
### Features
- **recommendation:** add purchase history based product recommendations
### Bug Fixes
- **cart:** fix quantity update when adding duplicate items
### Performance
- **recommendation:** add Redis caching for recommendation results
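The version number in that changelog follows Semantic Versioning, derived from the commit types. Tools like standard-version apply roughly the rule below; this is a simplified sketch (it ignores pre-release tags and 0.x conventions):

```typescript
// Map Conventional Commit types in a release to a SemVer bump (simplified sketch).
type Bump = 'major' | 'minor' | 'patch' | null;

export function bumpFromCommits(types: string[], hasBreaking: boolean): Bump {
  if (hasBreaking) return 'major';           // BREAKING CHANGE or "!" marker
  if (types.includes('feat')) return 'minor';
  if (types.some(t => ['fix', 'perf'].includes(t))) return 'patch';
  return null;                               // docs/style/chore-only release: no bump
}

export function nextVersion(current: string, bump: Bump): string {
  const [major, minor, patch] = current.split('.').map(Number);
  if (bump === 'major') return `${major + 1}.0.0`;
  if (bump === 'minor') return `${major}.${minor + 1}.0`;
  if (bump === 'patch') return `${major}.${minor}.${patch + 1}`;
  return current;
}
```

So a release containing `feat` and `fix` commits on `1.2.4` produces `1.3.0`, matching the `[1.3.0]` entry above.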
3. PR Strategy: Small PRs, AI-Assisted Code Review, Branch Protection
3.1 The Principle of Small PRs
In Vibe Coding environments, AI rapidly generates code and it is tempting to create large PRs. Resist this temptation.
PR Size Guidelines:
- Ideal: under 200 lines
- Maximum: 400 lines (exceptional)
- Never: PRs over 1000 lines
How to split large PRs:
- Vertical split: Split by layer (DB layer PR → Service layer PR → API PR)
- Horizontal split: Use feature flags, separate tests from implementation
- Refactoring + feature split: Refactoring in a separate PR
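To enforce these size limits in CI rather than by convention, the added-plus-deleted line count can be read from `git diff --numstat`. A small sketch (the thresholds mirror the guideline above; binary files, which numstat reports as `-`, are skipped):

```typescript
// Classify a PR by total changed lines from `git diff --numstat` output.
// Each numstat line is: <added>\t<deleted>\t<path>; "-" marks binary files.
export function classifyPrSize(numstat: string): 'ok' | 'large' | 'too-large' {
  const total = numstat
    .trim()
    .split('\n')
    .filter(Boolean)
    .reduce((sum, line) => {
      const [added, deleted] = line.split('\t');
      const a = added === '-' ? 0 : Number(added);
      const d = deleted === '-' ? 0 : Number(deleted);
      return sum + a + d;
    }, 0);
  if (total <= 200) return 'ok';         // ideal
  if (total <= 400) return 'large';      // exceptional maximum
  return 'too-large';                    // should be split
}
```

A CI step can fail the build (or just comment on the PR) whenever the result is `'too-large'`.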
3.2 Automated AI-Assisted Code Review
Automated review with Claude in GitHub Actions:
# .github/workflows/ai-code-review.yml
name: AI Code Review

on:
  pull_request:
    types: [opened, synchronize]

jobs:
  ai-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Get PR diff
        id: diff
        run: |
          git diff origin/main...HEAD > pr_diff.txt
          echo "diff_size=$(wc -l < pr_diff.txt)" >> $GITHUB_OUTPUT
      - name: AI Review with Claude API
        if: steps.diff.outputs.diff_size < 1000
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          node scripts/ai-review.js
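The referenced `scripts/ai-review.js` is not shown here. Its core job is assembling a review prompt from the diff before calling the model; a minimal sketch of that piece (the prompt wording and the 50k-character budget are assumptions, and the Anthropic API call itself is omitted):

```typescript
// Build a review prompt from a PR diff (sketch; the size limit is an assumed
// context budget, and the actual model call is intentionally left out).
const MAX_DIFF_CHARS = 50_000;

export function buildReviewPrompt(diff: string): string {
  const truncated =
    diff.length > MAX_DIFF_CHARS
      ? diff.slice(0, MAX_DIFF_CHARS) + '\n[diff truncated]'
      : diff;
  return [
    'You are reviewing a pull request. For each issue, report',
    '[Critical/Major/Minor], the file and line, and a suggested fix.',
    'Focus on correctness, security, and performance.',
    '',
    '<diff>',
    truncated,
    '</diff>',
  ].join('\n');
}
```

The script would send this prompt to the Claude API and post the response as a PR comment via the GitHub API.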
3.3 Branch Protection Rules
GitHub Branch Protection Rules setup:
# Set via GitHub CLI
gh api repos/OWNER/REPO/branches/main/protection \
  --method PUT \
  --field required_status_checks='{"strict":true,"contexts":["ci/build","ci/test","ci/lint"]}' \
  --field enforce_admins=true \
  --field required_pull_request_reviews='{"required_approving_review_count":1,"dismiss_stale_reviews":true}' \
  --field restrictions=null
Required status checks:
- ci/build: Build succeeds
- ci/test: Tests pass (80%+ coverage)
- ci/lint: ESLint, Prettier pass
- ci/security: Security vulnerability scan
CODEOWNERS setup:
# .github/CODEOWNERS
# Core architecture files
/src/core/ @senior-dev @tech-lead
/src/domain/ @domain-expert
# Security related
/src/auth/ @security-team
*.env* @security-team
# CI/CD config
/.github/ @devops-team
/docker/ @devops-team
4. GitHub Actions CI/CD Pipeline
4.1 Complete CI/CD Pipeline Example
# .github/workflows/ci-cd.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  NODE_VERSION: '20'
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  # ─── CI Stage ────────────────────────────────────────
  lint:
    name: Lint & Format Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - run: npm run lint
      - run: npm run format:check

  type-check:
    name: TypeScript Type Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - run: npm run type-check

  test-unit:
    name: Unit Tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - run: npm run test:unit -- --coverage
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          flags: unit

  test-integration:
    name: Integration Tests
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: testpassword
          POSTGRES_DB: testdb
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
      redis:
        image: redis:7
        ports:
          - 6379:6379
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - run: npm run test:integration
        env:
          DATABASE_URL: postgresql://postgres:testpassword@localhost:5432/testdb
          REDIS_URL: redis://localhost:6379

  security-scan:
    name: Security Scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          format: 'sarif'
          output: 'trivy-results.sarif'
      - name: Upload Trivy scan results
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: 'trivy-results.sarif'
      - name: npm audit
        run: npm audit --audit-level=high

  mutation-test:
    name: Mutation Testing
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - run: npx stryker run
      - name: Comment mutation score on PR
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs');
            const report = JSON.parse(fs.readFileSync('reports/mutation/mutation.json', 'utf8'));
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `Mutation Score: ${report.metrics.mutationScore.toFixed(1)}%`
            });

  # ─── CD Stage (main branch only) ──────────────────
  build-image:
    name: Build & Push Docker Image
    runs-on: ubuntu-latest
    needs: [lint, type-check, test-unit, test-integration, security-scan]
    if: github.ref == 'refs/heads/main'
    outputs:
      image-digest: ${{ steps.build.outputs.digest }}
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        id: build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest

  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: [build-image]
    environment: staging
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to Kubernetes Staging
        run: |
          kubectl set image deployment/app \
            app=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}@${{ needs.build-image.outputs.image-digest }} \
            --namespace=staging
          kubectl rollout status deployment/app --namespace=staging

  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    # build-image must be listed in needs so its image-digest output is available here
    needs: [build-image, deploy-staging]
    environment:
      name: production
      url: https://myapp.com
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to Production
        run: |
          kubectl set image deployment/app \
            app=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}@${{ needs.build-image.outputs.image-digest }} \
            --namespace=production
          kubectl rollout status deployment/app --namespace=production
5. Skills Files: Shared Team AI Workflows
5.1 What Are Skills Files?
Skills files (in the .claude/commands/ directory) are reusable AI workflows shared by the team. They are executed with slash commands like /commit and /pr-description.
Directory structure:
.claude/
  commands/
    commit.md
    pr-description.md
    test.md
    review.md
    docs.md
    refactor.md
    security-check.md
    changelog.md
5.2 Key Skills File Examples
/commit — Automatic commit message generation:
# commit
Analyzes staged changes and generates a commit message following the Conventional Commits specification.
## Rules
- Use feat/fix/docs/style/refactor/test/chore/perf types
- scope is the affected module/component (optional)
- Subject: under 50 chars, present tense, lowercase first letter
- Body: explain what and why was changed
- Add BREAKING CHANGE in footer if applicable
## Output format
type(scope): description
- Change 1
- Change 2
Closes #issue-number (if applicable)
/pr-description — Automatic PR description generation:
# pr-description
Analyzes changes in the current branch and generates a PR description.
## Output format
### Summary of Changes
[1-3 line summary]
### Work Done
- [ ] Implementation item 1
- [ ] Implementation item 2
### How to Test
1. Step-by-step testing instructions
### Screenshots (if UI changes)
[Add screenshots here]
### Related Issues
Closes #issue-number
### Checklist
- [ ] Unit tests written
- [ ] Integration tests passing
- [ ] CLAUDE.md rules followed
- [ ] Documentation updated (if needed)
/test — Automatic test case generation:
# test
Generates comprehensive tests for the selected file or function.
## Includes
1. Unit tests (Jest/Vitest)
- Happy path cases
- Edge cases (boundary values, null, undefined)
- Error cases
2. Mock setup
3. Property-based tests (for complex business logic)
## Rules
- Follow test coverage goals in CLAUDE.md
- Use AAA pattern (Arrange/Act/Assert)
- Use async/await for async tests
/review — Perform code review:
# review
Performs an in-depth code review on current changes or specified files.
## Review Checklist
### Code Quality
- [ ] SOLID principles compliance
- [ ] DRY principle (no duplicate code)
- [ ] Single responsibility for functions/classes
- [ ] Naming conventions (per CLAUDE.md)
### Security
- [ ] SQL injection risk
- [ ] XSS vulnerabilities
- [ ] Sensitive data exposure (secrets, PII)
- [ ] Authentication/authorization validation
### Performance
- [ ] N+1 query problems
- [ ] Unnecessary re-renders
- [ ] Memory leak risks
### Testing
- [ ] Sufficient test coverage
- [ ] Edge case coverage
## Output Format
For each issue: [Critical/Major/Minor] description + improved code example
/refactor — Refactoring suggestions:
# refactor
Generates refactoring suggestions for selected code.
## Analysis Criteria
1. Extract Method: Split long functions into meaningful functions
2. Extract Class: Separate classes violating Single Responsibility
3. Replace Conditional with Polymorphism: Remove if/else chains
4. Introduce Parameter Object: Functions with 3+ parameters
5. Remove Duplicate Code: Consolidate duplicated logic
## Output Format
- Explanation of problems in original code
- Refactored code
- Reason for change
- Additional considerations
/docs — Automatic documentation generation:
# docs
Generates documentation for selected code.
## Generated Items
1. JSDoc/TSDoc comments
2. README.md (module or function description)
3. API documentation (OpenAPI/Swagger format)
## Rules
- Include code examples
- Explain parameter types and meanings
- Describe return values
- Document exception cases
6. Static Analysis: ESLint, Prettier, SonarQube
6.1 ESLint Configuration
// eslint.config.js (ESLint v9 flat config)
import js from '@eslint/js'
import typescript from '@typescript-eslint/eslint-plugin'
import tsParser from '@typescript-eslint/parser'
import prettier from 'eslint-config-prettier'
import importPlugin from 'eslint-plugin-import'
import sonarjs from 'eslint-plugin-sonarjs'

export default [
  js.configs.recommended,
  {
    files: ['**/*.ts', '**/*.tsx'],
    languageOptions: {
      parser: tsParser,
      parserOptions: {
        project: './tsconfig.json',
      },
    },
    plugins: {
      '@typescript-eslint': typescript,
      import: importPlugin,
      sonarjs: sonarjs,
    },
    rules: {
      // TypeScript rules (strictNullChecks itself is a tsconfig.json compiler
      // option, not an ESLint rule)
      '@typescript-eslint/no-explicit-any': 'error',
      '@typescript-eslint/no-unused-vars': 'error',
      '@typescript-eslint/explicit-function-return-type': 'warn',
      // Complexity limits
      complexity: ['error', 10],
      'max-lines-per-function': ['warn', 30],
      'max-depth': ['error', 3],
      // SonarJS code quality
      'sonarjs/no-duplicate-string': 'warn',
      'sonarjs/cognitive-complexity': ['error', 15],
      'sonarjs/no-identical-functions': 'error',
      // Import ordering
      'import/order': [
        'error',
        {
          groups: ['builtin', 'external', 'internal', 'parent', 'sibling'],
          'newlines-between': 'always',
        },
      ],
    },
  },
  prettier,
]
6.2 Prettier Configuration
{
  "semi": true,
  "trailingComma": "es5",
  "singleQuote": true,
  "printWidth": 100,
  "tabWidth": 2,
  "useTabs": false,
  "bracketSpacing": true,
  "arrowParens": "avoid",
  "endOfLine": "lf"
}
6.3 SonarQube Integration
# .github/workflows/sonarqube.yml
name: SonarQube Analysis

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  sonar:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npm run test:unit -- --coverage
      - name: SonarQube Scan
        uses: sonarsource/sonarqube-scan-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
        with:
          args: >
            -Dsonar.projectKey=my-project
            -Dsonar.javascript.lcov.reportPaths=coverage/lcov.info
            -Dsonar.qualitygate.wait=true
sonar-project.properties:
sonar.projectKey=my-ecommerce-platform
sonar.sources=src
sonar.tests=src
sonar.test.inclusions=**/*.spec.ts,**/*.test.ts
sonar.coverage.exclusions=**/*.spec.ts,**/*.test.ts,**/index.ts
7. Conflict Prevention Strategies
7.1 Minimizing Conflicts with Feature Flags
Feature flags are a powerful tool for safely merging unfinished code into the main branch.
// src/config/feature-flags.ts
import { z } from 'zod'

const featureFlagSchema = z.object({
  RECOMMENDATION_V2: z.boolean().default(false),
  WEBSOCKET_REAL_TIME: z.boolean().default(false),
  NEW_CHECKOUT_FLOW: z.boolean().default(false),
  AI_SEARCH: z.boolean().default(false),
})

function loadFeatureFlags() {
  return featureFlagSchema.parse({
    RECOMMENDATION_V2: process.env.FEATURE_RECOMMENDATION_V2 === 'true',
    WEBSOCKET_REAL_TIME: process.env.FEATURE_WEBSOCKET === 'true',
    NEW_CHECKOUT_FLOW: process.env.FEATURE_NEW_CHECKOUT === 'true',
    AI_SEARCH: process.env.FEATURE_AI_SEARCH === 'true',
  })
}

export const FEATURES = loadFeatureFlags()
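Boolean environment flags cover the merge-safety use case; gradual rollout additionally needs a stable per-user decision. A deterministic sketch (the FNV-1a hash and the flag/user bucketing scheme are illustrative assumptions, not a library API):

```typescript
// Deterministic percentage rollout: the same user always gets the same answer
// for a given flag, so a flag can be ramped from 0% to 100% without flapping.
// FNV-1a is an illustrative hash choice, not a requirement.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

export function isEnabledForUser(flag: string, userId: string, rolloutPercent: number): boolean {
  const bucket = fnv1a(`${flag}:${userId}`) % 100; // stable bucket in 0..99
  return bucket < rolloutPercent;
}
```

Hashing the flag name together with the user ID means different flags ramp over different user subsets, which keeps experiments independent.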
7.2 Minimizing Conflicts with Modular Architecture
Clearly dividing boundaries so team members work on different modules can greatly reduce merge conflicts.
src/
  modules/
    user/            # Team member A owns
      domain/
      application/
      infrastructure/
      presentation/
    product/         # Team member B owns
    order/           # Team member C owns
    recommendation/  # Team member D owns
  shared/            # Shared code (minimize changes)
    utils/
    types/
    errors/
7.3 Using AI to Resolve Conflicts
A merge conflict occurred. Analyze both changes and suggest the correct merge:
<<<<<<< HEAD (my changes)
[paste code]
=======
[paste their code]
>>>>>>> feature/recommendation
Identify the intent of both changes and generate a merged result that preserves both.
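Before pasting into a prompt like this, the conflict hunks first have to be pulled out of the file. A minimal parser sketch (it handles the standard three-marker form only, not `merge.conflictStyle=diff3` output, and a plain code line of seven `=` signs would confuse it):

```typescript
// Extract { ours, theirs } pairs from standard Git conflict markers.
export interface ConflictHunk {
  ours: string;
  theirs: string;
}

export function parseConflicts(text: string): ConflictHunk[] {
  const hunks: ConflictHunk[] = [];
  let state: 'outside' | 'ours' | 'theirs' = 'outside';
  let ours: string[] = [];
  let theirs: string[] = [];
  for (const line of text.split('\n')) {
    if (line.startsWith('<<<<<<<')) {
      state = 'ours';
      ours = [];
      theirs = [];
    } else if (line.startsWith('=======') && state === 'ours') {
      state = 'theirs';
    } else if (line.startsWith('>>>>>>>') && state === 'theirs') {
      hunks.push({ ours: ours.join('\n'), theirs: theirs.join('\n') });
      state = 'outside';
    } else if (state === 'ours') {
      ours.push(line);
    } else if (state === 'theirs') {
      theirs.push(line);
    }
  }
  return hunks;
}
```

Each extracted pair can then be dropped into the prompt template above, one hunk at a time, which keeps the AI focused on a single conflict.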
8. Technical Debt Management with AI
8.1 Technical Debt Patterns in AI-Generated Code
Rapid development with Vibe Coding can accumulate technical debt.
Common technical debt in AI-generated code:
- Over-abstraction (unnecessary interfaces/layers)
- Unnecessary dependency additions
- Tightly coupled code that's hard to test
- Inconsistent error handling generated without context
- Duplicate logic (AI regenerates logic unaware of other files)
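The duplicate-logic pattern in particular can be flagged mechanically before any review session. A naive sketch that reports exported names declared in more than one module (name collision is only a cheap heuristic for re-implemented logic, and the input shape here is an assumption, e.g. produced by a small AST pass):

```typescript
// Flag exported names declared in more than one file — a cheap heuristic for
// the "AI re-implemented an existing helper" debt pattern.
// Assumed input shape: map of file path -> exported identifier names.
export function findDuplicateExports(
  exportsByFile: Record<string, string[]>,
): Map<string, string[]> {
  const owners = new Map<string, string[]>();
  for (const [file, names] of Object.entries(exportsByFile)) {
    for (const name of names) {
      owners.set(name, [...(owners.get(name) ?? []), file]);
    }
  }
  // Keep only names exported from two or more files.
  return new Map([...owners].filter(([, files]) => files.length > 1));
}
```

Running a check like this in CI turns "AI regenerated an existing helper" from a review-time discovery into an automatic warning.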
8.2 Regular Technical Debt Reviews
# Technical Debt Review Prompt (at Sprint end)
Analyze the following codebase and identify technical debt:
[list of changed files]
Analysis items:
1. Duplicate code (DRY violations)
2. Excessive complexity (cyclomatic complexity over 10)
3. Unnecessary abstraction layers
4. Inconsistent error handling
5. Areas with low test coverage
For each item:
- Location (file, line)
- Severity (Critical/Major/Minor)
- Estimated refactoring time
- Recommended approach
8.3 Tech Debt Sprint Allocation
# Allocate 20% of every Sprint to resolving technical debt
# GitHub Issues label strategy
labels:
  - name: tech-debt
    color: '#e4e669'
    description: Technical debt issue
  - name: ai-generated
    color: '#7c3aed'
    description: AI-generated code needs review
  - name: needs-refactor
    color: '#f97316'
    description: Needs refactoring
Quiz
Q1: Why is Trunk-Based Development (TBD) more advantageous than GitFlow for Vibe Coding teams?
Answer: TBD has short branch lifespans (2 days max), enabling rapid integration of AI-generated code, minimizing merge conflicts, and enabling continuous integration.
Explanation: In GitFlow, feature branches can live for days to weeks. When AI rapidly generates code, branches live even longer, causing frequent merge conflicts. TBD merges to main multiple times per day, so AI-generated code is rapidly integrated and the whole team stays on the latest codebase. Feature flags allow safely merging even incomplete features.
Q2: How is BREAKING CHANGE expressed in Conventional Commits and what version impact does it have?
Answer: Add BREAKING CHANGE: description in the commit message footer, or append an exclamation mark (!) after the type. This increments the Major version in Semantic Versioning.
Explanation: Example: feat!: remove legacy API endpoint or in footer: BREAKING CHANGE: /api/v1/users endpoint removed, use /api/v2/users. In Semantic Versioning (SemVer), changes that break backward compatibility increment the Major version (1.0.0 → 2.0.0). Automated changelog tools (standard-version, release-please) detect this and automatically update versions.
Q3: What are the 3 main benefits of Skills files (/commit, /review, etc.)?
Answer: Maintains consistent AI workflows across the entire team, automates repetitive work with reusable prompts, and automatically applies team rules (coding conventions, quality standards) to AI.
Explanation: Without Skills files, each team member uses different prompts, resulting in inconsistent quality and style in AI output. Version-controlling (Git) Skills files allows managing team AI workflows just like code. For example, the /commit command always generates messages in Conventional Commits format, and /review automatically applies the team's security checklist.
Q4: What are the 4 core metrics in SonarQube Quality Gate?
Answer: New code coverage (80%+), duplicate code ratio (3% or less), Blocker issues (0), Critical issues (0).
Explanation: SonarQube Quality Gate automatically checks code quality standards before PR merges. New code coverage is the test ratio for added code. Duplicate code detects DRY principle violations. Blocker/Critical issues find security vulnerabilities or bugs. Integrating SonarQube into CI/CD pipelines automatically blocks merging PRs below the quality threshold.
Q5: What are the 3 main reasons to use feature flags?
Answer: Safely integrating unfinished code into the main branch (TBD support), gradual rollout to specific users/environments, and enabling A/B testing and rapid rollback.
Explanation: Without feature flags, using TBD would expose unfinished features in production. Managing feature flags as environment variables allows running different feature sets in development, staging, and production from the same codebase. In Vibe Coding environments where AI rapidly adds features, the strategy of gradually activating only completed features with feature flags is critical.
Q6: What are 3 common technical debt patterns in AI-generated code?
Answer: Unnecessary abstraction (excessive interfaces/layers), duplicate logic (AI unaware of other files' context), and inconsistent error handling.
Explanation: Because AI only knows information within its context window, it often regenerates utility functions that already exist in other files. Also, different error handling patterns are used across different AI sessions, reducing code consistency. Technical debt must be managed by specifying forbidden patterns in CLAUDE.md and through regular technical debt review Sprints.
Summary
Success in Vibe Coding team collaboration rests on three core pillars:
1. Git Strategy: Maintain short-lived branches and frequent integration with Trunk-Based Development. Safely manage unfinished code with feature flags, and implement automated releases with Conventional Commits.
2. CI/CD Automation: Fully automate lint, type-check, unit/integration tests, security scans, Docker builds, and deployments with GitHub Actions. Every PR must pass CI before it can be merged.
3. Code Quality: Enforce code quality standards with ESLint + Prettier + SonarQube, and standardize team AI workflows with Skills files. Maintain the quality of AI-generated code through regular technical debt reviews.
Teams with these three pillars well-established can convert AI's speed into team-wide productivity gains.