SI Industry AI Transformation: Korean Tech Giants AX Strategy and Engineer Survival Guide

Author: Youngju Kim (@fjvbn20031)
- 1. The Current State of the SI Industry and AI Transformation
- 2. Deep Dive: AI Strategies of Korea's Top 3 SI Companies
- 3. AI Application Patterns in SI Projects (Real-World Cases)
- 4. AI Skill Transformation Required for SI Engineers
- 5. Career Strategies for SI Engineers in the AI Era
- 6. The Future of SI: 2026-2030 Outlook
- 7. 15 Interview Questions + Learning Roadmap
- Quiz
- References
1. The Current State of the SI Industry and AI Transformation
1-1. Global SI Market Size and Growth Outlook
The global Systems Integration (SI) market is valued at approximately $553 billion in 2025 and is projected to reach $764 billion by 2030 (CAGR 6.7%). The core growth drivers are AI and cloud.
| Metric | Value |
|---|---|
| Global SI Market (2025) | $553 billion |
| Global SI Market (2030 projected) | $764 billion |
| CAGR (2025-2030) | 6.7% |
| Korea IT Services Market (2025) | Approx. KRW 28 trillion |
| AI-related SI project share | 40%+ (2026 forecast) |
1-2. Korea's IT Services Market Structure
Korea's IT services market has a unique chaebol-affiliated SI structure.
Tier 1 - Chaebol-Affiliated SI
- Samsung SDS: Samsung Group IT services + external business expansion
- LG CNS: LG Group IT services + accelerating independence through IPO
- SK C&C: SK Group IT services + AI/DT business expansion
Tier 2 - Mid-sized SI / Specialized Companies
- POSCO DX, Hyundai AutoEver, Hanwha Systems
- KT DS, NHN Cloud, Kakao Enterprise
Tier 3 - Global SI Korea Offices
- Accenture, Deloitte, IBM, Infosys
Tier 4 - SMB SI / Startups
- Thousands of small SI firms (subcontracting structure)
1-3. From DX to AX: What the Shift Means
DX (Digital Transformation) was about digitizing existing business. The focus was on cloud migration, mobile app development, and legacy system modernization.
AX (AI Transformation) goes a step further. It means integrating AI as a core business capability.
| DX (2015-2023) | AX (2024-2030+) |
|---|---|
| Cloud migration | AI-native architecture |
| Mobile apps | AI agent services |
| Data warehouse | AI/ML pipelines |
| RPA automation | Agentic AI automation |
| Dashboard reporting | Predictive/generative AI analytics |
1-4. Why AI Matters for SI
According to the McKinsey 2024 Global AI Survey:
- 88% of organizations use AI in at least one business function
- 65% of organizations have adopted generative AI in one or more functions
- 25% of AI adopters confirmed cost reduction benefits
Gartner predicts that by 2028, 70% of AI applications will be built as multi-agent systems.
What does this mean for SI companies?
- AI extension of existing projects: Exploding demand for AI features in all enterprise systems
- New project types: AI agents, RAG systems, MLOps pipelines
- Workforce restructuring: Pressure to transition from traditional developers to AI engineers
- Pricing model changes: From man-month billing to AI performance-based models
2. Deep Dive: AI Strategies of Korea's Top 3 SI Companies
2-1. Samsung SDS: Full-Stack AI Strategy
Business Overview (2024)
| Metric | Value |
|---|---|
| Total Revenue | KRW 13.83 trillion |
| Cloud Business Revenue | KRW 2.32 trillion (+23.5% YoY) |
| Operating Profit | KRW 876 billion |
| Employees | Approx. 23,000 |
AI Strategy: From Infrastructure to Consulting
Samsung SDS's AI strategy takes a full-stack approach, covering everything from infrastructure to platform, applications, and consulting.
[Samsung SDS Full-Stack AI Architecture]
Layer 4: AI Consulting / Business Application
└── Industry-specific AI solutions (manufacturing, finance, logistics)
└── AI adoption consulting / PoC services
Layer 3: AI Applications
└── Brity Copilot (enterprise AI assistant)
└── Brity Automation (RPA + AI)
└── ChatGPT Enterprise (exclusive Korean reseller)
Layer 2: AI Platform
└── FabriX (AI/data integration platform)
└── Brity Works (workflow automation)
Layer 1: AI Infrastructure
└── Samsung Cloud Platform (SCP)
└── National AI Computing Center contract
└── GPU cluster operations
Key Product Analysis
FabriX - AI/Data Platform
FabriX is Samsung SDS's enterprise AI platform. It provides an integrated environment for enterprises to train and deploy AI models using their own data.
Key features:
- Data collection/preprocessing pipelines
- AI model training/deployment management
- RAG (Retrieval-Augmented Generation) pipelines
- Prompt management and A/B testing
- Model monitoring and cost optimization
Brity Copilot - Enterprise AI Assistant
Brity Copilot is an enterprise AI assistant already deployed at scale within Samsung Group.
Functional areas:
- Document summarization/generation (reports, emails, meeting notes)
- Code generation/review (for developers)
- Data analysis (SQL generation, chart creation)
- Internal knowledge search (RAG-based)
OpenAI ChatGPT Enterprise Exclusive Korean Reseller
Samsung SDS exclusively distributes OpenAI's ChatGPT Enterprise in the Korean market. This is not simple reselling but an enterprise package combined with Samsung SDS's security/compliance solutions.
Hiring Trends
Samsung Group announced a plan to hire 60,000 people over 5 years, with AI/software talent as the core focus. Key hiring areas at Samsung SDS:
- AI/ML Engineers
- Cloud Architects
- Data Engineers
- AI Solution Consultants
2-2. LG CNS: IPO and AI Coding Platform
Business Overview (2024-2025)
| Metric | Value |
|---|---|
| IPO Raised | Approx. KRW 827 billion (approx. $570M) |
| Valuation | Approx. KRW 5.9 trillion (approx. $4.1B) |
| Cloud/AI Revenue | KRW 3.35 trillion (56% of total) |
| Employees | Approx. 7,600 |
AI Strategy: Software Productivity Revolution
LG CNS's AI strategy focuses on transforming software development itself with AI: automating the core SI processes of development, testing, and deployment.
DevOn AI - AI-Based Development Automation Platform
DevOn AI is an ambitious platform that transforms the entire SDLC (Software Development Life Cycle) with AI.
[DevOn AI SDLC Automation]
Requirements Analysis → AI requirement organization/classification
Design → AI architecture suggestion/review
Development → AI code generation (boilerplate automation)
Testing → AI test case generation/execution
Deployment → AI-based CI/CD optimization
Operations → AI monitoring/failure prediction
Key features:
- Customized code generation through corporate codebase learning
- Legacy code analysis and modernization suggestions
- Automated code review and quality checks
- Automatic test code generation
Focus Industries
LG CNS concentrates AI transformation on three industries:
1. Financial AI
- Banking/insurance/securities AI consultation systems
- AI-based risk analysis
- Automated document processing (OCR + LLM)
2. Manufacturing AI
- Smart factory AI (quality prediction, predictive maintenance)
- Supply chain AI optimization
- Digital twin + AI simulation
3. Retail AI
- AI-based demand forecasting
- Personalized recommendation systems
- AI unmanned store solutions
2-3. SK C&C / SK Group: AI Infrastructure and Agents
Business Overview
| Metric | Value |
|---|---|
| SK Group AI Hiring (2025) | 8,000 (AI, semiconductors, digital) |
| SK Telecom AI Investment | KRW 1 trillion+/year |
| AI Data Center | Large-scale Ulsan center under construction |
AI Strategy: Infrastructure + Agent Dual Approach
AI Infrastructure Axis
- Ulsan AI Data Center: Large-scale GPU cluster operations
- NVIDIA partnership for AI computing services
- Cloud-based AI infrastructure service (CloudZ)
AI Agent Axis
- SK Telecom "A." Platform: Telecom-based AI agent
- Agent marketplace: Building a diverse AI agent ecosystem
- Enterprise AI agents: Business automation, customer service
Global AI Partnerships
- Microsoft: Azure-based AI service collaboration
- NVIDIA: AI infrastructure/computing partnership
- Anthropic, OpenAI: LLM model partnerships
SK's Differentiator
SK leverages the synergy of Telecom (SKT) + IT Services (SK C&C) + Semiconductors (SK hynix) to cover the entire AI value chain.
[SK AI Value Chain]
SK hynix: HBM (High Bandwidth Memory) → Core GPU component
↓
SK C&C: AI infrastructure / data centers
↓
SK Telecom: AI services / agent platform
↓
SK Affiliates: AI applications (energy, chemicals, bio, etc.)
3. AI Application Patterns in SI Projects (Real-World Cases)
Pattern 1: Existing System + AI Feature Addition
The most common pattern: Adding AI capabilities to enterprise systems already in production.
Representative Case: RAG-Based Internal Knowledge Search
Solving the problem where existing keyword-based internal portal search makes it difficult to find desired information.
```python
# Simple RAG pipeline example
from langchain.vectorstores import Milvus
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# 1. Vectorize internal documents
embeddings = OpenAIEmbeddings()
vectorstore = Milvus.from_documents(
    documents=company_docs,  # pre-loaded internal documents
    embedding=embeddings,
    collection_name="internal_knowledge"
)

# 2. Configure RAG chain
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o"),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(search_kwargs={"k": 5})
)

# 3. Query
result = qa_chain.run("What is our annual leave policy this year?")
```
Architecture:
[User] → [Existing Portal UI] → [API Gateway]
↓
[AI Service Layer]
├── Embedding Service
├── Vector DB (Milvus)
├── LLM Service (GPT-4o/Claude)
└── Prompt Management
↓
[Existing Backend Systems]
├── Document Management System
├── Internal Wiki
└── Policies/Manuals DB
| Item | Details |
|---|---|
| Tech Stack | Python, FastAPI, LangChain, Milvus, OpenAI API |
| Expected Duration | 3-4 months |
| Team Composition | AI Engineer x2, Backend x1, Frontend x1, PM x1 |
| Key Challenge | Data preprocessing quality, hallucination prevention |
Pattern 2: AI-Based Business Automation (RPA to AI Agent)
Overcoming RPA limitations with AI: Traditional RPA is rule-based and fragile to variations. AI Agents understand context and respond flexibly.
```python
# AI Agent-based document processing example
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import tool

@tool
def extract_invoice_data(document_path: str) -> dict:
    """Extract key data from invoice."""
    # Parse invoice with OCR + LLM
    ocr_text = run_ocr(document_path)
    extracted = llm.extract_structured(
        text=ocr_text,
        schema=InvoiceSchema
    )
    return extracted

@tool
def validate_against_po(invoice_data: dict) -> dict:
    """Validate extracted invoice data against purchase order."""
    po = fetch_purchase_order(invoice_data["po_number"])
    discrepancies = compare(invoice_data, po)
    return {"valid": len(discrepancies) == 0, "issues": discrepancies}

@tool
def submit_for_approval(invoice_data: dict, validation: dict) -> str:
    """Submit validated invoice to approval process."""
    if validation["valid"]:
        return approval_system.submit(invoice_data, auto_approve=True)
    return approval_system.submit(invoice_data, flag_review=True)

# Agent autonomously decides which tools to call in sequence
agent = create_openai_tools_agent(
    llm=ChatOpenAI(model="gpt-4o"),
    tools=[extract_invoice_data, validate_against_po, submit_for_approval],
    prompt=agent_prompt
)
```
| Item | RPA (Before) | AI Agent (After) |
|---|---|---|
| Rule change handling | Manual script update | AI auto-adapts |
| Unstructured data | Cannot process | LLM understands/processes |
| Exception handling | Error then halt | Autonomous judgment/escalation |
| Maintenance cost | High (rule updates) | Low (prompt adjustments) |
Pattern 3: Data Analytics and Prediction
AI-Based Demand Forecasting and Anomaly Detection
```python
# Demand forecasting + LLM interpretation example
import pandas as pd
from prophet import Prophet
from langchain.chat_models import ChatOpenAI

# 1. Time series forecasting (Prophet)
model = Prophet(yearly_seasonality=True)
model.fit(sales_data)  # DataFrame with 'ds' (date) and 'y' (value) columns
forecast = model.predict(future_dates)

# 2. LLM interprets forecast results and generates report
llm = ChatOpenAI(model="gpt-4o")
report = llm.invoke(f"""
Analyze the following demand forecast results and write an executive report:
- Forecast period: Next quarter
- Key figures: Average {forecast['yhat'].mean():.0f} orders/day
- Upper bound: {forecast['yhat_upper'].mean():.0f} orders/day
- Lower bound: {forecast['yhat_lower'].mean():.0f} orders/day
- YoY change: +12%
Derive 3 key insights with business context.
""")
```
Pattern 4: Conversational AI Services
Customer Support Chatbot / Internal Helpdesk
[Conversational AI Service Architecture]
Customer/Employee → [Chat Interface (Web/App/Teams)]
↓
[Conversation Management Layer]
├── Intent Classification
├── Conversation History (Memory)
└── Multi-turn Context Maintenance
↓
[AI Processing Layer]
├── RAG (Knowledge-based responses)
├── Function Calling (System integration)
└── Human Handoff (Agent transfer)
↓
[Backend Systems]
├── CRM / ERP
├── Knowledge Base
└── Ticket System
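The conversation-management layer above can be sketched in plain Python without any framework. This is a minimal illustration only: the intents, keywords, handler names, and handoff rule are all hypothetical assumptions, and in production the routing would call a RAG chain or a function-calling LLM.

```python
# Minimal sketch of the conversation-management layer: classify intent,
# keep per-session history, and hand off to a human when unsure.
# All intents, keywords, and handler names are illustrative assumptions.

INTENT_KEYWORDS = {
    "hr_policy": ["leave", "vacation", "payroll", "benefits"],
    "it_support": ["password", "vpn", "laptop", "login"],
}

def classify_intent(utterance: str) -> str:
    """Return the intent with the most keyword hits, or 'unknown'."""
    text = utterance.lower()
    scores = {
        intent: sum(kw in text for kw in kws)
        for intent, kws in INTENT_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def handle_turn(session: list, utterance: str) -> dict:
    """One conversational turn: classify, route, record history."""
    intent = classify_intent(utterance)
    if intent == "unknown":
        reply = {"route": "human_handoff", "intent": intent}
    else:
        # In production this would invoke a RAG or function-calling backend
        reply = {"route": f"{intent}_handler", "intent": intent}
    session.append({"user": utterance, **reply})
    return reply

session = []
print(handle_turn(session, "How do I reset my VPN password?"))  # routes to it_support_handler
print(handle_turn(session, "Tell me a joke"))  # routes to human_handoff
```

The same shape scales up: swap the keyword scorer for an LLM intent classifier, and the session list for a persistent conversation store.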
Pattern 5: Multi-Agent Workflow
Multi-agent system automating from document processing to approval
[Multi-Agent Document Processing Workflow]
[Document Intake Agent]
| Document classification, metadata extraction
↓
[Data Extraction Agent]
| Extract key data with OCR + LLM
↓
[Validation Agent]
| Compliance check, data consistency verification
↓
[Approval Routing Agent]
| Determine approver based on amount/type
↓
[Notification Agent]
Notify results, trigger follow-up actions
Each agent operates independently and receives the previous agent's output as input. On errors, it automatically retries or escalates to a human.
| Item | Details |
|---|---|
| Tech Stack | Python, LangGraph, CrewAI, FastAPI |
| Expected Duration | 4-6 months |
| Team Composition | AI Engineer x3, Backend x2, Domain Expert x1, PM x1 |
| Key Challenge | Inter-agent state management, error handling, performance optimization |
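The hand-off pattern in the workflow above (each agent consumes the previous agent's output, with retry and human escalation on failure) can be sketched without any orchestration framework. The agent functions, fields, and retry limit below are illustrative stand-ins, not the actual implementation:

```python
# Sketch of a sequential multi-agent pipeline: each stage takes the
# previous stage's output dict; failures are retried, then escalated.
# Agents, field names, and the retry limit are illustrative assumptions.

def intake_agent(doc: dict) -> dict:
    doc["doc_type"] = "invoice"   # document classification stub
    return doc

def extraction_agent(doc: dict) -> dict:
    doc["amount"] = 1200          # OCR + LLM extraction stub
    return doc

def validation_agent(doc: dict) -> dict:
    if doc["amount"] <= 0:
        raise ValueError("invalid amount")
    doc["valid"] = True           # compliance/consistency check passed
    return doc

def run_pipeline(doc: dict, agents, max_retries: int = 2) -> dict:
    """Run each agent in order; retry on error, then escalate to a human."""
    for agent in agents:
        for attempt in range(max_retries + 1):
            try:
                doc = agent(doc)
                break
            except Exception:
                if attempt == max_retries:
                    doc["status"] = "escalated_to_human"
                    return doc
    doc["status"] = "completed"
    return doc

result = run_pipeline({}, [intake_agent, extraction_agent, validation_agent])
print(result["status"])  # completed
```

Frameworks like LangGraph formalize exactly this: the shared dict becomes a typed state object, and the retry/escalation branches become conditional edges.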
4. AI Skill Transformation Required for SI Engineers
4-1. Traditional SI Skills vs AI SI Skills
| Area | Traditional SI | AI SI |
|---|---|---|
| Primary Language | Java/Spring Boot | Python/FastAPI + Java |
| Database | Oracle, MySQL, PostgreSQL | Vector DB (Milvus, pgvector) + RDB |
| Infrastructure | On-premises/VM, WAS | Kubernetes + GPU Clusters |
| Architecture | Monolithic/MSA | AI Pipeline + MSA |
| Testing | Functional testing, performance testing | AI evaluation (LLM-as-judge), A/B testing |
| Documentation | Deliverables-focused (design docs, test results) | Prompt/pipeline docs + experiment logs |
| Project Management | Waterfall/Agile | Experiment-based iteration (MLOps cycle) |
| Client Communication | Feature requirements focus | Explaining AI limitations + expectation management |
4-2. Essential AI Tech Stack
Tier 1: Must Learn Within 6 Months
Python + FastAPI
├── Standard for AI service backends
├── Java developers will be amazed by FastAPI's simplicity
└── Async processing, Pydantic models essential
LangChain / LlamaIndex
├── De facto standard for RAG pipeline construction
├── Integration with various LLMs and vector DBs
└── Understanding Agent, Chain, Tool patterns essential
Vector DB
├── Milvus: Large-scale enterprise environments
├── pgvector: PostgreSQL-based, integrates with existing RDB
└── Understanding embedding generation/search/indexing principles
Prompt Engineering
├── System/User/Assistant role separation
├── Few-shot, Chain-of-Thought techniques
├── Prompt version management and A/B testing
└── Structured output (JSON mode) usage
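The role separation and structured-output items above can be made concrete with plain message dicts. The system/user/assistant message format follows the common chat-API convention; the ticket schema and few-shot example are hypothetical assumptions for illustration:

```python
import json

# System/User/Assistant role separation with a few-shot example and a
# JSON-only output instruction. The ticket schema here is an assumption.
messages = [
    {"role": "system", "content": (
        "Classify the IT ticket. Reply ONLY with JSON: "
        '{"category": "...", "priority": "low|medium|high"}'
    )},
    # Few-shot example: one solved case steers the model toward the format
    {"role": "user", "content": "The VPN is down for the whole team."},
    {"role": "assistant", "content": '{"category": "network", "priority": "high"}'},
    {"role": "user", "content": "Requesting a second monitor."},
]

def parse_structured(raw: str) -> dict:
    """Validate the model's JSON output before using it downstream."""
    data = json.loads(raw)
    assert data["priority"] in {"low", "medium", "high"}
    return data

# A plausible model reply for the last user message:
reply = '{"category": "hardware", "priority": "low"}'
print(parse_structured(reply))
```

Validating structured output at the boundary, as `parse_structured` does, is what lets downstream code treat a probabilistic model like a dependable API.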
Tier 2: Skills for Project Leadership (Within 12 Months)
MLflow / Weights & Biases
├── Experiment tracking and model version management
├── AI project reproducibility
└── Team-level collaboration workflows
Docker + Kubernetes (AI Workloads)
├── GPU node scheduling
├── AI model serving (TorchServe, vLLM, TGI)
├── Auto-scaling configuration
└── Model rolling updates
LangGraph / CrewAI
├── Multi-agent orchestration
├── State machine-based workflows
└── Human-in-the-Loop patterns
AI Security/Governance
├── Prompt injection defense
├── Data privacy (PII masking)
├── AI output filtering
└── Audit logging and monitoring
Tier 3: Skills for Architects and Leaders
AI System Architecture Design
├── LLM routing (cost/performance optimization)
├── Caching strategy (semantic caching)
├── Multi-model orchestration
└── Failure response/fallback design
Cost Optimization
├── Token usage optimization
├── Model selection strategy (small vs large models)
├── Batch processing vs real-time processing
└── GPU resource efficiency
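The token-level cost reasoning behind the model-selection item can be sketched as below. The per-token prices and model names are hypothetical placeholders, not real price quotes; the point is the shape of the calculation, not the numbers:

```python
# Rough cost model for choosing between a small and a large model.
# Prices are HYPOTHETICAL placeholders (USD per 1M tokens), not quotes.
PRICES = {
    "small-model": {"input": 0.15, "output": 0.60},
    "large-model": {"input": 2.50, "output": 10.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request in USD."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

def route_model(task_complexity: float, threshold: float = 0.7) -> str:
    """Simple routing rule: easy tasks go to the cheap model."""
    return "large-model" if task_complexity >= threshold else "small-model"

# 10,000 daily requests, ~1,500 input / 500 output tokens each
daily_small = 10_000 * estimate_cost("small-model", 1500, 500)
daily_large = 10_000 * estimate_cost("large-model", 1500, 500)
print(f"small: ${daily_small:.2f}/day, large: ${daily_large:.2f}/day")
```

Even this toy arithmetic shows why routing matters: at these illustrative rates the large model costs over an order of magnitude more per day for the same traffic.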
AI Project Management
├── AI PoC → Pilot → Full rollout roadmap
├── AI project risk management
├── Data quality management framework
└── Client expectation management
4-3. Soft Skill Changes
SI engineers in the AI era need new soft skills beyond technical capabilities.
1. Ability to Explain AI Limitations to Clients
Client: "Make the AI give 100% accurate answers."
Wrong response: "Yes, it's possible."
Right response: "Current AI technology can achieve approximately 95% accuracy, and for the remaining 5% we propose a Human-in-the-Loop approach where humans review the output."
2. AI Project Scoping (Avoiding Over-Promise)
The biggest risk in AI projects is over-promising. In traditional SI, you could say "This feature will take 3 months." But with AI, results vary dramatically based on data quality and model performance.
3. Data Governance Understanding
Roughly 80% of an AI project's effort is data preparation. Why SI engineers need to understand data governance:
- PIPA (Personal Information Protection Act) compliance
- Data quality management
- Training data bias verification
- Data pipeline design
4. Ethical AI Deployment Awareness
- Fairness verification of AI outputs
- Understanding Explainable AI (XAI) requirements
- AI usage disclosure and transparency
- Clear accountability assignment
5. Career Strategies for SI Engineers in the AI Era
5-1. Survival Strategy (Current SI Professionals)
If you're currently working in the SI industry, here is a step-by-step strategy.
Step 1: Python + AI Fundamentals (3 Months)
Month 1: Python Basics
├── Python syntax (Java developers can pick up quickly)
├── FastAPI basics (REST API development)
└── Project: Build a simple API server
Month 2: AI/ML Fundamentals
├── OpenAI API usage
├── Prompt engineering basics
├── LangChain introduction
└── Project: Build a simple chatbot
Month 3: RAG Fundamentals
├── Understanding embeddings and vector DBs
├── LangChain RAG pipeline
├── Basic prompt optimization
└── Project: FAQ chatbot (RAG-based)
Step 2: RAG Pipeline Construction Practice (2 Months)
Month 4: Advanced RAG
├── Chunking strategies (Semantic, Recursive)
├── Hybrid search (keyword + vector)
├── Reranking (Reranker models)
└── Project: Internal document search system
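The hybrid search item above (keyword + vector) reduces to blending two scores. This is a minimal sketch with tiny hand-made vectors standing in for real embeddings; the documents, vectors, and `alpha` weight are illustrative assumptions:

```python
import math

# Sketch of hybrid search: blend a keyword-overlap score with a
# vector-similarity score. Embeddings are tiny hand-made vectors.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def keyword_score(query: str, text: str) -> float:
    """Fraction of query terms that appear in the document text."""
    terms = query.lower().split()
    return sum(t in text.lower() for t in terms) / len(terms)

def hybrid_search(query, query_vec, docs, alpha=0.5):
    """Score = alpha * keyword + (1 - alpha) * vector similarity."""
    ranked = sorted(
        docs,
        key=lambda d: alpha * keyword_score(query, d["text"])
                      + (1 - alpha) * cosine(query_vec, d["vec"]),
        reverse=True,
    )
    return [d["text"] for d in ranked]

docs = [
    {"text": "annual leave policy", "vec": [0.9, 0.1]},
    {"text": "office parking guide", "vec": [0.1, 0.9]},
]
print(hybrid_search("leave policy", [0.8, 0.2], docs)[0])  # annual leave policy
```

Production systems replace the keyword scorer with BM25 and the linear blend with reciprocal rank fusion, but the two-signal structure is the same.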
Month 5: Agents + Tool Use
├── Function Calling patterns
├── Multi-step agents
├── LangGraph basics
└── Project: Business automation agent
Step 3: Lead an Internal AI PoC (3 Months)
This is the most important step. Learning alone is not enough — you need real experience solving actual business problems with AI.
Recommended PoC topics:
- Internal document search AI (RAG)
- Code review automation
- Test case auto-generation
- Failure log analysis AI
- Customer inquiry auto-classification
Step 4: Secure an AI Project Lead Role
Leverage your PoC success to take on an actual AI project lead role.
[SI Engineer AI Transition Roadmap]
Current State 3 Months 6 Months 9 Months 12 Months
────────────────────────────────────────────────────────────────
Java Developer → Python → RAG Build → Internal PoC → AI Project Lead
Infra Engineer → K8s GPU → MLOps Learn → AI Infra PoC → AI Ops Engineer
PM/PL → AI Literacy → AI Planning → AI PoC PM → AI PM/Consultant
QA Engineer → AI Testing → LLM Eval → AI QA PoC → AI Quality Eng.
5-2. Career Transition Strategy (To AI-Focused Companies)
Path A: SI to AI Startup
Strengths: Domain expertise (finance, manufacturing, retail, etc.) + AI skills
Weaknesses: Startup culture adaptation, fast pace requirements
Preparation:
- AI project portfolio on GitHub (minimum 3 projects)
- Domain-specific AI solution experience
- AI-related blog posts/presentations
Path B: SI to Big Tech Solutions
Strengths: Client-facing experience + technical skills
Weaknesses: High technical interview bar
Target companies and roles:
- AWS Solutions Architect
- Google Cloud Customer Engineer
- Microsoft Azure Specialist
- Salesforce AI Consultant
Path C: SI to AI Consulting
Strengths: Business understanding + project management + AI capability
Weaknesses: English proficiency required (for global firms)
Target companies:
- McKinsey (QuantumBlack)
- BCG (BCG X)
- Accenture AI
- Deloitte AI Center of Excellence
5-3. Strategy by Career Level
Junior (1-3 Years)
Core Goal: Build AI foundation + participate in hybrid projects
Recommended Activities:
├── Python + AI fundamentals learning (online courses)
├── Join/lead internal AI study groups
├── Volunteer for projects with AI components
├── Obtain AI-related certifications
│ ├── AWS Machine Learning Specialty
│ ├── Google Cloud Professional ML Engineer
│ └── Microsoft Azure AI Engineer
└── Publish personal AI projects on GitHub
Mid-Level (3-7 Years)
Core Goal: Lead AI-specific projects + establish specialization
Recommended Activities:
├── AI project sub-lead experience
├── Position yourself as domain AI expert
│ ├── Financial AI (AML, risk, underwriting)
│ ├── Manufacturing AI (predictive maintenance, quality)
│ └── Public Sector AI (administrative automation)
├── Accumulate AI architecture design experience
├── Present at internal/external AI seminars
└── Maintain a tech blog
Senior (7+ Years)
Core Goal: Choose AI Architect / AI PM / AI Consultant track
Track A - AI Architect:
├── Large-scale AI system design
├── Multi-model orchestration
├── AI infrastructure design (GPU clusters, MLOps)
└── Lead technical decision-making
Track B - AI PM/Director:
├── AI project portfolio management
├── AI business development (Pre-sales)
├── C-Level AI strategy advisory
└── AI team building and development
Track C - AI Consultant:
├── AI adoption strategy development
├── AI ROI analysis and business case
├── AI governance framework design
└── Industry-specific AI best practices
6. The Future of SI: 2026-2030 Outlook
6-1. How AI Changes SI
Gartner predicts that by 2028, 80% of coding tasks in SI projects will be automated by AI.
This does not mean SI engineers will disappear. Their roles will transform.
[SI Engineer Role Evolution]
2020-2024: Coder (writing code is the core job)
↓
2025-2027: AI-Augmented Developer (boosting productivity with AI tools)
↓
2028-2030: AI Orchestrator (designing and coordinating AI systems)
6-2. Multi-Agent SI
Future SI projects will have multiple AI agents collaborating to build systems.
[2028 SI Project Vision]
[Requirements AI Agent]
| Analyze client interviews, auto-generate requirements specs
↓
[Design AI Agent]
| Requirements-based architecture design, ERD generation
↓
[Development AI Agent]
| Design-based code auto-generation (80% automated)
↓
[Testing AI Agent]
| Test case generation, automated execution, bug reports
↓
[Deployment AI Agent]
| CI/CD pipeline execution, monitoring setup
↓
[SI Engineer (Human)]
Supervise entire process, client communication, exception handling
= "AI Orchestrator"
6-3. Three Characteristics of Surviving SI Engineers
1. Domain Expertise
Even if AI generates code well, you need financial domain expertise to instruct "Design a system that complies with this financial regulation." AI is a tool, and knowing where to use the tool is a human skill.
2. AI Literacy
The ability to use AI tools proficiently: prompt writing, AI agent design, AI pipeline construction, and so on. This is a different dimension of capability from coding.
3. Client Communication
The area AI cannot replace: understanding client business problems, translating them into AI solutions, and explaining results. The essence of SI is "the bridge between technology and business," and only humans can play this bridge role.
7. 15 Interview Questions + Learning Roadmap
15 Interview Questions
Technical Questions (7)
1. Explain the components of a RAG pipeline and the role of each stage. Key Points: Be able to explain the full flow of document loading - chunking - embedding - vector storage - retrieval - context injection - LLM generation.
2. Explain the principle of similarity search in vector DBs, and the difference between cosine similarity and Euclidean distance. Key Points: Geometric meaning of embedding vectors, HNSW/IVF indexing, curse of dimensionality, etc.
3. Suggest 3 engineering methods to reduce LLM hallucinations. Key Points: RAG, prompt engineering (Chain-of-Thought), temperature adjustment, fact-checking tool integration, structured output.
4. What is a prompt injection attack and how do you defend against it? Key Points: Direct vs indirect injection, input filtering, role separation, output validation, guardrail setup.
5. Design an architecture for adding AI features to an existing Java/Spring-based system. Key Points: Separate AI service as a microservice (Python/FastAPI), API Gateway integration, async processing, error handling.
6. Explain how to efficiently manage GPU resources in AI model serving. Key Points: Batch inference, model quantization, vLLM's PagedAttention, GPU time-sharing, auto-scaling.
7. Explain state management in a multi-agent system using LangGraph. Key Points: Graph-based state machine, node/edge definitions, conditional routing, checkpointing, Human-in-the-Loop.
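The cosine-vs-Euclidean question above can be checked numerically. Cosine similarity ignores vector magnitude while Euclidean distance does not, which is why embedding search typically uses cosine (or normalized dot product):

```python
import math

# Two vectors pointing the same direction but with different magnitudes:
# cosine similarity says "identical", Euclidean distance says "far apart".

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

a = [1.0, 0.0]
b = [3.0, 0.0]  # same direction as a, three times the magnitude

print(cosine_similarity(a, b))  # 1.0 (identical direction)
print(euclidean(a, b))          # 2.0 (magnitude difference matters)
```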
Business/Strategy Questions (4)
1. How would you explain the ROI of an AI project to a client? Key Points: Quantitative metrics (processing time reduction, cost savings) + qualitative metrics (customer satisfaction, employee experience) + phased approach (prove in PoC).
2. How would you respond when a client says "Automate everything with AI"? Key Points: Set realistic expectations, identify AI-suitable areas, present phased adoption roadmap, design Human-in-the-Loop.
3. How would you define performance criteria (SLA) for AI models in an SI project? Key Points: Multi-dimensional SLA including accuracy, response time, availability, cost efficiency. Difference from traditional IT SLA (AI is probabilistic).
4. How can AI be leveraged in legacy system modernization projects? Key Points: AI code analysis (understanding legacy code), automated migration, auto-generated tests, auto-generated documentation.
Career/Culture Questions (4)
1. How have you built your AI capabilities as an SI engineer? Key Points: Specific learning path, real project application experience, continuous learning commitment.
2. How would you handle data quality issues in an AI project? Key Points: Data profiling, cleansing pipeline, client negotiation, data governance framework establishment.
3. What are the differences between traditional SI methodologies (Waterfall/Agile) and AI project methodologies? Key Points: Experiment-based approach, data dependency, uncertainty management, iterative performance improvement.
4. How do you expect the role of SI engineers to change in 5 years? Key Points: From coders to AI orchestrators, increasing importance of domain expertise, AI system design/supervision role.
Learning Roadmap
[SI Engineer AI Transition 12-Month Roadmap]
Phase 1 (Month 1-3): Foundations
├── Python programming
├── OpenAI API usage
├── Prompt engineering
└── Simple chatbot development
Phase 2 (Month 4-6): Practical Basics
├── LangChain/LlamaIndex
├── Vector DB (Milvus/pgvector)
├── RAG pipeline construction
└── AI service API design
Phase 3 (Month 7-9): Advanced
├── Multi-agent (LangGraph)
├── MLOps (MLflow, Docker, K8s)
├── AI security/governance
└── Internal AI PoC execution
Phase 4 (Month 10-12): Expert
├── AI architecture design
├── Cost optimization
├── AI project management
└── AI project lead role
Recommended Learning Resources:
| Category | Resource | Cost |
|---|---|---|
| Python Basics | Python for Everybody (Coursera) | Free |
| AI Basics | DeepLearning.AI - Generative AI with LLMs | Free |
| LangChain | LangChain Official Docs + Tutorials | Free |
| RAG | LlamaIndex Official Guide | Free |
| MLOps | Made With ML - MLOps Course | Free |
| Certification | AWS ML Specialty Prep | Exam fee ~$300 |
| Projects | Kaggle Competitions | Free |
| Community | AI Korea, PyTorch KR, LangChain Korea | Free |
Quiz
Test your understanding of the SI industry's AI transformation with these questions.
Q1. What is the projected global SI market size for 2030, and what are the key growth drivers?
A1. The global SI market is projected to grow from approximately $553 billion in 2025 to $764 billion by 2030 (CAGR 6.7%). The key growth drivers are AI and cloud. AI-related SI projects are expected to account for over 40% by 2026, with enterprise AX (AI Transformation) demand driving market growth.
Q2. What are the 4 layers in Samsung SDS's full-stack AI strategy?
A2. Samsung SDS's full-stack AI strategy consists of 4 layers:
- AI Infrastructure (Layer 1): Samsung Cloud Platform, GPU clusters, National AI Computing Center
- AI Platform (Layer 2): FabriX (AI/data integration platform), Brity Works
- AI Applications (Layer 3): Brity Copilot, ChatGPT Enterprise exclusive Korean reselling
- AI Consulting (Layer 4): Industry-specific AI solutions, AI adoption consulting
The differentiator is covering all areas from infrastructure to consulting, providing one-stop AI services to customers.
Q3. List the components of a RAG pipeline in order, and what is the most challenging stage in SI projects?
A3. RAG pipeline components (in order):
- Document Loading: Collecting enterprise data (PDFs, DBs, wikis, etc.)
- Chunking: Splitting documents into appropriate sizes
- Embedding: Converting text into vectors
- Vector Storage: Indexing in vector DB
- Retrieval: Searching for documents similar to user query
- Context Injection: Including search results in the prompt
- LLM Generation: Generating the final response
The most challenging stage in SI projects is document loading and chunking. Enterprise data exists in various formats (PDF, HWP, images, etc.), has security/access restrictions, and data quality is inconsistent.
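Since A3 names chunking as the hardest stage, here is the baseline technique it refers to: fixed-size splitting with overlap, so sentences cut at a chunk boundary still appear whole in at least one chunk. The sizes and sample text below are illustrative:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list:
    """Split text into fixed-size character chunks with overlap, so content
    cut at a boundary still appears whole in an adjacent chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step back by `overlap` each time
    return chunks

doc = "The annual leave policy grants 15 days per year. Unused days may carry over."
pieces = chunk_text(doc, chunk_size=40, overlap=8)
print(len(pieces))
```

Semantic and recursive chunkers improve on this by splitting at sentence or heading boundaries instead of a fixed character count, but the overlap idea carries over unchanged.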
Q4. What is the biggest mindset shift for traditional SI engineers transitioning to AI SI?
A4. The biggest mindset shift is moving from "deterministic results" to "probabilistic results."
In traditional SI, code produces 100% identical output for 100% identical input. But AI systems are probabilistic:
- Different answers can emerge from the same question
- 100% accuracy cannot be guaranteed
- Performance heavily depends on data quality
Therefore, AI projects require:
- Experiment-based approach (hypothesis - experiment - validation)
- Performance metric definition and continuous improvement
- Project methodologies that manage uncertainty
Q5. How is the role of SI engineers predicted to change by 2028, and what are the 3 key capabilities for survival?
A5. Gartner predicts that 80% of coding tasks in SI projects will be automated by AI by 2028. The SI engineer role will transform from "coder" to "AI orchestrator."
Three key capabilities for survival:
- Domain Expertise: Deep understanding of specific industries (finance, manufacturing, public sector). Even if AI generates code, humans decide "what to build."
- AI Literacy: Proficient use of AI tools. Prompt writing, AI agent design, AI pipeline construction, etc.
- Client Communication: Explaining AI limitations, translating business problems into AI solutions, and managing expectations. The essence of SI is "the bridge between technology and business."
References
- Grand View Research, "System Integration Market Size Report, 2025-2030"
- McKinsey Global Institute, "The State of AI in 2024: Gen AI adoption spikes"
- Gartner, "Top Strategic Technology Trends 2025: Agentic AI"
- Samsung SDS, "Annual Report 2024 - Cloud & AI Business"
- Samsung SDS, "FabriX Platform Documentation"
- Samsung SDS, "Brity Copilot Enterprise Guide"
- LG CNS, "IPO Prospectus 2025 - AI Strategy Overview"
- LG CNS, "DevOn AI Platform Whitepaper"
- SK Group, "AI/Digital Talent Recruitment Plan 2025"
- SK Telecom, "AI Agent Platform 'A.' Technical Overview"
- LangChain Documentation, "RAG Pipeline Best Practices"
- LlamaIndex Documentation, "Enterprise RAG Architecture"
- Gartner, "Predicts 2025: AI Will Transform 80% of SI Coding Tasks by 2028"
- IDC Korea, "Korea IT Services Market Forecast 2025-2029"
- KOSA (Korea Software Industry Association), "2025 Software Industry Survey"
- Deloitte, "2025 Global AI Transformation Survey"
- Accenture, "Total Enterprise Reinvention with AI"
- Forbes, "How Korean Tech Giants Are Leading Asia's AI Transformation"