CI/CD with Jenkins: Automate Your Continuous Integration and Deployment Pipelines

Jenkins remains one of the most robust and flexible CI/CD platforms in the DevOps ecosystem, powering thousands of organizations from startups to global enterprises. With its extensible architecture, broad plugin ecosystem, and pipeline-as-code capabilities, Jenkins continues to evolve to meet modern continuous delivery demands.
This guide takes you from the fundamentals to advanced enterprise implementations, providing the practical knowledge needed to design, implement, and scale robust CI/CD pipelines that increase delivery speed without compromising quality.
Jenkins Fundamentals in the Modern CI/CD Ecosystem
Architecture and Core Components
Jenkins runs on a controller-agent architecture (historically called master-agent) that enables horizontal scaling and workload distribution:
Main Components:
- Jenkins Controller: Central server that coordinates builds, serves the UI, and stores configuration
- Jenkins Agents: Worker nodes that execute specific jobs
- Workspace: Temporary directory on each agent where builds run
- Build Queue: Queue of jobs waiting to execute
- Plugin Manager: Extension system that expands core functionality
Advantages of the Distributed Architecture:
- Horizontal scaling through multiple agents
- Build isolation by technology type or environment
- Optimal use of compute resources
- Resilience through agent redundancy
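The controller/agent/build-queue relationship above can be sketched as a toy scheduler. This is an illustrative Python model, not Jenkins API code; the agent names and labels are hypothetical.

```python
# Toy model of a Jenkins controller dispatching queued builds to labeled agents.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    labels: set
    executors: int
    busy: int = 0

    def can_run(self, required_label: str) -> bool:
        # An agent matches if it carries the label and has a free executor
        return required_label in self.labels and self.busy < self.executors

@dataclass
class Controller:
    agents: list
    queue: deque = field(default_factory=deque)

    def submit(self, job: str, label: str) -> None:
        self.queue.append((job, label))

    def dispatch(self) -> dict:
        """Assign queued jobs to the first matching idle agent; others stay queued."""
        assignments = {}
        pending = deque()
        while self.queue:
            job, label = self.queue.popleft()
            agent = next((a for a in self.agents if a.can_run(label)), None)
            if agent:
                agent.busy += 1
                assignments[job] = agent.name
            else:
                pending.append((job, label))  # remains in the build queue
        self.queue = pending
        return assignments

controller = Controller(agents=[
    Agent("linux-1", {"linux", "docker"}, executors=2),
    Agent("win-1", {"windows"}, executors=1),
])
controller.submit("build-api", "docker")
controller.submit("build-ui", "windows")
controller.submit("build-db", "macos")  # no matching agent: stays queued
result = controller.dispatch()
```

Real Jenkins adds priorities, quiet periods, and cloud provisioning on top of this basic matching, but the label-based routing is the same idea.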
Installation and Enterprise Configuration
Docker Installation for Development
# Dockerfile.jenkins
FROM jenkins/jenkins:lts-alpine
# Install as root to manage dependencies
USER root
# Install system dependencies
RUN apk add --no-cache \
docker \
docker-compose \
kubectl \
curl \
jq \
git \
openssh-client
# Install essential plugins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt
# Initial security configuration
COPY jenkins.yaml /var/jenkins_home/jenkins.yaml
COPY jobs/ /var/jenkins_home/jobs/
# Switch back to the jenkins user
USER jenkins
# Tune the JVM for performance
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false -Xmx2g -XX:+UseG1GC"
ENV JENKINS_OPTS="--httpPort=8080 --httpsPort=8443"
EXPOSE 8080 8443 50000
# plugins.txt - Essential plugin list
blueocean:1.25.2
pipeline-stage-view:2.21
docker-pipeline:1.26
kubernetes:1.30.1
git:4.8.2
github:1.34.1
credentials:2.6.1
workflow-aggregator:2.6
pipeline-utility-steps:2.12.0
slack:2.45
junit:1.54
jacoco:3.3.0
sonarqube:2.13.1
ansible:1.1
terraform:1.0.10
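Since the image bakes plugins from this pinned list, it is worth validating the file before building. A minimal sketch in Python that parses the `name:version` format used above (the sample content is hypothetical):

```python
# Parse a plugins.txt in the "name:version" format used by jenkins-plugin-cli.
def parse_plugins(text: str) -> dict:
    """Return {plugin_name: version}, ignoring blank lines and comments."""
    plugins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition(":")
        # jenkins-plugin-cli treats a missing version as "latest"
        plugins[name] = version or "latest"
    return plugins

sample = """
# core pipeline plugins
workflow-aggregator:2.6
git:4.8.2
blueocean:1.25.2
"""
parsed = parse_plugins(sample)
```

A pre-build check like this catches malformed lines early, before a slow `docker build` fails on them.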
Production Deployment on Kubernetes
# jenkins-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
name: jenkins
labels:
name: jenkins
---
# jenkins-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: jenkins
namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: jenkins
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: jenkins
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: jenkins
subjects:
- kind: ServiceAccount
name: jenkins
namespace: jenkins
---
# jenkins-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: jenkins
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
maxSurge: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
serviceAccountName: jenkins
securityContext:
fsGroup: 1000
runAsUser: 1000
initContainers:
- name: volume-mount-hack
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /var/jenkins_home"]
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
containers:
- name: jenkins
image: jenkins/jenkins:lts
ports:
- containerPort: 8080
- containerPort: 50000
resources:
limits:
memory: "4Gi"
cpu: "2000m"
requests:
memory: "2Gi"
cpu: "1000m"
env:
- name: JAVA_OPTS
value: "-Djenkins.install.runSetupWizard=false -Xmx2g -XX:+UseG1GC"
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
livenessProbe:
httpGet:
path: /login
port: 8080
initialDelaySeconds: 60
timeoutSeconds: 10
failureThreshold: 12
readinessProbe:
httpGet:
path: /login
port: 8080
initialDelaySeconds: 60
timeoutSeconds: 10
failureThreshold: 3
volumes:
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-pvc
namespace: jenkins
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
storageClassName: gp2
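One sanity check worth automating for manifests like the one above: requests must never exceed limits, or the scheduler behaves unexpectedly. An illustrative Python sketch that parses the Kubernetes quantity suffixes used in this guide (`Gi`, `Mi`, `m`) and validates the Jenkins container's resources block:

```python
# Validate that container resource requests do not exceed limits,
# handling only the quantity suffixes that appear in this guide.
def parse_quantity(q: str) -> float:
    """Convert a k8s quantity to a base unit (bytes for memory, cores for CPU)."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "m": 0.001}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor
    return float(q)

def requests_within_limits(resources: dict) -> bool:
    return all(
        parse_quantity(resources["requests"][key]) <= parse_quantity(resources["limits"][key])
        for key in resources["requests"]
    )

# The values from the Deployment above
jenkins_resources = {
    "limits": {"memory": "4Gi", "cpu": "2000m"},
    "requests": {"memory": "2Gi", "cpu": "1000m"},
}
ok = requests_within_limits(jenkins_resources)
```

For production use, the `kubernetes` Python client and tools like `kubeconform` cover far more cases; this only illustrates the invariant.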
Pipeline as Code with a Jenkinsfile
Advanced Declarative Pipeline
// Jenkinsfile - Complete pipeline for a microservice application
pipeline {
agent none
options {
buildDiscarder(logRotator(
numToKeepStr: '10',
daysToKeepStr: '30'
))
timeout(time: 30, unit: 'MINUTES')
skipStagesAfterUnstable()
parallelsAlwaysFailFast()
timestamps()
}
environment {
// Global variables
REGISTRY_URL = 'your-registry.com'
IMAGE_NAME = 'myapp'
KUBECONFIG = credentials('kubeconfig-prod')
DOCKER_REGISTRY_CREDS = credentials('docker-registry')
SONAR_TOKEN = credentials('sonarqube-token')
// Dynamic variables
BUILD_VERSION = "${env.BUILD_NUMBER}-${env.GIT_COMMIT.take(7)}"
IMAGE_TAG = "${REGISTRY_URL}/${IMAGE_NAME}:${BUILD_VERSION}"
}
stages {
stage('Checkout & Environment Setup') {
agent {
kubernetes {
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: git
image: alpine/git:latest
command:
- cat
tty: true
"""
}
}
steps {
container('git') {
checkout scm
script {
// Choose the deployment strategy based on the branch
env.DEPLOY_ENVIRONMENT = env.BRANCH_NAME == 'main' ? 'production' :
env.BRANCH_NAME == 'develop' ? 'staging' : 'development'
// Route notifications according to criticality
env.NOTIFICATION_CHANNELS = env.DEPLOY_ENVIRONMENT == 'production' ?
'#alerts,#devops' : '#development'
}
}
// Notify pipeline start
slackSend(
channel: env.NOTIFICATION_CHANNELS,
color: '#FFFF00',
message: ":construction: Pipeline started for ${env.JOB_NAME} - ${env.BUILD_VERSION}"
)
}
}
stage('Code Quality & Security Analysis') {
parallel {
stage('Static Code Analysis') {
agent {
kubernetes {
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: sonarqube
image: sonarsource/sonar-scanner-cli:latest
command:
- cat
tty: true
"""
}
}
steps {
container('sonarqube') {
// withSonarQubeEnv injects SONAR_HOST_URL and registers the analysis so waitForQualityGate can find it
withSonarQubeEnv('SonarQube') {
script {
def sonarArgs = [
"-Dsonar.projectKey=${env.JOB_NAME}",
"-Dsonar.sources=src/",
"-Dsonar.host.url=${SONAR_HOST_URL}",
"-Dsonar.login=${SONAR_TOKEN}",
"-Dsonar.branch.name=${env.BRANCH_NAME}"
]
// Add coverage analysis if a report exists
if (fileExists('coverage/lcov.info')) {
sonarArgs.add("-Dsonar.javascript.lcov.reportPaths=coverage/lcov.info")
}
sh "sonar-scanner ${sonarArgs.join(' ')}"
}
}
}
// Wait for the quality gate result
timeout(time: 5, unit: 'MINUTES') {
waitForQualityGate abortPipeline: env.DEPLOY_ENVIRONMENT == 'production'
}
}
}
stage('Security Scanning') {
agent {
kubernetes {
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: security-scanner
image: securecodewarrior/docker-security-scanner:latest
command:
- cat
tty: true
"""
}
}
steps {
container('security-scanner') {
// Scan for vulnerable dependencies
sh '''
if [ -f "package.json" ]; then
npm audit --audit-level=moderate --json > npm-audit.json || true
fi
if [ -f "requirements.txt" ]; then
safety check --json -r requirements.txt > safety-report.json || true
fi
if [ -f "go.mod" ]; then
nancy sleuth --json > nancy-report.json || true
fi
'''
// SAST analysis with Semgrep
sh '''
semgrep --config=auto --json --output=semgrep-results.json . || true
'''
}
// Archive security reports
archiveArtifacts artifacts: '*-audit.json,*-report.json,semgrep-results.json',
allowEmptyArchive: true
}
}
}
}
stage('Build & Test') {
parallel {
stage('Unit Tests') {
agent {
kubernetes {
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: node
image: node:16-alpine
command:
- cat
tty: true
"""
}
}
steps {
container('node') {
sh 'npm ci'
sh 'npm run test:unit -- --coverage --ci'
// Test results are published by the junit step in the post block below
// Publish the coverage report (publishHTML comes from the HTML Publisher plugin)
publishHTML(target: [
reportName: 'Coverage Report',
reportDir: 'coverage/lcov-report',
reportFiles: 'index.html',
keepAll: true,
allowMissing: true
])
}
}
post {
always {
junit 'test-results.xml'
}
}
}
stage('Integration Tests') {
when {
anyOf {
branch 'main'
branch 'develop'
changeRequest target: 'main'
}
}
agent {
kubernetes {
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: node
image: node:16-alpine
command:
- cat
tty: true
- name: redis
image: redis:alpine
- name: postgres
image: postgres:13-alpine
env:
- name: POSTGRES_PASSWORD
value: testpassword
- name: POSTGRES_DB
value: testdb
"""
}
}
steps {
container('node') {
sh '''
# Wait for the services to be ready
sleep 10
# Configure test variables
export DATABASE_URL="postgresql://postgres:testpassword@localhost:5432/testdb"
export REDIS_URL="redis://localhost:6379"
# Run the integration tests
npm run test:integration
'''
}
}
}
}
}
stage('Build Docker Image') {
agent {
kubernetes {
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: docker
image: docker:latest
command:
- cat
tty: true
volumeMounts:
- name: docker-sock
mountPath: /var/run/docker.sock
volumes:
- name: docker-sock
hostPath:
path: /var/run/docker.sock
"""
}
}
steps {
container('docker') {
script {
// Log in to the registry
sh "echo '${DOCKER_REGISTRY_CREDS_PSW}' | docker login ${REGISTRY_URL} -u '${DOCKER_REGISTRY_CREDS_USR}' --password-stdin"
// Multi-stage build with cache
sh """
docker build \
--target production \
--build-arg BUILD_DATE=\$(date -u +"%Y-%m-%dT%H:%M:%SZ") \
--build-arg VCS_REF=${env.GIT_COMMIT} \
--build-arg VERSION=${BUILD_VERSION} \
--cache-from ${REGISTRY_URL}/${IMAGE_NAME}:latest \
-t ${IMAGE_TAG} \
-t ${REGISTRY_URL}/${IMAGE_NAME}:latest \
.
"""
// Security scan of the image
sh """
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
aquasec/trivy:latest image --format json --output trivy-report.json ${IMAGE_TAG} || true
"""
// Push the image
sh "docker push ${IMAGE_TAG}"
// Push latest only for the main branch
if (env.BRANCH_NAME == 'main') {
sh "docker push ${REGISTRY_URL}/${IMAGE_NAME}:latest"
}
}
}
// Archive the security report
archiveArtifacts artifacts: 'trivy-report.json', allowEmptyArchive: true
}
}
stage('Deploy') {
when {
anyOf {
branch 'main'
branch 'develop'
}
}
agent {
kubernetes {
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: kubectl
image: bitnami/kubectl:latest
command:
- cat
tty: true
- name: helm
image: alpine/helm:latest
command:
- cat
tty: true
"""
}
}
steps {
container('helm') {
script {
// Configure kubeconfig
sh 'mkdir -p ~/.kube && cp $KUBECONFIG ~/.kube/config'
// Pick environment-specific values
def helmValues = ""
if (env.DEPLOY_ENVIRONMENT == 'production') {
helmValues = "values-production.yaml"
} else if (env.DEPLOY_ENVIRONMENT == 'staging') {
helmValues = "values-staging.yaml"
} else {
helmValues = "values-development.yaml"
}
// Deploy with Helm
sh """
helm upgrade --install \
${IMAGE_NAME}-${env.DEPLOY_ENVIRONMENT} \
./helm/${IMAGE_NAME} \
--namespace ${env.DEPLOY_ENVIRONMENT} \
--create-namespace \
--values helm/${IMAGE_NAME}/${helmValues} \
--set image.tag=${BUILD_VERSION} \
--set image.repository=${REGISTRY_URL}/${IMAGE_NAME} \
--wait \
--timeout 10m
"""
}
}
// The alpine/helm image does not bundle kubectl, so verification runs in the kubectl container
container('kubectl') {
script {
sh 'mkdir -p ~/.kube && cp $KUBECONFIG ~/.kube/config'
// Verify the deployment
sh """
kubectl rollout status deployment/${IMAGE_NAME} \
--namespace ${env.DEPLOY_ENVIRONMENT} \
--timeout=300s
"""
// Run smoke tests
if (env.DEPLOY_ENVIRONMENT != 'development') {
sh """
kubectl run smoke-test-${BUILD_NUMBER} \
--image=curlimages/curl:latest \
--restart=Never \
--namespace ${env.DEPLOY_ENVIRONMENT} \
--command -- /bin/sh -c \
"curl -f http://${IMAGE_NAME}:8080/health || exit 1"
kubectl wait --for=condition=complete \
--timeout=60s \
pod/smoke-test-${BUILD_NUMBER} \
--namespace ${env.DEPLOY_ENVIRONMENT}
kubectl delete pod smoke-test-${BUILD_NUMBER} \
--namespace ${env.DEPLOY_ENVIRONMENT}
"""
}
}
}
}
}
stage('Post-Deploy Validation') {
when {
anyOf {
branch 'main'
branch 'develop'
}
}
parallel {
stage('Performance Testing') {
when {
branch 'main'
}
agent {
kubernetes {
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: k6
image: loadimpact/k6:latest
command:
- cat
tty: true
"""
}
}
steps {
container('k6') {
sh '''
k6 run --out json=performance-results.json \
--summary-export=performance-summary.json \
tests/performance/load-test.js
'''
}
// Archive the performance results
archiveArtifacts artifacts: 'performance-*.json', allowEmptyArchive: true
// Analyze the results and fail if the SLA is not met
script {
def performanceData = readJSON file: 'performance-summary.json'
def avgResponseTime = performanceData.metrics.http_req_duration.avg
def errorRate = performanceData.metrics.http_req_failed.rate
if (avgResponseTime > 2000 || errorRate > 0.01) {
error("Performance tests failed: avg response time: ${avgResponseTime}ms, error rate: ${errorRate}")
}
}
}
}
stage('Security Validation') {
agent {
kubernetes {
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: zap
image: owasp/zap2docker-stable:latest
command:
- cat
tty: true
"""
}
}
steps {
container('zap') {
sh '''
# Run the OWASP ZAP baseline scan
zap-baseline.py \
-t http://${IMAGE_NAME}.${DEPLOY_ENVIRONMENT}.svc.cluster.local:8080 \
-J zap-report.json \
-w zap-report.md || true
'''
}
// Archive the security report
archiveArtifacts artifacts: 'zap-report.*', allowEmptyArchive: true
}
}
}
}
}
post {
always {
// Clean the workspace
cleanWs()
}
success {
slackSend(
channel: env.NOTIFICATION_CHANNELS,
color: '#00FF00',
message: ":white_check_mark: Pipeline completed successfully for ${env.JOB_NAME} - ${env.BUILD_VERSION}"
)
// Production-specific notification
script {
if (env.DEPLOY_ENVIRONMENT == 'production') {
slackSend(
channel: '#releases',
color: '#00FF00',
message: ":rocket: ${env.JOB_NAME} v${env.BUILD_VERSION} deployed to PRODUCTION"
)
}
}
}
failure {
slackSend(
channel: env.NOTIFICATION_CHANNELS,
color: '#FF0000',
message: ":x: Pipeline failed for ${env.JOB_NAME} - ${env.BUILD_VERSION}\nCheck: ${env.BUILD_URL}"
)
// For production, also notify by email
script {
if (env.DEPLOY_ENVIRONMENT == 'production') {
emailext(
subject: "URGENT: Production deployment failed - ${env.JOB_NAME}",
body: "Production deployment failed. Please check: ${env.BUILD_URL}",
to: "${env.PRODUCTION_ALERT_EMAILS}"
)
}
}
}
unstable {
slackSend(
channel: env.NOTIFICATION_CHANNELS,
color: '#FFAA00',
message: ":warning: Pipeline completed with warnings for ${env.JOB_NAME} - ${env.BUILD_VERSION}"
)
}
}
}
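The performance gate in the Post-Deploy Validation stage (fail when average response time exceeds 2000 ms or error rate exceeds 1%) can be expressed as a standalone check. A Python sketch of the same logic, assuming the k6 summary keeps the `http_req_duration`/`http_req_failed` metric names used in the Jenkinsfile; the sample values are invented:

```python
# Evaluate a k6 summary against the SLA used by the pipeline's performance gate.
import json

def check_performance_sla(summary_json: str,
                          max_avg_ms: float = 2000,
                          max_error_rate: float = 0.01) -> list:
    """Return a list of SLA violations (empty means the gate passes)."""
    metrics = json.loads(summary_json)["metrics"]
    violations = []
    avg = metrics["http_req_duration"]["avg"]
    rate = metrics["http_req_failed"]["rate"]
    if avg > max_avg_ms:
        violations.append(f"avg response time {avg}ms exceeds {max_avg_ms}ms")
    if rate > max_error_rate:
        violations.append(f"error rate {rate:.2%} exceeds {max_error_rate:.2%}")
    return violations

# Hypothetical summary: latency breaches the SLA, error rate does not
summary = json.dumps({
    "metrics": {
        "http_req_duration": {"avg": 2350.4},
        "http_req_failed": {"rate": 0.004},
    }
})
violations = check_performance_sla(summary)
```

Keeping the gate as a pure function like this makes the thresholds easy to unit-test outside Jenkins before wiring them into the pipeline.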
Integration with Kubernetes and the Cloud Native Ecosystem
Jenkins Operator for Declarative Management
# jenkins-operator.yaml
apiVersion: jenkins.io/v1alpha2
kind: Jenkins
metadata:
name: jenkins-production
namespace: jenkins
spec:
configurationAsCode:
configurations:
- name: jenkins-configuration
jenkinsAPISettings:
authorizationStrategy: serviceAccount
master:
basePlugins:
- name: kubernetes
version: "1.30.1"
- name: workflow-job
version: "2.42"
- name: workflow-aggregator
version: "2.6"
- name: git
version: "4.8.2"
- name: job-dsl
version: "1.77"
- name: configuration-as-code
version: "1.51"
- name: kubernetes-credentials-provider
version: "0.18"
disableCSRFProtection: false
containers:
- name: jenkins-master
image: jenkins/jenkins:lts
imagePullPolicy: Always
livenessProbe:
failureThreshold: 12
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 80
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
failureThreshold: 3
httpGet:
path: /login
port: http
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
cpu: 2
memory: 4Gi
requests:
cpu: 1
memory: 2Gi
seedJobs:
- id: jenkins-configuration
targets: "cicd/jobs/*.groovy"
description: "Jenkins DSL jobs"
repositoryBranch: main
repositoryUrl: https://github.com/your-org/jenkins-config.git
Configuration as Code (JCasC)
# jenkins.yaml - Full configuration as code
jenkins:
systemMessage: "Jenkins Production Environment - Managed by Configuration as Code"
numExecutors: 0
clouds:
- kubernetes:
name: "kubernetes-cloud"
serverUrl: "https://kubernetes.default.svc.cluster.local:443"
serverCertificate: ${KUBERNETES_SERVER_CERTIFICATE}
skipTlsVerify: false
namespace: "jenkins"
credentialsId: "k8s-service-account"
jenkinsUrl: "http://jenkins:8080"
jenkinsTunnel: "jenkins:50000"
connectTimeout: 0
readTimeout: 0
containerCapStr: "100"
maxRequestsPerHostStr: "64"
retentionTimeout: 5
templates:
- name: "maven-build-pod"
label: "maven"
nodeUsageMode: NORMAL
containers:
- name: "maven"
image: "maven:3.8-openjdk-11"
ttyEnabled: true
command: "/bin/sh -c"
args: "cat"
resourceRequestMemory: "1Gi"
resourceLimitMemory: "2Gi"
resourceRequestCpu: "500m"
resourceLimitCpu: "1"
- name: "docker"
image: "docker:latest"
ttyEnabled: true
command: "/bin/sh -c"
args: "cat"
resourceRequestMemory: "512Mi"
resourceLimitMemory: "1Gi"
resourceRequestCpu: "250m"
resourceLimitCpu: "500m"
volumes:
- hostPathVolume:
hostPath: "/var/run/docker.sock"
mountPath: "/var/run/docker.sock"
- name: "node-build-pod"
label: "nodejs"
nodeUsageMode: NORMAL
containers:
- name: "node"
image: "node:16-alpine"
ttyEnabled: true
command: "/bin/sh -c"
args: "cat"
resourceRequestMemory: "1Gi"
resourceLimitMemory: "2Gi"
resourceRequestCpu: "500m"
resourceLimitCpu: "1"
securityRealm:
ldap:
configurations:
- server: "${LDAP_SERVER}"
rootDN: "${LDAP_ROOT_DN}"
inhibitInferRootDN: false
userSearchBase: "ou=users"
userSearch: "uid={0}"
groupSearchBase: "ou=groups"
managerDN: "${LDAP_MANAGER_DN}"
managerPasswordSecret: "${LDAP_MANAGER_PASSWORD}"
displayNameAttributeName: "cn"
mailAddressAttributeName: "mail"
authorizationStrategy:
projectMatrix:
permissions:
- "Overall/Administer:devops-admins"
- "Overall/Read:authenticated"
- "Job/Build:developers"
- "Job/Read:developers"
- "Job/Workspace:developers"
- "Job/Configure:senior-developers"
- "Job/Create:senior-developers"
crumbIssuer:
standard:
excludeClientIPFromCrumb: false
remotingSecurity:
enabled: true
security:
globalJobDslSecurityConfiguration:
useScriptSecurity: true
scriptApproval:
approvedSignatures:
- "method groovy.json.JsonSlurperClassic parseText java.lang.String"
- "method java.lang.String replaceAll java.lang.String java.lang.String"
credentials:
system:
domainCredentials:
- credentials:
- usernamePassword:
id: "docker-registry"
username: "${DOCKER_REGISTRY_USERNAME}"
password: "${DOCKER_REGISTRY_PASSWORD}"
description: "Docker Registry Credentials"
- string:
id: "sonarqube-token"
secret: "${SONARQUBE_TOKEN}"
description: "SonarQube Analysis Token"
- kubernetesServiceAccount:
id: "k8s-service-account"
description: "Kubernetes Service Account"
tool:
git:
installations:
- name: "Default"
home: "git"
maven:
installations:
- name: "Maven-3.8"
properties:
- installSource:
installers:
- maven:
id: "3.8.6"
nodejs:
installations:
- name: "NodeJS-16"
properties:
- installSource:
installers:
- nodeJSInstaller:
id: "16.17.0"
npmPackagesRefreshHours: 72
unclassified:
location:
url: "https://jenkins.company.com"
adminAddress: "devops@company.com"
mailer:
smtpHost: "${SMTP_HOST}"
smtpPort: ${SMTP_PORT}
charset: "UTF-8"
useSsl: true
username: "${SMTP_USERNAME}"
password: "${SMTP_PASSWORD}"
slackNotifier:
botUser: true
teamDomain: "${SLACK_TEAM_DOMAIN}"
token: "${SLACK_TOKEN}"
room: "#devops"
sonarGlobalConfiguration:
installations:
- name: "SonarQube"
serverUrl: "${SONARQUBE_SERVER_URL}"
credentialsId: "sonarqube-token"
jobs:
- script: |
multibranchPipelineJob('microservice-template') {
displayName('Microservice CI/CD Template')
description('Template pipeline for microservices')
branchSources {
git {
id('microservice-template')
remote('https://github.com/company/microservice-template.git')
credentialsId('github-credentials')
}
}
factory {
workflowBranchProjectFactory {
scriptPath('Jenkinsfile')
}
}
triggers {
periodic(5)
}
orphanedItemStrategy {
discardOldItems {
numToKeep(10)
}
}
}
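JCasC resolves `${VARIABLE}` placeholders (like `${SONARQUBE_TOKEN}` above) from the environment at startup. A minimal Python sketch of that substitution, useful for linting a `jenkins.yaml` before deploying it; note that real JCasC also supports file-based and vault resolvers, which this does not model:

```python
# Resolve ${NAME} placeholders the way a pre-deployment lint might,
# leaving unknown variables intact so they can be reported.
import re

def resolve_placeholders(yaml_text: str, env: dict) -> str:
    """Replace ${NAME} tokens with values from env; keep unresolved ones as-is."""
    def replace(match):
        name = match.group(1)
        return env.get(name, match.group(0))
    return re.sub(r"\$\{([A-Z0-9_]+)\}", replace, yaml_text)

snippet = 'serverUrl: "${SONARQUBE_SERVER_URL}"\nroom: "${SLACK_ROOM}"'
resolved = resolve_placeholders(snippet, {"SONARQUBE_SERVER_URL": "https://sonar.company.com"})
```

Running such a check in CI (diffing for leftover `${...}` tokens) catches missing secrets before Jenkins fails to start with a cryptic JCasC error.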
Enterprise Security and Compliance
Advanced Security Configuration
// security-config.groovy - Security hardening script
import jenkins.model.*
import hudson.security.*
import jenkins.security.s2m.AdminWhitelistRule
import org.jenkinsci.plugins.scriptsecurity.scripts.*
def instance = Jenkins.getInstance()
// Enable CSRF protection
instance.setCrumbIssuer(new DefaultCrumbIssuer(true))
// Configure agent-to-controller access control
instance.getInjector().getInstance(AdminWhitelistRule.class).setMasterKillSwitch(false)
// Configure script security
ScriptApproval scriptApproval = ScriptApproval.get()
// Pre-approve common safe script signatures
def approvedSignatures = [
'method groovy.json.JsonSlurperClassic parseText java.lang.String',
'method java.lang.String toLowerCase',
'method java.lang.String toUpperCase',
'method java.util.Date getTime',
'staticMethod java.lang.System currentTimeMillis'
]
approvedSignatures.each { signature ->
scriptApproval.approveSignature(signature)
}
// Configure the matrix authorization strategy
def strategy = new ProjectMatrixAuthorizationStrategy()
// Full administrators
strategy.add(Jenkins.ADMINISTER, 'devops-admins')
// Developers - limited permissions
strategy.add(Jenkins.READ, 'developers')
strategy.add(Item.BUILD, 'developers')
strategy.add(Item.READ, 'developers')
strategy.add(Item.WORKSPACE, 'developers')
// Senior developers - configuration permissions
strategy.add(Item.CONFIGURE, 'senior-developers')
strategy.add(Item.CREATE, 'senior-developers')
strategy.add(Item.DELETE, 'senior-developers')
instance.setAuthorizationStrategy(strategy)
// Save the configuration
instance.save()
println "Configuración de seguridad aplicada exitosamente"
Auditing and Compliance
// audit-pipeline.groovy - Compliance audit pipeline
pipeline {
agent {
kubernetes {
yaml """
apiVersion: v1
kind: Pod
spec:
containers:
- name: compliance-tools
image: your-registry/compliance-tools:latest
command:
- cat
tty: true
"""
}
}
triggers {
cron('H 2 * * 1') // Run weekly
}
stages {
stage('Security Compliance Audit') {
steps {
container('compliance-tools') {
script {
// Audit the Jenkins configuration
def auditResults = [:]
// 1. Verify CSRF protection is enabled
def jenkins = Jenkins.getInstance()
auditResults['csrf_enabled'] = jenkins.getCrumbIssuer() != null
// 2. Check the security configuration
def authStrategy = jenkins.getAuthorizationStrategy()
auditResults['auth_strategy'] = authStrategy.getClass().getSimpleName()
// 3. Audit installed plugins
def pluginManager = jenkins.getPluginManager()
def plugins = pluginManager.getPlugins()
def vulnerablePlugins = []
plugins.each { plugin ->
// Check against a database of known vulnerabilities
if (checkPluginVulnerabilities(plugin.getShortName(), plugin.getVersion())) {
vulnerablePlugins.add([
name: plugin.getShortName(),
version: plugin.getVersion()
])
}
}
auditResults['vulnerable_plugins'] = vulnerablePlugins
// 4. Check the script approval backlog
def scriptApproval = ScriptApproval.get()
auditResults['pending_scripts'] = scriptApproval.getPendingScripts().size()
// Generate the report
generateComplianceReport(auditResults)
// Fail if there are critical vulnerabilities
if (vulnerablePlugins.size() > 0) {
error("Se encontraron plugins con vulnerabilidades conocidas")
}
}
}
}
}
stage('Access Control Audit') {
steps {
container('compliance-tools') {
script {
// Audit user permissions
def jenkins = Jenkins.getInstance()
def authStrategy = jenkins.getAuthorizationStrategy()
if (authStrategy instanceof ProjectMatrixAuthorizationStrategy) {
// getGrantedPermissions() returns a Map<Permission, Set<String>> of sids per permission
def grantedPermissions = authStrategy.getGrantedPermissions()
def userPermissions = [:]
grantedPermissions.each { permission, sids ->
sids.each { sid ->
if (!userPermissions.containsKey(sid)) {
userPermissions[sid] = []
}
userPermissions[sid].add(permission.name)
}
}
// Enforce the principle of least privilege
userPermissions.each { user, perms ->
if (perms.contains('Administer') && perms.size() > 1) {
echo "WARNING: User ${user} holds redundant permissions alongside admin"
}
}
// Generate the permissions report
writeFile file: 'permissions-report.json',
text: groovy.json.JsonOutput.toJson(userPermissions)
}
}
}
}
}
stage('Job Configuration Audit') {
steps {
container('compliance-tools') {
script {
// Audit job configurations
def jenkins = Jenkins.getInstance()
def jobs = jenkins.getAllItems(Job.class)
def jobAudit = []
jobs.each { job ->
def jobInfo = [
name: job.getFullName(),
lastBuild: job.getLastBuild()?.getNumber() ?: 'N/A',
configured: job.getConfigFile().exists(),
disabled: job.isDisabled()
]
// Flag jobs that embed inline scripts (security risk)
if (job.getConfigFile().exists()) {
def config = job.getConfigFile().asString()
if (config.contains('<script>') || config.contains('groovy.execute')) {
jobInfo['security_risk'] = 'Contains inline scripts'
}
}
jobAudit.add(jobInfo)
}
writeFile file: 'jobs-audit.json',
text: groovy.json.JsonOutput.toJson(jobAudit)
}
}
}
}
}
post {
always {
archiveArtifacts artifacts: '*.json', allowEmptyArchive: true
// Email the report to the compliance team
emailext(
subject: "Jenkins Compliance Audit Report - ${new Date()}",
body: "Compliance audit completed. See attached reports.",
attachmentsPattern: '*.json',
to: '${COMPLIANCE_TEAM_EMAIL}'
)
}
}
}
def checkPluginVulnerabilities(pluginName, pluginVersion) {
// Check against a vulnerability database,
// e.g. the Jenkins Security Advisory feed
return false // Placeholder
}
def generateComplianceReport(auditResults) {
def report = [
timestamp: new Date().toString(),
jenkins_version: Jenkins.VERSION,
audit_results: auditResults,
compliance_score: calculateComplianceScore(auditResults)
]
writeFile file: 'compliance-report.json',
text: groovy.json.JsonOutput.toJson(report)
}
def calculateComplianceScore(results) {
def score = 100
if (!results['csrf_enabled']) score -= 20
if (results['vulnerable_plugins'].size() > 0) score -= 30
if (results['pending_scripts'] > 5) score -= 10
return Math.max(score, 0)
}
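The scoring rules in `calculateComplianceScore` are easy to port and unit-test outside Jenkins. The same logic in Python (sample audit results are invented):

```python
# Same rules as calculateComplianceScore above: start at 100 and deduct
# 20 for missing CSRF, 30 for vulnerable plugins, 10 for a script backlog.
def compliance_score(results: dict) -> int:
    score = 100
    if not results.get("csrf_enabled", False):
        score -= 20
    if len(results.get("vulnerable_plugins", [])) > 0:
        score -= 30
    if results.get("pending_scripts", 0) > 5:
        score -= 10
    return max(score, 0)

healthy = {"csrf_enabled": True, "vulnerable_plugins": [], "pending_scripts": 0}
risky = {"csrf_enabled": False, "vulnerable_plugins": [{"name": "old-plugin"}], "pending_scripts": 9}
```

Keeping the scoring pure makes it trivial to tune the weights and verify edge cases (such as the floor at 0) without touching the audit pipeline.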
Monitoring and Observability
Configuring Metrics with Prometheus
# jenkins-monitoring.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: jenkins-prometheus-config
namespace: jenkins
data:
prometheus.yml: |
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'jenkins'
static_configs:
- targets: ['jenkins:8080']
metrics_path: /prometheus/
scrape_interval: 5s
- job_name: 'jenkins-jobs'
static_configs:
- targets: ['jenkins:8080']
metrics_path: /prometheus/
scrape_interval: 30s
params:
job: ['.*']
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins-prometheus
namespace: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins-prometheus
template:
metadata:
labels:
app: jenkins-prometheus
spec:
containers:
- name: prometheus
image: prom/prometheus:latest
ports:
- containerPort: 9090
volumeMounts:
- name: prometheus-config
mountPath: /etc/prometheus
- name: prometheus-storage
mountPath: /prometheus
command:
- '/bin/prometheus'
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/etc/prometheus/console_libraries'
- '--web.console.templates=/etc/prometheus/consoles'
- '--storage.tsdb.retention.time=30d'
- '--web.enable-lifecycle'
volumes:
- name: prometheus-config
configMap:
name: jenkins-prometheus-config
- name: prometheus-storage
emptyDir: {}
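The dashboard in the next section computes build success rate as `rate(success) / rate(total) * 100`. What that PromQL does over a window can be sketched with two samples of the counters (the sample values are invented; metric names follow the Prometheus plugin's naming used in this guide):

```python
# Derive the build success rate between two samples of monotonically
# increasing counters, mirroring the PromQL rate()/rate() expression.
def success_rate(success_t0: int, success_t1: int,
                 total_t0: int, total_t1: int) -> float:
    """Percentage of builds that succeeded between the two samples."""
    delta_total = total_t1 - total_t0
    if delta_total == 0:
        return 100.0  # no builds in the window: treat as healthy
    return (success_t1 - success_t0) / delta_total * 100

# 50 builds ran in the window, 45 succeeded
rate = success_rate(success_t0=940, success_t1=985, total_t0=1000, total_t1=1050)
```

Prometheus additionally handles counter resets and per-second extrapolation inside `rate()`, which this simplified sketch ignores.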
Grafana Dashboard for Jenkins
{
"dashboard": {
"id": null,
"title": "Jenkins Performance Dashboard",
"tags": ["jenkins", "ci-cd", "devops"],
"timezone": "browser",
"panels": [
{
"id": 1,
"title": "Build Success Rate",
"type": "stat",
"targets": [
{
"expr": "rate(jenkins_builds_success_build_count[5m]) / rate(jenkins_builds_total_build_count[5m]) * 100",
"legendFormat": "Success Rate %"
}
],
"fieldConfig": {
"defaults": {
"color": {
"mode": "thresholds"
},
"thresholds": {
"steps": [
{"color": "red", "value": 0},
{"color": "yellow", "value": 80},
{"color": "green", "value": 95}
]
},
"unit": "percent"
}
},
"gridPos": {"h": 8, "w": 12, "x": 0, "y": 0}
},
{
"id": 2,
"title": "Average Build Duration",
"type": "stat",
"targets": [
{
"expr": "avg(jenkins_builds_duration_milliseconds_summary{quantile=\"0.5\"}) / 1000",
"legendFormat": "Median Duration"
}
],
"fieldConfig": {
"defaults": {
"unit": "s",
"color": {
"mode": "thresholds"
},
"thresholds": {
"steps": [
{"color": "green", "value": 0},
{"color": "yellow", "value": 300},
{"color": "red", "value": 600}
]
}
}
},
"gridPos": {"h": 8, "w": 12, "x": 12, "y": 0}
},
{
"id": 3,
"title": "Build Queue Length",
"type": "graph",
"targets": [
{
"expr": "jenkins_queue_size_value",
"legendFormat": "Queue Size"
}
],
"yAxes": [
{
"label": "Number of Jobs",
"min": 0
}
],
"gridPos": {"h": 8, "w": 24, "x": 0, "y": 8}
},
{
"id": 4,
"title": "Node Utilization",
"type": "graph",
"targets": [
{
"expr": "jenkins_node_online_value",
"legendFormat": "{{node}} - Online"
},
{
"expr": "jenkins_node_executors_value",
"legendFormat": "{{node}} - Total Executors"
},
{
"expr": "jenkins_node_executors_in_use_value",
"legendFormat": "{{node}} - In Use"
}
],
"gridPos": {"h": 8, "w": 24, "x": 0, "y": 16}
},
{
"id": 5,
"title": "Build Trends by Job",
"type": "graph",
"targets": [
{
"expr": "rate(jenkins_builds_total_build_count[1h])",
"legendFormat": "{{job}} - Builds/Hour"
}
],
"gridPos": {"h": 8, "w": 24, "x": 0, "y": 24}
}
],
"time": {
"from": "now-6h",
"to": "now"
},
"refresh": "30s"
}
}
Alerting Rules for Jenkins
# jenkins-alerts.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: jenkins-alerts
namespace: jenkins
data:
jenkins.rules: |
groups:
- name: jenkins
rules:
# Alert if the success rate drops below 90%
- alert: JenkinsBuildSuccessRateLow
expr: rate(jenkins_builds_success_build_count[10m]) / rate(jenkins_builds_total_build_count[10m]) * 100 < 90
for: 5m
labels:
severity: warning
service: jenkins
annotations:
summary: "Jenkins build success rate is low"
description: "Build success rate has been below 90% for more than 5 minutes. Current rate: {{ $value }}%"
# Alert when too many jobs are queued
- alert: JenkinsQueueTooHigh
expr: jenkins_queue_size_value > 10
for: 2m
labels:
severity: warning
service: jenkins
annotations:
summary: "Jenkins build queue is too high"
description: "There are {{ $value }} jobs waiting in the queue"
# Alert if Jenkins is down
- alert: JenkinsDown
expr: up{job="jenkins"} == 0
for: 1m
labels:
severity: critical
service: jenkins
annotations:
summary: "Jenkins is down"
description: "Jenkins has been down for more than 1 minute"
# Alert if a node is offline
- alert: JenkinsNodeOffline
expr: jenkins_node_online_value == 0
for: 5m
labels:
severity: warning
service: jenkins
annotations:
summary: "Jenkins node is offline"
description: "Node {{ $labels.node }} has been offline for more than 5 minutes"
# Alert if builds take too long
- alert: JenkinsBuildDurationHigh
expr: avg(jenkins_builds_duration_milliseconds_summary{quantile="0.95"}) / 1000 > 1800
for: 10m
labels:
severity: warning
service: jenkins
annotations:
summary: "Jenkins builds are taking too long"
description: "95th percentile build duration is {{ $value }} seconds"
Scalability and Performance
Auto-Scaling Configuration
// auto-scaling-config.groovy
import org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud
import org.csanchez.jenkins.plugins.kubernetes.PodTemplate
import org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate
import hudson.model.Node
import jenkins.model.Jenkins
def jenkins = Jenkins.getInstance()
// Configure the Kubernetes cloud with auto-scaling
def kubernetesCloud = new KubernetesCloud("kubernetes-auto-scale")
kubernetesCloud.setServerUrl("https://kubernetes.default.svc.cluster.local:443")
kubernetesCloud.setNamespace("jenkins-workers")
kubernetesCloud.setJenkinsUrl("http://jenkins:8080")
kubernetesCloud.setJenkinsTunnel("jenkins:50000")
kubernetesCloud.setContainerCapStr("100")
kubernetesCloud.setMaxRequestsPerHostStr("32")
kubernetesCloud.setRetentionTimeout(5)
// Template for lightweight builds
def lightweightTemplate = new PodTemplate()
lightweightTemplate.setName("lightweight-worker")
lightweightTemplate.setNamespace("jenkins-workers")
lightweightTemplate.setLabel("lightweight")
lightweightTemplate.setNodeUsageMode(Node.Mode.NORMAL)
lightweightTemplate.setIdleMinutes(5)
def lightweightContainer = new ContainerTemplate()
lightweightContainer.setName("worker")
lightweightContainer.setImage("jenkins/inbound-agent:latest")
lightweightContainer.setAlwaysPullImage(false)
lightweightContainer.setResourceRequestMemory("256Mi")
lightweightContainer.setResourceLimitMemory("512Mi")
lightweightContainer.setResourceRequestCpu("250m")
lightweightContainer.setResourceLimitCpu("500m")
lightweightTemplate.getContainers().add(lightweightContainer)
// Template for heavy builds
def heavyTemplate = new PodTemplate()
heavyTemplate.setName("heavy-worker")
heavyTemplate.setNamespace("jenkins-workers")
heavyTemplate.setLabel("heavy-compute")
heavyTemplate.setNodeUsageMode(Node.Mode.EXCLUSIVE)
heavyTemplate.setIdleMinutes(2)
def heavyContainer = new ContainerTemplate()
heavyContainer.setName("worker")
heavyContainer.setImage("jenkins/inbound-agent:latest")
heavyContainer.setAlwaysPullImage(false)
heavyContainer.setResourceRequestMemory("2Gi")
heavyContainer.setResourceLimitMemory("4Gi")
heavyContainer.setResourceRequestCpu("1")
heavyContainer.setResourceLimitCpu("2")
heavyTemplate.getContainers().add(heavyContainer)
// Register the templates with the cloud
kubernetesCloud.addTemplate(lightweightTemplate)
kubernetesCloud.addTemplate(heavyTemplate)
// Add the cloud to Jenkins
jenkins.clouds.add(kubernetesCloud)
jenkins.save()
println "Auto-scaling configuration applied successfully"
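Given the resource limits declared in the two templates, it is worth estimating how many concurrent workers a cluster node can actually host. A rough capacity sketch (Python; the node sizes are hypothetical, adjust them to your cluster):

```python
# Rough per-node capacity estimate for the two pod templates above.
# A pod's effective footprint is bounded by the scarcer of CPU and memory.

def pods_per_node(node_cpu_m: int, node_mem_mi: int,
                  pod_cpu_m: int, pod_mem_mi: int) -> int:
    """Max pods per node, limited by whichever resource runs out first."""
    return min(node_cpu_m // pod_cpu_m, node_mem_mi // pod_mem_mi)


# Hypothetical 4-vCPU / 16 GiB worker node
light = pods_per_node(4000, 16384, 500, 512)    # lightweight limits: 500m / 512Mi
heavy = pods_per_node(4000, 16384, 2000, 4096)  # heavy limits: 2 CPU / 4Gi
print(light, heavy)  # 8 2
```

In practice the `containerCapStr` of 100 is a Jenkins-side ceiling; the real limit is the Kubernetes cluster's schedulable capacity, as estimated above.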
Performance Optimization
// performance-optimization.groovy
import jenkins.model.Jenkins
import hudson.model.Job
import hudson.model.LoadStatistics
import hudson.tasks.LogRotator
import java.util.logging.Logger
def logger = Logger.getLogger("performance-optimization")
def jenkins = Jenkins.getInstance()
// Tune the load-statistics sampling system properties
System.setProperty("hudson.model.LoadStatistics.clock", "1000")
System.setProperty("hudson.model.LoadStatistics.decay", "0.9")
// Optimize executor configuration: run builds only on agents, never on the controller
jenkins.setNumExecutors(0)
// Configure automatic workspace cleanup
jenkins.getDescriptorByType(hudson.plugins.ws_cleanup.WsCleanup.DescriptorImpl.class).with {
setDisableDeferredWipeout(false)
setDeferredWipeoutMinutes(60)
}
// Clean up old builds automatically
jenkins.getAllItems(Job.class).each { job ->
    if (job.getBuildDiscarder() == null) {
        job.setBuildDiscarder(new LogRotator(
            -1, // daysToKeep: unlimited
            30, // numToKeep: keep the last 30 builds
            -1, // artifactDaysToKeep: unlimited
            10  // artifactNumToKeep: keep artifacts from the last 10 builds
        ))
        job.save()
    }
}
// Additional tuning system properties
System.setProperty("hudson.model.Run.ArtifactList.treeCutoff", "40")
System.setProperty("jenkins.util.groovy.GroovyHookScript.ROOT_PATH", "/tmp")
// Increase the inbound agent connection timeout
if (System.getProperty("jenkins.slaves.DefaultJnlpSlaveReceiver.connectionTimeout") == null) {
System.setProperty("jenkins.slaves.DefaultJnlpSlaveReceiver.connectionTimeout", "60000")
}
logger.info("Performance optimizations applied")
// Print a load report
def stats = jenkins.getOverallLoad()
println "Current load statistics:"
println "  Total executors: ${jenkins.getComputers().sum { it.getNumExecutors() }}"
println "  Busy executors: ${jenkins.getComputers().sum { it.countBusy() }}"
println "  Queue length: ${jenkins.getQueue().getItems().length}"
println "  Load snapshot: ${stats.computeSnapshot()}"
Integration with DevOps Tools
Terraform Pipeline
// terraform-pipeline.groovy
pipeline {
agent {
kubernetes {
yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: terraform
    image: hashicorp/terraform:1.3
    command:
    - cat
    tty: true
  - name: aws-cli
    image: amazon/aws-cli:latest
    command:
    - cat
    tty: true
"""
}
}
parameters {
choice(
name: 'ENVIRONMENT',
choices: ['dev', 'staging', 'prod'],
description: 'Target environment'
)
choice(
name: 'ACTION',
choices: ['plan', 'apply', 'destroy'],
description: 'Terraform action to perform'
)
booleanParam(
name: 'AUTO_APPROVE',
defaultValue: false,
description: 'Auto approve terraform apply'
)
}
environment {
TF_VAR_environment = "${params.ENVIRONMENT}"
AWS_DEFAULT_REGION = 'us-west-2'
TF_IN_AUTOMATION = 'true'
TF_INPUT = 'false'
}
stages {
stage('Checkout') {
steps {
checkout scm
container('terraform') {
sh 'terraform version'
}
}
}
stage('Terraform Init') {
steps {
container('terraform') {
sh '''
cd terraform/environments/${ENVIRONMENT}
terraform init \
-backend-config="bucket=company-terraform-state" \
-backend-config="key=infrastructure/${ENVIRONMENT}/terraform.tfstate" \
-backend-config="region=us-west-2"
'''
}
}
}
stage('Terraform Plan') {
steps {
container('terraform') {
sh '''
cd terraform/environments/${ENVIRONMENT}
terraform plan \
    -var-file="terraform.tfvars" \
    -out=tfplan
# -detailed-exitcode is intentionally omitted: it makes plan exit with code 2
# when changes are present, which this sh step would treat as a failure
'''
}
script {
// Analyze the plan and summarize the changes
def planOutput = sh(
script: 'cd terraform/environments/${ENVIRONMENT} && terraform show -json tfplan',
returnStdout: true
).trim()
def planData = readJSON text: planOutput
def resourceChanges = planData.resource_changes ?: []
def toAdd = resourceChanges.findAll { it.change.actions.contains('create') }.size()
def toChange = resourceChanges.findAll { it.change.actions.contains('update') }.size()
def toDestroy = resourceChanges.findAll { it.change.actions.contains('delete') }.size()
env.TERRAFORM_CHANGES = "Add: ${toAdd}, Change: ${toChange}, Destroy: ${toDestroy}"
if (toDestroy > 0 && params.ENVIRONMENT == 'prod') {
error("Destructive changes detected in production environment")
}
}
}
}
stage('Terraform Apply') {
when {
anyOf {
expression { params.ACTION == 'apply' }
expression { params.ACTION == 'destroy' }
}
}
steps {
script {
def needsApproval = !params.AUTO_APPROVE || params.ENVIRONMENT == 'prod'
if (needsApproval) {
// Send a Slack notification requesting approval
slackSend(
channel: '#infrastructure',
color: '#FFAA00',
message: """
:warning: Terraform ${params.ACTION} approval needed
Environment: ${params.ENVIRONMENT}
Changes: ${env.TERRAFORM_CHANGES}
Job: ${env.BUILD_URL}
"""
)
timeout(time: 30, unit: 'MINUTES') {
input message: """
Approve Terraform ${params.ACTION} for ${params.ENVIRONMENT}?
Changes: ${env.TERRAFORM_CHANGES}
""", submitter: 'devops-leads,platform-team'
}
}
}
container('terraform') {
script {
if (params.ACTION == 'apply') {
sh '''
cd terraform/environments/${ENVIRONMENT}
terraform apply tfplan
'''
} else if (params.ACTION == 'destroy') {
sh '''
cd terraform/environments/${ENVIRONMENT}
terraform destroy -auto-approve -var-file="terraform.tfvars"
'''
}
}
}
}
}
stage('Infrastructure Validation') {
when {
expression { params.ACTION == 'apply' }
}
parallel {
stage('AWS Resource Validation') {
steps {
container('aws-cli') {
sh '''
# Validate critical resources
aws ec2 describe-instances --filters "Name=tag:Environment,Values=${ENVIRONMENT}" --query 'Reservations[].Instances[].State.Name'
aws rds describe-db-instances --query "DBInstances[?contains(DBInstanceIdentifier, '${ENVIRONMENT}')].DBInstanceStatus"
'''
}
}
}
stage('Connectivity Tests') {
steps {
container('terraform') {
sh '''
cd terraform/environments/${ENVIRONMENT}
# Fetch Terraform outputs
LOAD_BALANCER_URL=$(terraform output -raw load_balancer_url)
DATABASE_ENDPOINT=$(terraform output -raw database_endpoint)
# Connectivity test
curl -f --max-time 30 "${LOAD_BALANCER_URL}/health" || exit 1
# Database check (if reachable)
if [ ! -z "${DATABASE_ENDPOINT}" ]; then
timeout 10 bash -c "</dev/tcp/${DATABASE_ENDPOINT}/5432" || echo "Database connectivity check failed"
fi
'''
}
}
}
}
}
}
post {
always {
container('terraform') {
// Archive the plan for future reference
archiveArtifacts artifacts: "terraform/environments/${params.ENVIRONMENT}/tfplan",
    allowEmptyArchive: true
}
}
success {
slackSend(
channel: '#infrastructure',
color: '#00FF00',
message: """
:white_check_mark: Terraform ${params.ACTION} completed successfully
Environment: ${params.ENVIRONMENT}
Changes: ${env.TERRAFORM_CHANGES}
"""
)
}
failure {
slackSend(
channel: '#infrastructure',
color: '#FF0000',
message: """
:x: Terraform ${params.ACTION} failed
Environment: ${params.ENVIRONMENT}
Job: ${env.BUILD_URL}
"""
)
}
}
}
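The plan-analysis step above parses `terraform show -json` output and counts the actions per resource change. The same logic in standalone form (Python, with a hypothetical plan fragment; note that a resource replacement appears as both `delete` and `create` in a single change):

```python
import json

# Mirror of the pipeline's plan analysis: count create/update/delete actions
# from `terraform show -json tfplan` output and refuse destructive changes in prod.

def summarize_plan(plan: dict) -> dict:
    changes = plan.get("resource_changes") or []
    def count(action: str) -> int:
        return sum(1 for rc in changes if action in rc["change"]["actions"])
    return {"add": count("create"), "change": count("update"), "destroy": count("delete")}


def guard_prod(summary: dict, environment: str) -> None:
    """Equivalent of the pipeline's error() on destructive prod changes."""
    if environment == "prod" and summary["destroy"] > 0:
        raise RuntimeError("Destructive changes detected in production environment")


# Hypothetical, heavily trimmed plan JSON
sample = json.loads('{"resource_changes": ['
                    '{"change": {"actions": ["create"]}},'
                    '{"change": {"actions": ["delete", "create"]}}]}')
print(summarize_plan(sample))  # {'add': 2, 'change': 0, 'destroy': 1}
```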
Best Practices and Governance
Shared Libraries for Reuse
// vars/standardPipeline.groovy - Shared Library
def call(Map config) {
pipeline {
agent none
options {
buildDiscarder(logRotator(
numToKeepStr: config.buildRetention ?: '10',
daysToKeepStr: '30'
))
timeout(time: config.timeoutMinutes ?: 30, unit: 'MINUTES')
skipStagesAfterUnstable()
}
environment {
BUILD_VERSION = "${env.BUILD_NUMBER}-${env.GIT_COMMIT.take(7)}"
REGISTRY_URL = config.registryUrl ?: 'your-registry.com'
APP_NAME = config.appName
}
stages {
stage('Setup') {
agent {
kubernetes {
yaml libraryResource('pod-templates/build-pod.yaml')
}
}
steps {
setupStage(config)
}
}
stage('Test') {
parallel {
stage('Unit Tests') {
agent {
kubernetes {
yaml libraryResource("pod-templates/${config.testRunner}-pod.yaml")
}
}
steps {
runTests(config, 'unit')
}
post {
always {
junit testResults: config.testResultsPattern ?: 'test-results.xml'
}
}
}
stage('Security Scan') {
agent {
kubernetes {
yaml libraryResource('pod-templates/security-pod.yaml')
}
}
steps {
runSecurityScan(config)
}
}
}
}
stage('Build') {
agent {
kubernetes {
yaml libraryResource('pod-templates/docker-pod.yaml')
}
}
steps {
buildAndPushImage(config)
}
}
stage('Deploy') {
when {
anyOf {
branch 'main'
branch 'develop'
}
}
agent {
kubernetes {
yaml libraryResource('pod-templates/deploy-pod.yaml')
}
}
steps {
deployApplication(config)
}
}
}
post {
always {
cleanWs()
}
success {
sendNotification(config, 'success')
}
failure {
sendNotification(config, 'failure')
}
}
}
}
// vars/setupStage.groovy
def call(Map config) {
script {
echo "Setting up pipeline for ${config.appName}"
// Validate required configuration
def requiredFields = ['appName', 'testRunner']
requiredFields.each { field ->
if (!config[field]) {
error("Required configuration field missing: ${field}")
}
}
// Determine the deployment environment
env.DEPLOY_ENVIRONMENT = env.BRANCH_NAME == 'main' ? 'production' :
env.BRANCH_NAME == 'develop' ? 'staging' : 'development'
echo "Deployment environment: ${env.DEPLOY_ENVIRONMENT}"
}
}
// vars/runTests.groovy
def call(Map config, String testType) {
script {
switch(config.testRunner) {
case 'npm':
sh "npm ci"
sh "npm run test:${testType}"
break
case 'maven':
sh "mvn test -Dtest.type=${testType}"
break
case 'gradle':
sh "./gradlew ${testType}Test"
break
case 'pytest':
sh "pip install -r requirements.txt"
sh "pytest tests/${testType}/"
break
default:
error("Unsupported test runner: ${config.testRunner}")
}
}
}
// vars/buildAndPushImage.groovy
def call(Map config) {
script {
def imageTag = "${env.REGISTRY_URL}/${config.appName}:${env.BUILD_VERSION}"
def latestTag = "${env.REGISTRY_URL}/${config.appName}:latest"
// Build image
sh """
docker build \
--build-arg BUILD_DATE=\$(date -u +"%Y-%m-%dT%H:%M:%SZ") \
--build-arg VCS_REF=${env.GIT_COMMIT} \
--build-arg VERSION=${env.BUILD_VERSION} \
-t ${imageTag} \
-t ${latestTag} \
${config.dockerContext ?: '.'}
"""
// Security scan
if (config.enableSecurityScan != false) {
sh "trivy image --format json --output trivy-report.json ${imageTag}"
archiveArtifacts artifacts: 'trivy-report.json'
}
// Push image
sh "docker push ${imageTag}"
if (env.BRANCH_NAME == 'main') {
sh "docker push ${latestTag}"
}
}
}
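The tags assembled above follow the `BUILD_VERSION` convention (`<build number>-<short commit>`). The same tag construction in standalone form (Python; the registry, app name, and commit hash are placeholders):

```python
# Sketch of the image-tag assembly used in buildAndPushImage.

def image_tags(registry: str, app: str, build_number: str, git_commit: str) -> list[str]:
    version = f"{build_number}-{git_commit[:7]}"  # mirrors BUILD_VERSION
    return [f"{registry}/{app}:{version}", f"{registry}/{app}:latest"]


tags = image_tags("your-registry.com", "payments-api", "42",
                  "9fceb02d0ae598e95dc970b74767f19372d61af8")
print(tags[0])  # your-registry.com/payments-api:42-9fceb02
```

Pinning the version tag to the short commit keeps every image traceable back to the exact source revision, while `latest` is pushed only from `main`.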
Governance and Compliance
// governance-policies.groovy
@Library('company-shared-library') _
import groovy.json.JsonBuilder
import java.util.regex.Pattern
class PipelineGovernance {
static void validatePipelineCompliance(script, Map config) {
def violations = []
// Rule: all pipelines must run tests
if (!config.enableTests) {
violations.add("Tests are mandatory for all pipelines")
}
// Rule: production pipelines must include a security scan
if (script.env.BRANCH_NAME == 'main' && !config.enableSecurityScan) {
violations.add("Security scanning is mandatory for production pipelines")
}
// Rule: enforce naming conventions
if (!Pattern.matches('^[a-z0-9-]+$', config.appName)) {
violations.add("Application name must follow naming convention: lowercase, numbers, and hyphens only")
}
// Rule: critical pipelines must configure notifications
if (config.criticality == 'high' && !config.notifications) {
violations.add("High criticality applications must have notifications configured")
}
// Rule: verify that credentials are handled safely
validateCredentials(script, config, violations)
if (violations.size() > 0) {
def violationReport = new JsonBuilder([
timestamp: new Date().toString(),
pipeline: script.env.JOB_NAME,
branch: script.env.BRANCH_NAME,
violations: violations
]).toPrettyString()
script.writeFile file: 'governance-violations.json', text: violationReport
script.archiveArtifacts artifacts: 'governance-violations.json'
// Send the report to the governance team
script.emailext(
subject: "Pipeline Governance Violations - ${script.env.JOB_NAME}",
body: "Governance violations found. See attached report.",
attachmentsPattern: 'governance-violations.json',
to: '${GOVERNANCE_TEAM_EMAIL}'
)
script.error("Pipeline governance violations found: ${violations.join(', ')}")
}
}
static void validateCredentials(script, Map config, List violations) {
// Check for hardcoded credentials
def suspiciousPatterns = [
/password\s*[:=]\s*['"][^'"]+['"]/,
/api[_-]?key\s*[:=]\s*['"][^'"]+['"]/,
/secret\s*[:=]\s*['"][^'"]+['"]/,
/token\s*[:=]\s*['"][^'"]+['"]/
]
def workspace = script.env.WORKSPACE
suspiciousPatterns.each { pattern ->
    // Escape embedded single quotes so the pattern survives shell quoting
    def shellPattern = pattern.replace("'", "'\\''")
    def result = script.sh(
        script: "grep -r -E '${shellPattern}' ${workspace} --exclude-dir=.git || true",
        returnStdout: true
    ).trim()
if (result) {
violations.add("Potential hardcoded credentials found in source code")
}
}
}
static void enforceResourceLimits(script, Map config) {
def maxCpu = config.maxCpu ?: '2'
def maxMemory = config.maxMemory ?: '4Gi'
def maxTimeout = config.maxTimeout ?: 60
// Validate that requested resources do not exceed the limits
if (config.resources?.limits?.cpu &&
Integer.parseInt(config.resources.limits.cpu) > Integer.parseInt(maxCpu)) {
script.error("CPU limit exceeds maximum allowed: ${maxCpu}")
}
if (config.resources?.limits?.memory &&
parseMemory(config.resources.limits.memory) > parseMemory(maxMemory)) {
script.error("Memory limit exceeds maximum allowed: ${maxMemory}")
}
if (config.timeoutMinutes && config.timeoutMinutes > maxTimeout) {
script.error("Timeout exceeds maximum allowed: ${maxTimeout} minutes")
}
}
static long parseMemory(String memory) {
def matcher = memory =~ /(\d+)([GMK]?i?)/
if (matcher.matches()) {
def value = Long.parseLong(matcher.group(1))
def unit = matcher.group(2)
switch (unit) {
case 'Gi': return value * 1024 * 1024 * 1024
case 'Mi': return value * 1024 * 1024
case 'Ki': return value * 1024
case 'G': return value * 1000 * 1000 * 1000
case 'M': return value * 1000 * 1000
case 'K': return value * 1000
default: return value
}
}
return 0
}
static void generateComplianceReport(script, Map config) {
def report = [
timestamp: new Date().toString(),
pipeline: script.env.JOB_NAME,
branch: script.env.BRANCH_NAME,
build_number: script.env.BUILD_NUMBER,
compliance_checks: [
security_scan: config.enableSecurityScan ?: false,
tests_enabled: config.enableTests ?: false,
resource_limits_defined: config.resources != null,
notifications_configured: config.notifications != null,
naming_compliant: Pattern.matches('^[a-z0-9-]+$', config.appName)
],
tools_used: [
static_analysis: config.sonarqube?.enabled ?: false,
dependency_scan: config.dependencyCheck?.enabled ?: false,
container_scan: config.enableSecurityScan ?: false
]
]
def complianceScore = calculateComplianceScore(report.compliance_checks)
report.compliance_score = complianceScore
def reportJson = new JsonBuilder(report).toPrettyString()
script.writeFile file: 'compliance-report.json', text: reportJson
script.archiveArtifacts artifacts: 'compliance-report.json'
// Send metrics to the monitoring system
script.sh """
curl -X POST http://metrics-collector:8080/compliance \
-H "Content-Type: application/json" \
-d '${reportJson}' || true
"""
}
static int calculateComplianceScore(Map checks) {
def totalChecks = checks.size()
def passedChecks = checks.values().count { it == true }
return (int) ((passedChecks / totalChecks) * 100)
}
}
// Usage in a pipeline
def call(Map config) {
// Validate compliance before running the pipeline
PipelineGovernance.validatePipelineCompliance(this, config)
PipelineGovernance.enforceResourceLimits(this, config)
// Run the standard pipeline
standardPipeline(config)
// Generate the compliance report
PipelineGovernance.generateComplianceReport(this, config)
}
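The `parseMemory` and scoring helpers above are easy to get subtly wrong, since Kubernetes mixes binary units (`Gi`, `Mi`, `Ki`) with decimal ones (`G`, `M`, `K`). The same logic in standalone form (Python, for illustration only):

```python
import re

# Standalone sketch of the governance helpers: memory-unit parsing and
# the compliance score as a percentage of passed checks.

_UNITS = {"Gi": 1024**3, "Mi": 1024**2, "Ki": 1024,
          "G": 1000**3, "M": 1000**2, "K": 1000, "": 1}


def parse_memory(memory: str) -> int:
    """Convert a quantity like '4Gi' or '512M' to bytes; 0 if unparseable."""
    m = re.fullmatch(r"(\d+)(Gi|Mi|Ki|G|M|K|)", memory)
    return int(m.group(1)) * _UNITS[m.group(2)] if m else 0


def compliance_score(checks: dict) -> int:
    """Percentage of checks that passed, truncated to an int."""
    return int(sum(1 for v in checks.values() if v is True) / len(checks) * 100)


print(parse_memory("4Gi") > parse_memory("2Gi"))  # True
print(compliance_score({"tests": True, "scan": True,
                        "naming": False, "notify": True}))  # 75
```

Comparing quantities in bytes, rather than as raw strings, is what lets the governance check reject a `6Gi` request against a `4Gi` ceiling correctly.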
Conclusion
Jenkins remains a cornerstone of the modern DevOps ecosystem, continually demonstrating its ability to adapt and evolve. Throughout this guide we have covered everything from basic setups to advanced enterprise configurations that let organizations of any size scale Jenkins effectively.
Key Benefits of a Robust Jenkins Implementation
Full Lifecycle Automation:
- Continuous integration with automatic code validation
- Automated deployments with multiple strategies
- Automated testing across multiple dimensions (unit, integration, security, performance)
- Intelligent rollbacks and version management
Enterprise-Grade Scalability and Performance:
- Distributed architecture with auto-scaling on Kubernetes
- Resource optimization through specialized pod templates
- Efficient build-queue management and load balancing
- Proactive monitoring with automated alerts
Built-In Security and Compliance:
- Configuration as Code for full traceability
- Automated vulnerability scanning of code and images
- Automated governance policies
- Continuous auditing and compliance reporting
Critical Success Factors
- Gradual Adoption: start with simple setups and evolve incrementally
- DevOps Culture: align the technical rollout with organizational culture change
- Continuous Monitoring: establish observability from day one to drive ongoing optimization
- Proactive Governance: put policies and controls in place from the early phases
- Team Training: invest in continuous upskilling of the technical team
The Future of Jenkins in DevOps
Jenkins keeps evolving to stay relevant in a changing technology landscape:
- Cloud-Native First: deeper integration with Kubernetes and cloud services
- Security by Design: advanced security capabilities built in natively
- AI/ML Integration: intelligent automation of pipeline decisions
- Developer Experience: continuous improvements in usability and user experience
- Ecosystem Expansion: ongoing growth of the plugin and integration ecosystem
A successful Jenkins implementation takes a holistic approach that combines technical excellence, sound operational practices, and strategic organizational alignment. Organizations that master these dimensions gain significant competitive advantages in delivery speed, software quality, and capacity for innovation.
Jenkins is not just a CI/CD tool; it is a platform that enables end-to-end digital transformation, letting organizations respond quickly to market demands while maintaining high standards of quality and security.
Additional Resources
Official Documentation and References
- Jenkins Official Documentation - complete, up-to-date documentation
- Pipeline Syntax Reference - full Jenkinsfile syntax guide
- Plugin Index - complete catalog of available plugins
- Configuration as Code Documentation - official JCasC guide
Community Tools and Resources
- Jenkins X - cloud-native solution based on Jenkins for Kubernetes
- Blue Ocean - modern, user-friendly interface for Jenkins
- Jenkins Operator - Kubernetes operator for declarative management
- Shared Libraries Examples - examples and best practices
Community and Support
- Jenkins Community - forums, events, and community contribution
- CloudBees - commercial support and enterprise features
- Jenkins User Conferences - community events and conferences
- Awesome Jenkins - curated list of Jenkins resources