AWS Lambda: Mastering Serverless Computing on AWS

AWS Lambda has revolutionized application development by introducing the serverless paradigm, letting developers run code without managing servers. This comprehensive guide takes you from the fundamentals to advanced implementations, with real-world examples and production best practices.

AWS Lambda and Serverless Fundamentals

What Is AWS Lambda?

AWS Lambda is a serverless compute service that runs your code in response to events without the need to provision or manage servers. Launched in 2014, Lambda marked the start of the serverless era in the cloud.

Key characteristics:

  • Event-driven: runs in response to events
  • Stateless: each invocation is independent
  • Auto-scaling: scales automatically from 0 to 1,000+ concurrent executions
  • Pay-per-use: you pay only for the compute time you consume (see the worked example below)
  • Managed runtime: AWS manages the underlying infrastructure
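
The pay-per-use model is easy to reason about with concrete numbers. A quick worked sketch, using the published x86 prices of $0.0000166667 per GB-second and $0.20 per million requests (free tier ignored):

# lambda_cost.py - back-of-the-envelope Lambda cost estimate
PRICE_PER_GB_SECOND = 0.0000166667  # x86, most regions
PRICE_PER_REQUEST = 0.20 / 1_000_000

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# 5M invocations/month at 120 ms average on 256 MB:
# ~150,000 GB-s => ~$2.50 compute + $1.00 requests
print(f"${monthly_cost(5_000_000, 120, 256):.2f}")  # -> $3.50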

Evolution of the Serverless Paradigm

timeline
    title Evolution Toward Serverless
    
    section Physical Servers (1990s-2000s)
        Dedicated hardware : High upfront cost
                           : Low utilization
                           : Complex management
    
    section Virtualization (2000s-2010)
        Virtual machines : Better utilization
                         : Improved flexibility
                         : Still requires management
    
    section Containers (2010-2015)
        Docker & Kubernetes : Portability
                            : Microservices
                            : Complex orchestration
    
    section Serverless (2014-present)
        AWS Lambda : No server management
                   : Event-driven
                   : Pay per use

Lambda's Internal Architecture

Lambda uses a microVM-based architecture built on Firecracker:

[Architecture diagram: AWS Lambda runs each function's execution environment inside a Firecracker microVM, with control-plane services handling scaling, monitoring, and security]

Configuration and Development

Creating a Lambda Function

Example 1: Basic Node.js Function

// index.js - Basic event processor
exports.handler = async (event, context) => {
    console.log('Event received:', JSON.stringify(event, null, 2));
    console.log('Context:', JSON.stringify(context, null, 2));
    
    // Extract information from the context
    const {
        functionName,
        functionVersion,
        memoryLimitInMB,
        getRemainingTimeInMillis
    } = context;
    
    try {
        // Processing logic
        const result = await processEvent(event);
        
        return {
            statusCode: 200,
            headers: {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': '*'
            },
            body: JSON.stringify({
                message: 'Success',
                functionName,
                functionVersion,
                memoryLimit: memoryLimitInMB,
                remainingTime: getRemainingTimeInMillis(),
                result
            })
        };
    } catch (error) {
        console.error('Error processing event:', error);
        
        return {
            statusCode: 500,
            body: JSON.stringify({
                message: 'Internal Server Error',
                error: error.message
            })
        };
    }
};

async function processEvent(event) {
    // Simulate asynchronous processing
    await new Promise(resolve => setTimeout(resolve, 100));
    
    return {
        timestamp: new Date().toISOString(),
        eventSource: event.source || 'unknown',
        processedAt: Date.now()
    };
}
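
Once deployed, the function can be exercised without wiring up any trigger. A minimal boto3 sketch (the function name here is a placeholder for whatever you deployed):

# invoke_example.py - invoke the deployed function directly (name is a placeholder)
import json
import boto3

lambda_client = boto3.client('lambda')

response = lambda_client.invoke(
    FunctionName='my-basic-function',
    Payload=json.dumps({'source': 'manual-test'}).encode()
)
print(json.load(response['Payload']))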

Example 2: REST API with Lambda and API Gateway

// api-handler.js - Complete REST API
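// Note: Node.js 18+ Lambda runtimes bundle AWS SDK v3 only, so the v2
// 'aws-sdk' used below must be shipped in the deployment package.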
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();

const TABLE_NAME = process.env.TABLE_NAME;

exports.handler = async (event) => {
    const { httpMethod, path, pathParameters, body } = event;
    
    console.log(`${httpMethod} ${path}`);
    
    try {
        let response;
        
        switch (httpMethod) {
            case 'GET':
                if (pathParameters && pathParameters.id) {
                    response = await getItem(pathParameters.id);
                } else {
                    response = await listItems();
                }
                break;
                
            case 'POST':
                response = await createItem(JSON.parse(body));
                break;
                
            case 'PUT':
                response = await updateItem(pathParameters.id, JSON.parse(body));
                break;
                
            case 'DELETE':
                response = await deleteItem(pathParameters.id);
                break;
                
            default:
                return createResponse(405, { error: 'Method not allowed' });
        }
        
        return createResponse(200, response);
    } catch (error) {
        console.error('API Error:', error);
        return createResponse(500, { error: 'Internal server error' });
    }
};

// CRUD operations
async function getItem(id) {
    const params = {
        TableName: TABLE_NAME,
        Key: { id }
    };
    
    const result = await dynamodb.get(params).promise();
    
    if (!result.Item) {
        throw new Error('Item not found');
    }
    
    return result.Item;
}

async function listItems() {
    const params = {
        TableName: TABLE_NAME,
        Limit: 100
    };
    
    const result = await dynamodb.scan(params).promise();
    return {
        items: result.Items,
        count: result.Count
    };
}

async function createItem(item) {
    const newItem = {
        ...item,
        id: generateId(),
        createdAt: new Date().toISOString(),
        updatedAt: new Date().toISOString()
    };
    
    const params = {
        TableName: TABLE_NAME,
        Item: newItem,
        ConditionExpression: 'attribute_not_exists(id)'
    };
    
    await dynamodb.put(params).promise();
    return newItem;
}

async function updateItem(id, updates) {
    const params = {
        TableName: TABLE_NAME,
        Key: { id },
        UpdateExpression: 'SET #updatedAt = :updatedAt',
        ExpressionAttributeNames: {
            '#updatedAt': 'updatedAt'
        },
        ExpressionAttributeValues: {
            ':updatedAt': new Date().toISOString()
        },
        ReturnValues: 'ALL_NEW'
    };
    
    // Build the update expression dynamically
    Object.keys(updates).forEach((key) => {
        if (key !== 'id') {
            params.UpdateExpression += `, #${key} = :${key}`;
            params.ExpressionAttributeNames[`#${key}`] = key;
            params.ExpressionAttributeValues[`:${key}`] = updates[key];
        }
    });
    
    const result = await dynamodb.update(params).promise();
    return result.Attributes;
}

async function deleteItem(id) {
    const params = {
        TableName: TABLE_NAME,
        Key: { id },
        ReturnValues: 'ALL_OLD'
    };
    
    const result = await dynamodb.delete(params).promise();
    
    if (!result.Attributes) {
        throw new Error('Item not found');
    }
    
    return { message: 'Item deleted successfully' };
}

function createResponse(statusCode, body) {
    return {
        statusCode,
        headers: {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Headers': 'Content-Type,Authorization',
            'Access-Control-Allow-Methods': 'GET,POST,PUT,DELETE,OPTIONS'
        },
        body: JSON.stringify(body)
    };
}

function generateId() {
    return Date.now().toString(36) + Math.random().toString(36).slice(2);
}
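
With the function wired to API Gateway (see the SAM template later in this guide), the CRUD endpoints can be smoke-tested over HTTP. A sketch using the requests library; api_url is a placeholder for your deployed stage URL, and it assumes the stage allows unauthenticated calls:

# api_smoke_test.py - exercise the CRUD endpoints (api_url is a placeholder)
import requests

api_url = 'https://abc123.execute-api.us-east-1.amazonaws.com/dev'

# Create an item, then read, update, and delete it
created = requests.post(f'{api_url}/items', json={'name': 'demo'}).json()
item_id = created['id']

print(requests.get(f'{api_url}/items/{item_id}').json())
requests.put(f'{api_url}/items/{item_id}', json={'name': 'demo-updated'})
requests.delete(f'{api_url}/items/{item_id}')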

Example 3: File Processing with S3

# s3_processor.py - S3 file processing
import io
import json
import logging
import os
from datetime import datetime, timezone
from urllib.parse import unquote_plus

import boto3
from PIL import Image

# Configure logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    """
    Process images uploaded to S3: resize and generate thumbnails
    """
    
    for record in event['Records']:
        # Extract information from the S3 event (object keys arrive URL-encoded)
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])
        
        logger.info(f"Processing {key} from bucket {bucket}")
        
        try:
            # Validate that the file is an image
            if not is_image_file(key):
                logger.info(f"Skipping non-image file: {key}")
                continue
            
            # Download the original image
            image_content = download_image(bucket, key)
            
            # Process the image
            processed_images = process_image(image_content, key)
            
            # Upload the processed images
            for size_name, image_data in processed_images.items():
                upload_processed_image(bucket, key, size_name, image_data)
            
            # Update metadata in DynamoDB
            update_image_metadata(key, processed_images.keys())
            
            logger.info(f"Successfully processed {key}")
            
        except Exception as e:
            logger.error(f"Error processing {key}: {str(e)}")
            raise

def is_image_file(key):
    """Verificar si el archivo es una imagen válida"""
    image_extensions = {'.jpg', '.jpeg', '.png', '.gif', '.webp'}
    return any(key.lower().endswith(ext) for ext in image_extensions)

def download_image(bucket, key):
    """Descargar imagen desde S3"""
    try:
        response = s3_client.get_object(Bucket=bucket, Key=key)
        return response['Body'].read()
    except Exception as e:
        logger.error(f"Error downloading {key}: {str(e)}")
        raise

def process_image(image_content, original_key):
    """Procesar imagen: crear múltiples tamaños"""
    
    # Size configuration
    sizes = {
        'thumbnail': (150, 150),
        'small': (400, 400),
        'medium': (800, 600),
        'large': (1200, 900)
    }
    
    processed_images = {}
    
    try:
        # Open the image with PIL
        with Image.open(io.BytesIO(image_content)) as img:
            # Convert to RGB if needed
            if img.mode in ('RGBA', 'LA', 'P'):
                img = img.convert('RGB')
            
            original_width, original_height = img.size
            logger.info(f"Original size: {original_width}x{original_height}")
            
            for size_name, (max_width, max_height) in sizes.items():
                # Compute the new size, preserving aspect ratio
                width, height = calculate_resize_dimensions(
                    original_width, original_height, max_width, max_height
                )
                
                # Resize the image
                resized_img = img.resize((width, height), Image.Resampling.LANCZOS)
                
                # Save to an in-memory buffer
                output_buffer = io.BytesIO()
                resized_img.save(output_buffer, format='JPEG', quality=85, optimize=True)
                processed_images[size_name] = output_buffer.getvalue()
                
                logger.info(f"Created {size_name}: {width}x{height}")
    
    except Exception as e:
        logger.error(f"Error processing image: {str(e)}")
        raise
    
    return processed_images

def calculate_resize_dimensions(orig_width, orig_height, max_width, max_height):
    """Calcular dimensiones manteniendo aspecto"""
    ratio = min(max_width / orig_width, max_height / orig_height)
    return int(orig_width * ratio), int(orig_height * ratio)

def upload_processed_image(bucket, original_key, size_name, image_data):
    """Subir imagen procesada a S3"""
    
    # Build the key for the processed image
    base_name = os.path.splitext(original_key)[0]
    processed_key = f"processed/{size_name}/{base_name}.jpg"
    
    try:
        s3_client.put_object(
            Bucket=bucket,
            Key=processed_key,
            Body=image_data,
            ContentType='image/jpeg',
            Metadata={
                'processed-size': size_name,
                'original-file': original_key,
                'processed-at': datetime.now(timezone.utc).isoformat()
            }
        )
        logger.info(f"Uploaded {processed_key}")
    
    except Exception as e:
        logger.error(f"Error uploading {processed_key}: {str(e)}")
        raise

def update_image_metadata(original_key, processed_sizes):
    """Actualizar metadata en DynamoDB"""
    
    if not os.environ.get('METADATA_TABLE'):
        logger.warning("METADATA_TABLE not configured, skipping metadata update")
        return
    
    dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table(os.environ['METADATA_TABLE'])
    
    try:
        table.put_item(
            Item={
                'image_key': original_key,
                'processed_sizes': list(processed_sizes),
                'processed_at': datetime.now(timezone.utc).isoformat(),
                'status': 'completed'
            }
        )
        logger.info(f"Updated metadata for {original_key}")
    
    except Exception as e:
        logger.error(f"Error updating metadata: {str(e)}")
        # Don't re-raise; a metadata failure shouldn't abort the whole processing run
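
For a quick local smoke test you can feed the handler a synthetic S3 event. A sketch that assumes AWS credentials are configured and that the referenced bucket and object actually exist:

# test_s3_processor.py - local smoke test (bucket and key are placeholders)
from s3_processor import lambda_handler

sample_event = {
    'Records': [{
        's3': {
            'bucket': {'name': 'my-test-bucket'},
            'object': {'key': 'uploads/photo.jpg'}
        }
    }]
}

# The handler never touches context, so None suffices for local testing
lambda_handler(sample_event, None)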

Infrastructure as Code Configuration

AWS SAM Template

# template.yaml - Complete SAM template
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: 'Complete serverless application with Lambda, API Gateway, and DynamoDB'

Globals:
  Function:
    Timeout: 30
    Runtime: nodejs18.x
    Architectures:
      - arm64  # Use Graviton for better price/performance
    Environment:
      Variables:
        POWERTOOLS_SERVICE_NAME: myapp
        POWERTOOLS_METRICS_NAMESPACE: MyApp

Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues: [dev, test, prod]
    Description: Environment name
  
  LogLevel:
    Type: String
    Default: INFO
    AllowedValues: [DEBUG, INFO, WARN, ERROR]
    Description: Log level for Lambda functions

Resources:
  # DynamoDB Table
  ItemsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: !Sub '${Environment}-items-table'
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      StreamSpecification:
        StreamViewType: NEW_AND_OLD_IMAGES
      PointInTimeRecoverySpecification:
        PointInTimeRecoveryEnabled: true
      Tags:
        - Key: Environment
          Value: !Ref Environment

  # S3 Bucket for file uploads
  FilesBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub '${Environment}-files-${AWS::AccountId}'
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: s3:ObjectCreated:*
            Function: !GetAtt FileProcessorFunction.Arn
            Filter:
              S3Key:
                Rules:
                  - Name: prefix
                    Value: uploads/
                  - Name: suffix
                    Value: .jpg
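          # Note: S3 needs permission to invoke the function; without an
          # accompanying AWS::Lambda::Permission (Principal s3.amazonaws.com),
          # this notification configuration will fail to deploy.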
      CorsConfiguration:
        CorsRules:
          - AllowedHeaders: ['*']
            AllowedMethods: [GET, PUT, POST]
            AllowedOrigins: ['*']
            MaxAge: 3000

  # API Gateway
  ApiGateway:
    Type: AWS::Serverless::Api
    Properties:
      StageName: !Ref Environment
      Cors:
        AllowMethods: "'DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT'"
        AllowHeaders: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
        AllowOrigin: "'*'"
      Auth:
        DefaultAuthorizer: CognitoAuthorizer
        Authorizers:
          CognitoAuthorizer:
            UserPoolArn: !GetAtt UserPool.Arn
      AccessLogSetting:
        DestinationArn: !GetAtt ApiLogGroup.Arn
        Format: >
          {
            "requestId": "$context.requestId",
            "requestTime": "$context.requestTime",
            "httpMethod": "$context.httpMethod",
            "resourcePath": "$context.resourcePath",
            "status": "$context.status",
            "responseLatency": $context.responseLatency,
            "userAgent": "$context.identity.userAgent",
            "sourceIp": "$context.identity.sourceIp"
          }

  # Lambda Functions
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Sub '${Environment}-api-function'
      CodeUri: src/api/
      Handler: handler.main
      MemorySize: 256
      Environment:
        Variables:
          TABLE_NAME: !Ref ItemsTable
          LOG_LEVEL: !Ref LogLevel
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref ItemsTable
      Events:
        GetItems:
          Type: Api
          Properties:
            RestApiId: !Ref ApiGateway
            Path: /items
            Method: get
        GetItem:
          Type: Api
          Properties:
            RestApiId: !Ref ApiGateway
            Path: /items/{id}
            Method: get
        CreateItem:
          Type: Api
          Properties:
            RestApiId: !Ref ApiGateway
            Path: /items
            Method: post
        UpdateItem:
          Type: Api
          Properties:
            RestApiId: !Ref ApiGateway
            Path: /items/{id}
            Method: put
        DeleteItem:
          Type: Api
          Properties:
            RestApiId: !Ref ApiGateway
            Path: /items/{id}
            Method: delete

  FileProcessorFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: !Sub '${Environment}-file-processor'
      CodeUri: src/file-processor/
      Handler: handler.main
      MemorySize: 1024
      Timeout: 300
      Runtime: python3.11
      Environment:
        Variables:
          METADATA_TABLE: !Ref ItemsTable
      Policies:
        - S3ReadPolicy:
            BucketName: !Ref FilesBucket
        - S3WritePolicy:
            BucketName: !Ref FilesBucket
        - DynamoDBWritePolicy:
            TableName: !Ref ItemsTable
      Layers:
        - !Ref PillowLayer

  # Lambda Layers
  PillowLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: !Sub '${Environment}-pillow-layer'
      Description: 'PIL/Pillow library for image processing'
      ContentUri: layers/pillow/
      CompatibleRuntimes:
        - python3.11
      RetentionPolicy: Delete

  # Cognito User Pool
  UserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      UserPoolName: !Sub '${Environment}-user-pool'
      AutoVerifiedAttributes:
        - email
      Policies:
        PasswordPolicy:
          MinimumLength: 8
          RequireUppercase: true
          RequireLowercase: true
          RequireNumbers: true
          RequireSymbols: false

  UserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      ClientName: !Sub '${Environment}-app-client'
      UserPoolId: !Ref UserPool
      GenerateSecret: false
      ExplicitAuthFlows:
        - ADMIN_NO_SRP_AUTH
        - USER_PASSWORD_AUTH

  # CloudWatch Log Groups
  ApiLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub '/aws/apigateway/${Environment}-api'
      RetentionInDays: 14

  # CloudWatch Dashboard
  Dashboard:
    Type: AWS::CloudWatch::Dashboard
    Properties:
      DashboardName: !Sub '${Environment}-serverless-app'
      DashboardBody: !Sub |
        {
          "widgets": [
            {
              "type": "metric",
              "x": 0,
              "y": 0,
              "width": 12,
              "height": 6,
              "properties": {
                "metrics": [
                  [ "AWS/Lambda", "Duration", "FunctionName", "${ApiFunction}" ],
                  [ ".", "Invocations", ".", "." ],
                  [ ".", "Errors", ".", "." ]
                ],
                "period": 300,
                "stat": "Average",
                "region": "${AWS::Region}",
                "title": "API Function Metrics"
              }
            }
          ]
        }

Outputs:
  ApiGatewayUrl:
    Description: 'API Gateway endpoint URL'
    Value: !Sub 'https://${ApiGateway}.execute-api.${AWS::Region}.amazonaws.com/${Environment}'
    Export:
      Name: !Sub '${Environment}-api-url'
  
  FilesBucket:
    Description: 'S3 bucket for file uploads'
    Value: !Ref FilesBucket
    Export:
      Name: !Sub '${Environment}-files-bucket'
  
  UserPoolId:
    Description: 'Cognito User Pool ID'
    Value: !Ref UserPool
    Export:
      Name: !Sub '${Environment}-user-pool-id'
  
  UserPoolClientId:
    Description: 'Cognito User Pool Client ID'
    Value: !Ref UserPoolClient
    Export:
      Name: !Sub '${Environment}-user-pool-client-id'

Terraform Configuration

# main.tf - Complete Terraform configuration
terraform {
  required_version = ">= 1.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.4"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

# Variables
variable "environment" {
  description = "Environment name"
  type        = string
  default     = "dev"
}

variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

# Data sources
data "aws_caller_identity" "current" {}

# S3 Bucket for Lambda code
resource "aws_s3_bucket" "lambda_code" {
  bucket = "${var.environment}-lambda-code-${data.aws_caller_identity.current.account_id}"
}

resource "aws_s3_bucket_versioning" "lambda_code" {
  bucket = aws_s3_bucket.lambda_code.id
  versioning_configuration {
    status = "Enabled"
  }
}

# DynamoDB Table
resource "aws_dynamodb_table" "items" {
  name           = "${var.environment}-items"
  billing_mode   = "PAY_PER_REQUEST"
  hash_key       = "id"

  attribute {
    name = "id"
    type = "S"
  }

  stream_enabled   = true
  stream_view_type = "NEW_AND_OLD_IMAGES"

  point_in_time_recovery {
    enabled = true
  }

  tags = {
    Environment = var.environment
    ManagedBy   = "terraform"
  }
}

# IAM Role for Lambda
resource "aws_iam_role" "lambda_role" {
  name = "${var.environment}-lambda-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

# IAM Policy for Lambda
resource "aws_iam_role_policy" "lambda_policy" {
  name = "${var.environment}-lambda-policy"
  role = aws_iam_role.lambda_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ]
        Resource = "arn:aws:logs:*:*:*"
      },
      {
        Effect = "Allow"
        Action = [
          "dynamodb:GetItem",
          "dynamodb:PutItem",
          "dynamodb:Query",
          "dynamodb:Scan",
          "dynamodb:UpdateItem",
          "dynamodb:DeleteItem"
        ]
        Resource = [
          aws_dynamodb_table.items.arn,
          "${aws_dynamodb_table.items.arn}/index/*"
        ]
      },
      {
        Effect = "Allow"
        Action = [
          "xray:PutTraceSegments",
          "xray:PutTelemetryRecords"
        ]
        Resource = "*"
      }
    ]
  })
}

# Package Lambda function
data "archive_file" "api_lambda_zip" {
  type        = "zip"
  source_dir  = "${path.module}/src/api"
  output_path = "${path.module}/builds/api-function.zip"
}

# Lambda Function
resource "aws_lambda_function" "api_function" {
  filename         = data.archive_file.api_lambda_zip.output_path
  function_name    = "${var.environment}-api-function"
  role            = aws_iam_role.lambda_role.arn
  handler         = "index.handler"
  source_code_hash = data.archive_file.api_lambda_zip.output_base64sha256
  runtime         = "nodejs18.x"
  architectures   = ["arm64"]
  memory_size     = 256
  timeout         = 30

  environment {
    variables = {
      TABLE_NAME      = aws_dynamodb_table.items.name
      ENVIRONMENT     = var.environment
      POWERTOOLS_SERVICE_NAME = "api"
    }
  }

  tracing_config {
    mode = "Active"
  }

  depends_on = [
    aws_iam_role_policy.lambda_policy,
    aws_cloudwatch_log_group.api_function_logs
  ]
}

# CloudWatch Log Group
resource "aws_cloudwatch_log_group" "api_function_logs" {
  name              = "/aws/lambda/${var.environment}-api-function"
  retention_in_days = 14
}

# API Gateway
resource "aws_api_gateway_rest_api" "main" {
  name        = "${var.environment}-api"
  description = "Main API for ${var.environment} environment"

  endpoint_configuration {
    types = ["REGIONAL"]
  }
}

# API Gateway Resource
resource "aws_api_gateway_resource" "items" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  parent_id   = aws_api_gateway_rest_api.main.root_resource_id
  path_part   = "items"
}

resource "aws_api_gateway_resource" "item" {
  rest_api_id = aws_api_gateway_rest_api.main.id
  parent_id   = aws_api_gateway_resource.items.id
  path_part   = "{id}"
}

# API Gateway Methods
resource "aws_api_gateway_method" "get_items" {
  rest_api_id   = aws_api_gateway_rest_api.main.id
  resource_id   = aws_api_gateway_resource.items.id
  http_method   = "GET"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "get_items" {
  rest_api_id             = aws_api_gateway_rest_api.main.id
  resource_id             = aws_api_gateway_resource.items.id
  http_method             = aws_api_gateway_method.get_items.http_method
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.api_function.invoke_arn
}

# Lambda Permission for API Gateway
resource "aws_lambda_permission" "api_gateway" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.api_function.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_api_gateway_rest_api.main.execution_arn}/*/*"
}

# API Gateway Deployment
resource "aws_api_gateway_deployment" "main" {
  depends_on = [
    aws_api_gateway_integration.get_items
  ]

  rest_api_id = aws_api_gateway_rest_api.main.id
  stage_name  = var.environment

  lifecycle {
    create_before_destroy = true
  }
}

# Outputs
output "api_gateway_url" {
  description = "API Gateway URL"
  value       = "${aws_api_gateway_deployment.main.invoke_url}"
}

output "lambda_function_name" {
  description = "Lambda function name"
  value       = aws_lambda_function.api_function.function_name
}

output "dynamodb_table_name" {
  description = "DynamoDB table name"
  value       = aws_dynamodb_table.items.name
}

Performance Optimization

Cold Start Mitigation

Cold starts are one of the main challenges with Lambda. Here are the strategies to mitigate them:

Provisioned Concurrency

# Configure provisioned concurrency
aws lambda put-provisioned-concurrency-config \
  --function-name my-critical-function \
  --qualifier prod \
  --provisioned-concurrent-executions 10

# Auto-scaling for provisioned concurrency
aws application-autoscaling register-scalable-target \
  --service-namespace lambda \
  --resource-id function:my-function:prod \
  --scalable-dimension lambda:function:ProvisionedConcurrency \
  --min-capacity 5 \
  --max-capacity 50

aws application-autoscaling put-scaling-policy \
  --service-namespace lambda \
  --resource-id function:my-function:prod \
  --scalable-dimension lambda:function:ProvisionedConcurrency \
  --policy-name target-tracking-policy \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
    "TargetValue": 0.7,
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
    },
    "ScaleOutCooldown": 300,
    "ScaleInCooldown": 300
  }'
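
The same setup can be done programmatically. A minimal boto3 sketch (the function name and alias are placeholders; the qualifier must be a published version or alias):

# provisioned_concurrency.py - boto3 equivalent of the CLI call above
import boto3

lambda_client = boto3.client('lambda')

response = lambda_client.put_provisioned_concurrency_config(
    FunctionName='my-critical-function',
    Qualifier='prod',
    ProvisionedConcurrentExecutions=10
)
print(response['Status'])  # IN_PROGRESS until the environments are warm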

Code Optimization

// connection-optimization.js - Connection optimization
const AWS = require('aws-sdk');
const mysql = require('mysql2/promise');

// Global connections (reused across warm invocations)
const dynamodb = new AWS.DynamoDB.DocumentClient({
  maxRetries: 3,
  retryDelayOptions: { base: 300 }
});

let dbConnection = null;

// Reusable HTTP connection pool
const https = require('https');
const agent = new https.Agent({
  keepAlive: true,
  maxSockets: 50,
  maxFreeSockets: 10,
  timeout: 60000,
  freeSocketTimeout: 30000
});

const httpClient = require('axios').create({
  httpsAgent: agent,
  timeout: 30000
});

async function getDbConnection() {
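  // NOTE: connection._closing is a private mysql2 field; this is a pragmatic
  // staleness check for connections reused across warm invocations.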
  if (dbConnection && !dbConnection.connection._closing) {
    return dbConnection;
  }
  
  console.log('Creating new database connection');
  dbConnection = await mysql.createConnection({
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    charset: 'utf8mb4',
    timezone: '+00:00',
    connectTimeout: 60000 // mysql2 ignores mysql's legacy acquireTimeout/reconnect options
  });
  
  return dbConnection;
}

exports.handler = async (event, context) => {
  // Tell Lambda not to wait for an empty event loop before freezing the container
  context.callbackWaitsForEmptyEventLoop = false;
  
  try {
    const startTime = Date.now();
    
    // Process the event
    const result = await processEvent(event);
    
    const duration = Date.now() - startTime;
    console.log(`Processing completed in ${duration}ms`);
    
    return {
      statusCode: 200,
      body: JSON.stringify({
        success: true,
        duration,
        result
      })
    };
  } catch (error) {
    console.error('Error processing event:', error);
    return {
      statusCode: 500,
      body: JSON.stringify({
        success: false,
        error: error.message
      })
    };
  }
};

async function processEvent(event) {
  const promises = [];
  
  // DynamoDB operation
  promises.push(dynamodb.get({
    TableName: process.env.TABLE_NAME,
    Key: { id: event.id }
  }).promise());
  
  // Database operation
  promises.push(queryDatabase(event.query));
  
  // External HTTP call
  promises.push(httpClient.get('https://api.example.com/data'));
  
  // Run the operations in parallel
  const [dynamoResult, dbResult, httpResult] = await Promise.all(promises);
  
  return {
    dynamoData: dynamoResult.Item,
    dbData: dbResult,
    externalData: httpResult.data
  };
}

async function queryDatabase(query) {
  const db = await getDbConnection();
  const [rows] = await db.execute(query.sql, query.params);
  return rows;
}

Memory and CPU Optimization

# performance_benchmark.py - Performance benchmark
import time
import json
import boto3
import concurrent.futures
from decimal import Decimal

def lambda_handler(event, context):
    """
    Benchmark function for tuning the memory configuration
    """
    memory_mb = int(context.memory_limit_in_mb)  # cast defensively; some runtimes expose this as a string
    start_time = time.time()
    
    # CPU-intensive test
    computation_result = cpu_intensive_task()
    computation_time = time.time() - start_time
    
    # I/O test
    io_start = time.time()
    io_result = io_intensive_task()
    io_time = time.time() - io_start
    
    total_time = time.time() - start_time
    
    # Estimate cost (x86 pricing; arm64/Graviton2 is ~20% cheaper)
    gb_seconds = (memory_mb / 1024) * total_time
    estimated_cost = gb_seconds * 0.0000166667
    
    return {
        'statusCode': 200,
        'body': json.dumps({
            'memory_mb': memory_mb,
            'computation_time': computation_time,
            'io_time': io_time,
            'total_time': total_time,
            'estimated_cost_usd': float(Decimal(str(estimated_cost)).quantize(Decimal('0.000001'))),
            'cost_per_second': float(Decimal(str(estimated_cost / total_time)).quantize(Decimal('0.000001'))),
            'computation_result': computation_result[:100],  # Truncate for the response
            'io_operations': len(io_result)
        })
    }

def cpu_intensive_task():
    """Tarea computacionalmente intensiva"""
    result = []
    for i in range(100000):
        # Complex math operations
        value = sum(x**2 + x**0.5 for x in range(100))
        result.append(value)
    return result

def io_intensive_task():
    """Tarea con operaciones I/O intensivas"""
    s3_client = boto3.client('s3')
    results = []
    
    # S3 operations in parallel
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
        futures = []
        
        # Simulate multiple S3 operations
        for i in range(20):
            future = executor.submit(list_bucket_contents, s3_client, 'my-test-bucket')
            futures.append(future)
        
        for future in concurrent.futures.as_completed(futures):
            try:
                result = future.result(timeout=30)
                results.extend(result)
            except Exception as e:
                print(f"Error in I/O operation: {e}")
    
    return results

def list_bucket_contents(s3_client, bucket_name):
    """Listar contenido de bucket S3"""
    try:
        response = s3_client.list_objects_v2(
            Bucket=bucket_name,
            MaxKeys=100
        )
        return response.get('Contents', [])
    except Exception:
        return []
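
A practical way to use this benchmark is to sweep a few memory sizes and compare latency against cost (the open-source AWS Lambda Power Tuning project automates exactly this). A minimal sketch; the function name is a placeholder, and each configuration update must complete before invoking:

# memory_sweep.py - run the benchmark at several memory sizes (sketch)
import json
import boto3

lambda_client = boto3.client('lambda')
FUNCTION = 'performance-benchmark'  # placeholder name

for memory_mb in (128, 256, 512, 1024, 2048):
    lambda_client.update_function_configuration(
        FunctionName=FUNCTION, MemorySize=memory_mb
    )
    # Block until the new configuration is active
    lambda_client.get_waiter('function_updated').wait(FunctionName=FUNCTION)

    result = lambda_client.invoke(FunctionName=FUNCTION, Payload=b'{}')
    body = json.loads(json.load(result['Payload'])['body'])
    print(memory_mb, body['total_time'], body['estimated_cost_usd'])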

Step Functions for Orchestration

AWS Step Functions lets you compose Lambda functions into resilient workflows. The state machine below orchestrates an order-processing flow with retries, parallel inventory checks, and explicit failure states:

{
  "Comment": "Flujo de procesamiento de pedidos con manejo de errores",
  "StartAt": "ValidateOrder",
  "States": {
    "ValidateOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ValidateOrder",
      "Parameters": {
        "orderId.$": "$.orderId",
        "customerId.$": "$.customerId"
      },
      "Retry": [
        {
          "ErrorEquals": ["Lambda.ServiceException", "Lambda.AWSLambdaException"],
          "IntervalSeconds": 2,
          "MaxAttempts": 3,
          "BackoffRate": 2.0
        }
      ],
      "Catch": [
        {
          "ErrorEquals": ["States.TaskFailed"],
          "Next": "OrderValidationFailed",
          "ResultPath": "$.error"
        }
      ],
      "Next": "CheckInventory"
    },
    
    "CheckInventory": {
      "Type": "Parallel",
      "Branches": [
        {
          "StartAt": "CheckProductAvailability",
          "States": {
            "CheckProductAvailability": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:CheckProductAvailability",
              "End": true
            }
          }
        },
        {
          "StartAt": "CheckWarehouseCapacity",
          "States": {
            "CheckWarehouseCapacity": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:us-east-1:123456789012:function:CheckWarehouseCapacity",
              "End": true
            }
          }
        }
      ],
      "Next": "EvaluateInventoryResults",
      "Catch": [
        {
          "ErrorEquals": ["States.ALL"],
          "Next": "InventoryCheckFailed"
        }
      ]
    },
    
    "EvaluateInventoryResults": {
      "Type": "Choice",
      "Choices": [
        {
          "And": [
            {
              "Variable": "$[0].available",
              "BooleanEquals": true
            },
            {
              "Variable": "$[1].capacity_available",
              "BooleanEquals": true
            }
          ],
          "Next": "ProcessPayment"
        }
      ],
      "Default": "HandleInventoryShortage"
    },
    
    "ProcessPayment": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessPayment",
      "Parameters": {
        "orderId.$": "$$.Execution.Input.orderId",
        "amount.$": "$$.Execution.Input.amount",
        "paymentMethod.$": "$$.Execution.Input.paymentMethod"
      },
      "Retry": [
        {
          "ErrorEquals": ["PaymentProcessingException"],
          "IntervalSeconds": 5,
          "MaxAttempts": 3,
          "BackoffRate": 2.0
        }
      ],
      "Catch": [
        {
          "ErrorEquals": ["PaymentDeclinedException"],
          "Next": "PaymentDeclined"
        },
        {
          "ErrorEquals": ["States.ALL"],
          "Next": "PaymentProcessingFailed"
        }
      ],
      "Next": "ReserveInventory"
    },
    
    "ReserveInventory": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ReserveInventory",
      "Next": "CreateShippingLabel"
    },
    
    "CreateShippingLabel": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:CreateShippingLabel",
      "Next": "SendOrderConfirmation"
    },
    
    "SendOrderConfirmation": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:SendOrderConfirmation",
      "Next": "OrderProcessingComplete"
    },
    
    "OrderProcessingComplete": {
      "Type": "Succeed",
      "Result": {
        "status": "completed",
        "message": "Order processed successfully"
      }
    },
    
    "HandleInventoryShortage": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:HandleInventoryShortage",
      "Next": "InventoryShortageNotification"
    },
    
    "InventoryShortageNotification": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:SendNotification",
      "Parameters": {
        "message": "Inventory shortage detected for order",
        "orderId.$": "$$.Execution.Input.orderId"
      },
      "End": true
    },
    
    "OrderValidationFailed": {
      "Type": "Fail",
      "Cause": "Order validation failed"
    },
    
    "PaymentDeclined": {
      "Type": "Fail",
      "Cause": "Payment was declined"
    },
    
    "PaymentProcessingFailed": {
      "Type": "Fail",
      "Cause": "Payment processing encountered an error"
    },
    
    "InventoryCheckFailed": {
      "Type": "Fail",
      "Cause": "Inventory check failed"
    }
  }
}
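
Executions are started through the Step Functions API. A boto3 sketch (the state machine ARN is a placeholder):

# start_order_flow.py - kick off the order workflow (ARN is a placeholder)
import json
import boto3

sfn = boto3.client('stepfunctions')

response = sfn.start_execution(
    stateMachineArn='arn:aws:states:us-east-1:123456789012:stateMachine:OrderProcessing',
    input=json.dumps({
        'orderId': 'order-123',
        'customerId': 'cust-456',
        'amount': 99.95,
        'paymentMethod': 'card'
    })
)
print(response['executionArn'])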

Monitoring and Observability

CloudWatch Metrics and Alarms

# monitoring_setup.py - Monitoring setup
import boto3
import json

def create_lambda_monitoring(function_name, sns_topic_arn):
    """
    Create comprehensive metrics and alarms for a Lambda function
    """
    cloudwatch = boto3.client('cloudwatch')
    
    # Alarm configurations
    alarms = [
        {
            'AlarmName': f'{function_name}-ErrorRate',
            'AlarmDescription': f'Error rate for {function_name}',
            'MetricName': 'Errors',
            'Statistic': 'Sum',
            'Threshold': 1,
            'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
            'EvaluationPeriods': 2,
            'Period': 60
        },
        {
            'AlarmName': f'{function_name}-Duration',
            'AlarmDescription': f'Duration for {function_name}',
            'MetricName': 'Duration',
            'Statistic': 'Average',
            'Threshold': 10000,  # 10 seconds
            'ComparisonOperator': 'GreaterThanThreshold',
            'EvaluationPeriods': 3,
            'Period': 300
        },
        {
            'AlarmName': f'{function_name}-Throttles',
            'AlarmDescription': f'Throttles for {function_name}',
            'MetricName': 'Throttles',
            'Statistic': 'Sum',
            'Threshold': 1,
            'ComparisonOperator': 'GreaterThanOrEqualToThreshold',
            'EvaluationPeriods': 1,
            'Period': 60
        },
        {
            'AlarmName': f'{function_name}-ConcurrentExecutions',
            'AlarmDescription': f'Concurrent executions for {function_name}',
            'MetricName': 'ConcurrentExecutions',
            'Statistic': 'Maximum',
            'Threshold': 800,
            'ComparisonOperator': 'GreaterThanThreshold',
            'EvaluationPeriods': 2,
            'Period': 60
        }
    ]
    
    for alarm_config in alarms:
        cloudwatch.put_metric_alarm(
            AlarmName=alarm_config['AlarmName'],
            AlarmDescription=alarm_config['AlarmDescription'],
            ActionsEnabled=True,
            AlarmActions=[sns_topic_arn],
            MetricName=alarm_config['MetricName'],
            Namespace='AWS/Lambda',
            Statistic=alarm_config['Statistic'],
            Dimensions=[
                {
                    'Name': 'FunctionName',
                    'Value': function_name
                }
            ],
            Period=alarm_config['Period'],
            EvaluationPeriods=alarm_config['EvaluationPeriods'],
            Threshold=alarm_config['Threshold'],
            ComparisonOperator=alarm_config['ComparisonOperator'],
            TreatMissingData='notBreaching'
        )
    
    print(f"Created {len(alarms)} alarms for {function_name}")

def create_custom_dashboard(functions_list, dashboard_name):
    """
    Create a custom dashboard for multiple Lambda functions
    """
    cloudwatch = boto3.client('cloudwatch')
    
    widgets = []
    
    # Invocations widget
    widgets.append({
        "type": "metric",
        "x": 0,
        "y": 0,
        "width": 12,
        "height": 6,
        "properties": {
            "metrics": [
                ["AWS/Lambda", "Invocations", "FunctionName", func] 
                for func in functions_list
            ],
            "period": 300,
            "stat": "Sum",
            "region": "us-east-1",
            "title": "Lambda Invocations"
        }
    })
    
    # Errors widget
    widgets.append({
        "type": "metric",
        "x": 12,
        "y": 0,
        "width": 12,
        "height": 6,
        "properties": {
            "metrics": [
                ["AWS/Lambda", "Errors", "FunctionName", func] 
                for func in functions_list
            ],
            "period": 300,
            "stat": "Sum",
            "region": "us-east-1",
            "title": "Lambda Errors"
        }
    })
    
    # Duration widget
    widgets.append({
        "type": "metric",
        "x": 0,
        "y": 6,
        "width": 12,
        "height": 6,
        "properties": {
            "metrics": [
                ["AWS/Lambda", "Duration", "FunctionName", func] 
                for func in functions_list
            ],
            "period": 300,
            "stat": "Average",
            "region": "us-east-1",
            "title": "Lambda Duration (Average)"
        }
    })
    
    # Estimated-cost widget
    widgets.append({
        "type": "metric",
        "x": 12,
        "y": 6,
        "width": 12,
        "height": 6,
        "properties": {
            "metrics": [
                ["AWS/Lambda", "Duration", "FunctionName", func, {"stat": "Sum"}] 
                for func in functions_list
            ],
            "period": 3600,
            "region": "us-east-1",
            "title": "Estimated Costs (Duration Sum)",
            "yAxis": {
                "left": {
                    "label": "Duration (ms-hours)"
                }
            }
        }
    })
    
    dashboard_body = {
        "widgets": widgets
    }
    
    cloudwatch.put_dashboard(
        DashboardName=dashboard_name,
        DashboardBody=json.dumps(dashboard_body)
    )
    
    print(f"Created dashboard: {dashboard_name}")

AWS X-Ray Tracing

// xray-tracing.js - Full X-Ray tracing implementation
const AWSXRay = require('aws-xray-sdk-core');
const AWS = AWSXRay.captureAWS(require('aws-sdk'));

// Enable automatic promise capture so async context propagates
AWSXRay.capturePromise();

const dynamodb = new AWS.DynamoDB.DocumentClient();
const s3 = new AWS.S3();

exports.handler = async (event, context) => {
    console.log('Function invoked with event:', JSON.stringify(event));
    
    // Get the root segment
    const segment = AWSXRay.getSegment();
    
    // Add annotations (indexed and searchable in X-Ray)
    segment.addAnnotation('Environment', process.env.ENVIRONMENT);
    segment.addAnnotation('FunctionVersion', context.functionVersion);
    segment.addAnnotation('EventSource', event.source || 'unknown');
    
    // Add metadata (not indexed, but visible in traces)
    segment.addMetadata('event', event);
    segment.addMetadata('context', {
        functionName: context.functionName,
        functionVersion: context.functionVersion,
        memoryLimitInMB: context.memoryLimitInMB,
        remainingTimeInMillis: context.getRemainingTimeInMillis()
    });
    
    try {
        // Processing with custom subsegments
        const result = await processWithTracing(event);
        
        segment.addAnnotation('Success', true);
        segment.addMetadata('result', result);
        
        return {
            statusCode: 200,
            body: JSON.stringify(result)
        };
    } catch (error) {
        console.error('Error processing event:', error);
        
        // Record the error in X-Ray
        segment.addError(error);
        segment.addAnnotation('Success', false);
        
        return {
            statusCode: 500,
            body: JSON.stringify({
                error: error.message
            })
        };
    }
};

async function processWithTracing(event) {
    // Subsegment for validation
    const validationSegment = AWSXRay.getSegment().addNewSubsegment('Validation');
    
    try {
        const validationResult = validateEvent(event);
        validationSegment.addAnnotation('Valid', validationResult.isValid);
        validationSegment.addMetadata('validationDetails', validationResult);
        
        if (!validationResult.isValid) {
            throw new Error(`Validation failed: ${validationResult.errors.join(', ')}`);
        }
        
        validationSegment.close();
    } catch (error) {
        validationSegment.addError(error);
        validationSegment.close();
        throw error;
    }
    
    // Subsegment for database operations
    const databaseSegment = AWSXRay.getSegment().addNewSubsegment('DatabaseOperations');
    let dbResults;
    
    try {
        dbResults = await Promise.all([
            getUserData(event.userId),
            getPreferences(event.userId)
        ]);
        
        databaseSegment.addAnnotation('RecordsRetrieved', dbResults.length);
        databaseSegment.addMetadata('dbResults', dbResults);
        databaseSegment.close();
    } catch (error) {
        databaseSegment.addError(error);
        databaseSegment.close();
        throw error;
    }
    
    // Subsegment for business logic
    const businessLogicSegment = AWSXRay.getSegment().addNewSubsegment('BusinessLogic');
    let processedData;
    
    try {
        processedData = await processBusinessLogic(event, dbResults);
        
        businessLogicSegment.addAnnotation('ItemsProcessed', processedData.items.length);
        businessLogicSegment.addMetadata('processedData', processedData);
        businessLogicSegment.close();
    } catch (error) {
        businessLogicSegment.addError(error);
        businessLogicSegment.close();
        throw error;
    }
    
    // Subsegment for storage
    const storageSegment = AWSXRay.getSegment().addNewSubsegment('Storage');
    
    try {
        await storeResults(processedData);
        
        storageSegment.addAnnotation('StorageSuccess', true);
        storageSegment.close();
    } catch (error) {
        storageSegment.addError(error);
        storageSegment.close();
        throw error;
    }
    
    return {
        success: true,
        itemsProcessed: processedData.items.length,
        timestamp: new Date().toISOString()
    };
}

function validateEvent(event) {
    const errors = [];
    
    if (!event.userId) {
        errors.push('userId is required');
    }
    
    if (!event.action) {
        errors.push('action is required');
    }
    
    return {
        isValid: errors.length === 0,
        errors
    };
}

async function getUserData(userId) {
    // X-Ray automatically captures this DynamoDB call
    const params = {
        TableName: process.env.USERS_TABLE,
        Key: { userId }
    };
    
    const result = await dynamodb.get(params).promise();
    return result.Item;
}

async function getPreferences(userId) {
    const params = {
        TableName: process.env.PREFERENCES_TABLE,
        Key: { userId }
    };
    
    const result = await dynamodb.get(params).promise();
    return result.Item || {};
}

async function processBusinessLogic(event, dbResults) {
    const [userData, preferences] = dbResults;
    
    // Simulate complex processing
    const items = event.items || [];
    const processedItems = items.map(item => ({
        ...item,
        processed: true,
        timestamp: Date.now(),
        userId: event.userId,
        preferences: preferences
    }));
    
    return {
        items: processedItems,
        userData: userData
    };
}

async function storeResults(data) {
    const params = {
        TableName: process.env.RESULTS_TABLE,
        Item: {
            id: generateId(),
            userId: data.userData.userId,
            results: data.items,
            createdAt: new Date().toISOString()
        }
    };
    
    await dynamodb.put(params).promise();
    
    // Also store a backup copy in S3
    await s3.putObject({
        Bucket: process.env.BACKUP_BUCKET,
        Key: `results/${params.Item.id}.json`,
        Body: JSON.stringify(data),
        ContentType: 'application/json'
    }).promise();
}

function generateId() {
    return Date.now().toString(36) + Math.random().toString(36).slice(2);
}

Structured Logging

// structured-logging.js - Structured logging with AWS Lambda Powertools
const { Logger } = require('@aws-lambda-powertools/logger');
const { Metrics, MetricUnits } = require('@aws-lambda-powertools/metrics');
const { Tracer } = require('@aws-lambda-powertools/tracer');

// Initialize Powertools
const logger = new Logger({
    serviceName: 'order-processing',
    logLevel: process.env.LOG_LEVEL || 'INFO'
});

const metrics = new Metrics({
    namespace: 'OrderProcessing',
    serviceName: 'order-processing',
    defaultDimensions: {
        environment: process.env.ENVIRONMENT
    }
});

const tracer = new Tracer({ 
    serviceName: 'order-processing',
    captureHTTPsRequests: true
});

exports.handler = async (event, context) => {
    // Add persistent context information to every log line
    logger.addPersistentLogAttributes({
        requestId: context.awsRequestId,
        functionName: context.functionName,
        functionVersion: context.functionVersion
    });
    
    logger.info('Function invocation started', { event });
    
    try {
        const result = await processOrder(event);
        
        // Success metrics
        metrics.addMetric('OrderProcessed', MetricUnits.Count, 1);
        metrics.addMetric('ProcessingDuration', MetricUnits.Milliseconds, 
            Date.now() - event.timestamp);
        
        logger.info('Order processing completed', { 
            orderId: event.orderId,
            result 
        });
        
        return {
            statusCode: 200,
            body: JSON.stringify(result)
        };
    } catch (error) {
        // Error metrics
        metrics.addMetric('OrderProcessingError', MetricUnits.Count, 1);
        
        logger.error('Order processing failed', {
            orderId: event.orderId,
            error: error.message,
            stack: error.stack
        });
        
        return {
            statusCode: 500,
            body: JSON.stringify({
                error: 'Order processing failed'
            })
        };
    } finally {
        // Publish the buffered metrics
        metrics.publishStoredMetrics();
    }
};

const processOrder = tracer.captureAsyncFunc('processOrder', async (event) => {
    const subsegment = tracer.getSegment().addNewSubsegment('orderValidation');
    
    try {
        // Validate the order
        const validation = validateOrder(event.order);
        if (!validation.isValid) {
            throw new Error(`Invalid order: ${validation.errors.join(', ')}`);
        }
        
        logger.info('Order validation successful', {
            orderId: event.orderId,
            validation
        });
        
        subsegment.addAnnotation('validationSuccess', true);
        subsegment.close();
    } catch (error) {
        subsegment.addError(error);
        subsegment.close();
        throw error;
    }
    
    // Process the payment
    const paymentResult = await tracer.captureAsyncFunc('processPayment', 
        async () => {
            logger.info('Processing payment', {
                orderId: event.orderId,
                amount: event.order.total
            });
            
            // Simulate payment processing
            await new Promise(resolve => setTimeout(resolve, 500));
            
            const success = Math.random() > 0.1; // 90% success rate
            
            if (!success) {
                metrics.addMetric('PaymentFailure', MetricUnits.Count, 1);
                throw new Error('Payment processing failed');
            }
            
            metrics.addMetric('PaymentSuccess', MetricUnits.Count, 1);
            metrics.addMetric('PaymentAmount', MetricUnits.None, event.order.total);
            
            return {
                transactionId: generateTransactionId(),
                amount: event.order.total,
                status: 'completed'
            };
        }
    );
    
    logger.info('Payment processed successfully', {
        orderId: event.orderId,
        transactionId: paymentResult.transactionId
    });
    
    // Update the inventory
    const inventoryResult = await tracer.captureAsyncFunc('updateInventory',
        async () => {
            logger.info('Updating inventory', {
                orderId: event.orderId,
                items: event.order.items
            });
            
            // Simulate the inventory update
            const updates = event.order.items.map(item => ({
                productId: item.productId,
                quantity: item.quantity,
                reserved: true
            }));
            
            metrics.addMetric('InventoryUpdated', MetricUnits.Count, updates.length);
            
            return { updates };
        }
    );
    
    return {
        orderId: event.orderId,
        status: 'completed',
        payment: paymentResult,
        inventory: inventoryResult,
        processedAt: new Date().toISOString()
    };
});

function validateOrder(order) {
    const errors = [];
    
    if (!order.customerId) {
        errors.push('Customer ID is required');
    }
    
    if (!order.items || order.items.length === 0) {
        errors.push('Order must contain at least one item');
    }
    
    if (!order.total || order.total <= 0) {
        errors.push('Order total must be greater than zero');
    }
    
    logger.debug('Order validation completed', {
        isValid: errors.length === 0,
        errors: errors
    });
    
    return {
        isValid: errors.length === 0,
        errors
    };
}

function generateTransactionId() {
    return 'txn_' + Date.now() + '_' + Math.random().toString(36).slice(2, 11);
}

Security and Best Practices

IAM Roles and Policies

# security-iam-template.yaml - Complete IAM security template
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Comprehensive IAM security template for Lambda functions'

Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues: [dev, test, prod]

Resources:
  # Base Lambda role with least-privilege permissions
  BaseLambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub '${Environment}-lambda-base-execution-role'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
            Condition:
              StringEquals:
                'aws:RequestedRegion': !Ref 'AWS::Region'
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Tags:
        - Key: Environment
          Value: !Ref Environment

  # Policy for DynamoDB access with specific actions
  DynamoDBAccessPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: !Sub '${Environment}-lambda-dynamodb-access'
      Description: 'DynamoDB access policy for Lambda functions'
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: DynamoDBReadAccess
            Effect: Allow
            Action:
              - dynamodb:GetItem
              - dynamodb:Query
              - dynamodb:BatchGetItem
            Resource:
              - !Sub 'arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${Environment}-*'
              - !Sub 'arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${Environment}-*/index/*'
            Condition:
              StringEquals:
                'dynamodb:LeadingKeys': ['${aws:userid}']
          
          - Sid: DynamoDBWriteAccess
            Effect: Allow
            Action:
              - dynamodb:PutItem
              - dynamodb:UpdateItem
              - dynamodb:DeleteItem
            Resource:
              - !Sub 'arn:aws:dynamodb:${AWS::Region}:${AWS::AccountId}:table/${Environment}-*'
            Condition:
              StringEquals:
                'dynamodb:LeadingKeys': ['${aws:userid}']
              ForAllValues:StringLike:
                'dynamodb:Attributes':
                  - 'userId'
                  - 'createdAt'
                  - 'updatedAt'
                  - 'data'

  # Restricted S3 access policy
  S3AccessPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: !Sub '${Environment}-lambda-s3-access'
      Description: 'Restricted S3 access policy for Lambda functions'
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: S3BucketAccess
            Effect: Allow
            Action:
              - s3:ListBucket
            Resource: !Sub 'arn:aws:s3:::${Environment}-lambda-*'
            Condition:
              StringLike:
                's3:prefix': 
                  - 'uploads/*'
                  - 'processed/*'
          
          - Sid: S3ObjectReadAccess
            Effect: Allow
            Action:
              - s3:GetObject
              - s3:GetObjectVersion
            Resource: !Sub 'arn:aws:s3:::${Environment}-lambda-*/uploads/*'
            Condition:
              StringEquals:
                's3:ExistingObjectTag/Environment': !Ref Environment
          
          - Sid: S3ObjectWriteAccess
            Effect: Allow
            Action:
              - s3:PutObject
              - s3:PutObjectAcl
              - s3:PutObjectTagging
            Resource: !Sub 'arn:aws:s3:::${Environment}-lambda-*/processed/*'
            Condition:
              StringEquals:
                's3:x-amz-server-side-encryption': 'AES256'

  # Secrets Manager access policy
  SecretsManagerPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: !Sub '${Environment}-lambda-secrets-access'
      Description: 'Secrets Manager access policy for Lambda functions'
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: SecretsManagerAccess
            Effect: Allow
            Action:
              - secretsmanager:GetSecretValue
            Resource: !Sub 'arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:${Environment}/*'
            Condition:
              StringEquals:
                'secretsmanager:ResourceTag/Environment': !Ref Environment

  # Dedicated role for API functions
  ApiLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub '${Environment}-api-lambda-role'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
        - !Ref DynamoDBAccessPolicy
        - !Ref SecretsManagerPolicy
      Tags:
        - Key: Environment
          Value: !Ref Environment

  # Dedicated role for processing functions
  ProcessingLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub '${Environment}-processing-lambda-role'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
        - !Ref S3AccessPolicy
        - !Ref DynamoDBAccessPolicy
      Tags:
        - Key: Environment
          Value: !Ref Environment

Outputs:
  ApiLambdaRoleArn:
    Description: 'ARN of the API Lambda execution role'
    Value: !GetAtt ApiLambdaRole.Arn
    Export:
      Name: !Sub '${Environment}-api-lambda-role-arn'

  ProcessingLambdaRoleArn:
    Description: 'ARN of the Processing Lambda execution role'
    Value: !GetAtt ProcessingLambdaRole.Arn
    Export:
      Name: !Sub '${Environment}-processing-lambda-role-arn'
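
Because the template assigns explicit RoleNames, any deployment must acknowledge the CAPABILITY_NAMED_IAM capability. A minimal deployment sketch with boto3 (the stack name is an assumption):

# deploy_iam_stack.py - sketch: deploy the IAM template with boto3
import boto3

cloudformation = boto3.client('cloudformation')

with open('security-iam-template.yaml') as f:
    template_body = f.read()

# CAPABILITY_NAMED_IAM is required because the template sets RoleName explicitly
cloudformation.create_stack(
    StackName='dev-lambda-iam-security',  # hypothetical stack name
    TemplateBody=template_body,
    Parameters=[{'ParameterKey': 'Environment', 'ParameterValue': 'dev'}],
    Capabilities=['CAPABILITY_NAMED_IAM']
)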

Secure Secrets Management

# secrets_management.py - Secure secrets management
import boto3
import json
import os
from botocore.exceptions import ClientError
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

class SecretsManager:
    def __init__(self):
        self.secrets_client = boto3.client('secretsmanager')
        self.parameter_client = boto3.client('ssm')
        self._secret_cache = {}
        self._parameter_cache = {}
    
    def get_secret(self, secret_name, version_stage='AWSCURRENT'):
        """
        Retrieve a secret from AWS Secrets Manager, with caching
        """
        cache_key = f"{secret_name}:{version_stage}"
        
        if cache_key in self._secret_cache:
            return self._secret_cache[cache_key]
        
        try:
            response = self.secrets_client.get_secret_value(
                SecretId=secret_name,
                VersionStage=version_stage
            )
            
            # Parse JSON when applicable
            if 'SecretString' in response:
                secret = response['SecretString']
                try:
                    secret = json.loads(secret)
                except json.JSONDecodeError:
                    pass  # Keep as a string if it is not valid JSON
            else:
                secret = response['SecretBinary']
            
            # Cache the secret
            self._secret_cache[cache_key] = secret
            logger.info(f"Secret {secret_name} retrieved successfully")
            
            return secret
        
        except ClientError as e:
            logger.error(f"Error retrieving secret {secret_name}: {e}")
            raise

    def get_parameter(self, parameter_name, with_decryption=True):
        """
        Retrieve a parameter from AWS Systems Manager Parameter Store
        """
        cache_key = f"{parameter_name}:{with_decryption}"
        
        if cache_key in self._parameter_cache:
            return self._parameter_cache[cache_key]
        
        try:
            response = self.parameter_client.get_parameter(
                Name=parameter_name,
                WithDecryption=with_decryption
            )
            
            value = response['Parameter']['Value']
            
            # Cache the parameter
            self._parameter_cache[cache_key] = value
            logger.info(f"Parameter {parameter_name} retrieved successfully")
            
            return value
        
        except ClientError as e:
            logger.error(f"Error retrieving parameter {parameter_name}: {e}")
            raise

    def get_database_config(self):
        """
        Retrieve the full database configuration
        """
        try:
            db_secret = self.get_secret(f"{os.environ['ENVIRONMENT']}/database")
            
            return {
                'host': db_secret['host'],
                'port': db_secret['port'],
                'database': db_secret['dbname'],
                'username': db_secret['username'],
                'password': db_secret['password'],
                'ssl_mode': db_secret.get('ssl_mode', 'require')
            }
        except Exception as e:
            logger.error(f"Error getting database config: {e}")
            raise

    def get_api_keys(self):
        """
        Retrieve external API keys
        """
        try:
            api_secret = self.get_secret(f"{os.environ['ENVIRONMENT']}/api-keys")
            
            return {
                'payment_gateway': api_secret['payment_gateway'],
                'email_service': api_secret['email_service'],
                'analytics': api_secret.get('analytics'),
                'third_party_api': api_secret.get('third_party_api')
            }
        except Exception as e:
            logger.error(f"Error getting API keys: {e}")
            raise

# Initialize globally so the instance is reused across warm invocations
secrets_manager = SecretsManager()

def lambda_handler(event, context):
    """
    Main handler with secure secrets management
    """
    try:
        # Fetch the required configuration
        db_config = secrets_manager.get_database_config()
        api_keys = secrets_manager.get_api_keys()
        
        # Non-sensitive configuration parameters
        max_retries = int(secrets_manager.get_parameter(
            f"/{os.environ['ENVIRONMENT']}/app/max-retries",
            with_decryption=False
        ))
        
        # Process the event
        result = process_secure_event(event, db_config, api_keys, max_retries,
                                      context.aws_request_id)
        
        return {
            'statusCode': 200,
            'body': json.dumps(result)
        }
    
    except Exception as e:
        logger.error(f"Error processing event: {e}")
        return {
            'statusCode': 500,
            'body': json.dumps({'error': 'Internal server error'})
        }

def process_secure_event(event, db_config, api_keys, max_retries, request_id):
    """
    Process an event using secure configuration values
    """
    # Use the configuration values without exposing them in logs
    logger.info("Processing event with secure configurations")
    
    # Example: connect to the database
    connection = create_database_connection(db_config)
    
    # Example: call an external API
    external_data = call_external_api(api_keys['third_party_api'])
    
    return {
        'processed': True,
        'request_id': request_id,
        'external_data_count': len(external_data) if external_data else 0
    }

def create_database_connection(db_config):
    """
    Create a database connection securely
    """
    # Connection implementation (pseudo-code)
    logger.info("Creating database connection")

    # In a real implementation, use a library such as psycopg2 or pymysql
    return {
        'status': 'connected',
        'host': db_config['host'],  # OK to log the host
        'database': db_config['database']  # OK to log the database name
        # NEVER log the username/password
    }

def call_external_api(api_key):
    """
    Call an external API securely
    """
    # API call implementation (pseudo-code)
    logger.info("Calling external API")
    
    headers = {
        'Authorization': f'Bearer {api_key}',
        'Content-Type': 'application/json'
    }
    
    # In a real implementation, use requests or similar
    # NEVER log headers that contain secrets
    
    return [{'id': 1, 'data': 'sample'}]  # Sample data
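
One caveat about the class above: its caches live in memory for the lifetime of the execution environment, so a rotated secret is not picked up until the next cold start. A minimal TTL-based variant of get_secret that could be dropped into the SecretsManager class as a method (the 5-minute window is an assumption; tune it to your rotation policy):

import time

CACHE_TTL_SECONDS = 300  # assumed 5-minute window

def get_secret_with_ttl(self, secret_name, version_stage='AWSCURRENT'):
    """Like get_secret, but re-fetches once the cached entry exceeds the TTL."""
    cache_key = f"{secret_name}:{version_stage}"
    cached = self._secret_cache.get(cache_key)
    if cached and time.time() - cached['fetched_at'] < CACHE_TTL_SECONDS:
        return cached['value']

    response = self.secrets_client.get_secret_value(
        SecretId=secret_name,
        VersionStage=version_stage
    )
    value = response.get('SecretString') or response['SecretBinary']
    self._secret_cache[cache_key] = {'value': value, 'fetched_at': time.time()}
    return value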

Encryption and Network Security

# network-security.yaml - Network and encryption configuration
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Network security configuration for Lambda functions'

Parameters:
  Environment:
    Type: String
    Default: dev

Resources:
  # VPC for Lambda functions
  LambdaVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsHostnames: true
      EnableDnsSupport: true
      Tags:
        - Key: Name
          Value: !Sub '${Environment}-lambda-vpc'
        - Key: Environment
          Value: !Ref Environment

  # Private subnets
  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref LambdaVPC
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: !Select [0, !GetAZs '']
      Tags:
        - Key: Name
          Value: !Sub '${Environment}-private-subnet-1'

  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref LambdaVPC
      CidrBlock: 10.0.2.0/24
      AvailabilityZone: !Select [1, !GetAZs '']
      Tags:
        - Key: Name
          Value: !Sub '${Environment}-private-subnet-2'

  # NAT Gateway for Internet access
  NATGatewayEIP:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc

  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref LambdaVPC
      CidrBlock: 10.0.100.0/24
      AvailabilityZone: !Select [0, !GetAZs '']
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub '${Environment}-public-subnet'

  InternetGateway:
    Type: AWS::EC2::InternetGateway

  AttachGateway:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref LambdaVPC
      InternetGatewayId: !Ref InternetGateway

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref LambdaVPC

  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn: AttachGateway
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway

  PublicSubnetRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet
      RouteTableId: !Ref PublicRouteTable

  NATGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NATGatewayEIP.AllocationId
      SubnetId: !Ref PublicSubnet

  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref LambdaVPC

  PrivateRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NATGateway

  PrivateSubnetRouteTableAssociation1:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet1
      RouteTableId: !Ref PrivateRouteTable

  PrivateSubnetRouteTableAssociation2:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateRouteTable

  # Security Groups
  LambdaSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for Lambda functions
      VpcId: !Ref LambdaVPC
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
          Description: HTTPS outbound
      Tags:
        - Key: Name
          Value: !Sub '${Environment}-lambda-sg'

  DatabaseSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for RDS database
      VpcId: !Ref LambdaVPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 5432
          ToPort: 5432
          SourceSecurityGroupId: !Ref LambdaSecurityGroup
          Description: PostgreSQL from Lambda
      Tags:
        - Key: Name
          Value: !Sub '${Environment}-database-sg'

  # PostgreSQL egress defined as a standalone resource: declaring it inline
  # on LambdaSecurityGroup would create a circular reference between the two
  # security groups, and egress rules use DestinationSecurityGroupId, not
  # SourceSecurityGroupId
  LambdaToDatabaseEgress:
    Type: AWS::EC2::SecurityGroupEgress
    Properties:
      GroupId: !Ref LambdaSecurityGroup
      IpProtocol: tcp
      FromPort: 5432
      ToPort: 5432
      DestinationSecurityGroupId: !Ref DatabaseSecurityGroup
      Description: PostgreSQL to RDS

  # KMS key for Lambda encryption
  LambdaKMSKey:
    Type: AWS::KMS::Key
    Properties:
      Description: !Sub 'KMS key for Lambda function encryption in ${Environment}'
      KeyPolicy:
        Version: '2012-10-17'
        Statement:
          - Sid: Enable IAM User Permissions
            Effect: Allow
            Principal:
              AWS: !Sub 'arn:aws:iam::${AWS::AccountId}:root'
            Action: 'kms:*'
            Resource: '*'
          - Sid: Allow Lambda service
            Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action:
              - kms:Decrypt
              - kms:DescribeKey
            Resource: '*'
            Condition:
              StringEquals:
                'kms:ViaService': !Sub 'lambda.${AWS::Region}.amazonaws.com'

  LambdaKMSKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: !Sub 'alias/${Environment}-lambda-key'
      TargetKeyId: !Ref LambdaKMSKey

  # VPC endpoints to reduce traffic through the NAT Gateway
  S3VPCEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref LambdaVPC
      ServiceName: !Sub 'com.amazonaws.${AWS::Region}.s3'
      VpcEndpointType: Gateway
      RouteTableIds:
        - !Ref PrivateRouteTable

  DynamoDBVPCEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref LambdaVPC
      ServiceName: !Sub 'com.amazonaws.${AWS::Region}.dynamodb'
      VpcEndpointType: Gateway
      RouteTableIds:
        - !Ref PrivateRouteTable

  SecretsManagerVPCEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref LambdaVPC
      ServiceName: !Sub 'com.amazonaws.${AWS::Region}.secretsmanager'
      VpcEndpointType: Interface
      SubnetIds:
        - !Ref PrivateSubnet1
        - !Ref PrivateSubnet2
      SecurityGroupIds:
        - !Ref VPCEndpointSecurityGroup
      PrivateDnsEnabled: true

  VPCEndpointSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for VPC endpoints
      VpcId: !Ref LambdaVPC
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          SourceSecurityGroupId: !Ref LambdaSecurityGroup
          Description: HTTPS from Lambda functions

Outputs:
  VPCId:
    Description: 'VPC ID for Lambda functions'
    Value: !Ref LambdaVPC
    Export:
      Name: !Sub '${Environment}-lambda-vpc-id'

  PrivateSubnetIds:
    Description: 'Private subnet IDs for Lambda functions'
    Value: !Join [',', [!Ref PrivateSubnet1, !Ref PrivateSubnet2]]
    Export:
      Name: !Sub '${Environment}-lambda-private-subnets'

  LambdaSecurityGroupId:
    Description: 'Security group ID for Lambda functions'
    Value: !Ref LambdaSecurityGroup
    Export:
      Name: !Sub '${Environment}-lambda-security-group'

  KMSKeyId:
    Description: 'KMS key ID for Lambda encryption'
    Value: !Ref LambdaKMSKey
    Export:
      Name: !Sub '${Environment}-lambda-kms-key'
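
Once this stack is deployed, its exports can be wired into a function's configuration. A minimal sketch with boto3 (the function name and resource IDs are placeholders; in practice you would read them from the stack outputs):

import boto3

lambda_client = boto3.client('lambda')

# Attach an existing function to the private subnets and security group
# created by the stack (IDs shown are placeholders)
lambda_client.update_function_configuration(
    FunctionName='dev-order-processor',  # hypothetical function name
    VpcConfig={
        'SubnetIds': ['subnet-aaaa1111', 'subnet-bbbb2222'],
        'SecurityGroupIds': ['sg-cccc3333']
    }
)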

Advanced Use Cases

A Complete Event-Driven Architecture

# event_driven_architecture.py - Complete event-driven system
import json
import os
import boto3
import asyncio
from typing import Dict, List, Any
import uuid
from datetime import datetime
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# AWS clients
eventbridge = boto3.client('events')
sqs = boto3.client('sqs')
sns = boto3.client('sns')
dynamodb = boto3.resource('dynamodb')

class EventProcessor:
    def __init__(self):
        self.event_bus = os.environ.get('EVENT_BUS_NAME', 'default')
        self.dead_letter_queue = os.environ.get('DLQ_URL')
        
    async def process_event(self, event: Dict[str, Any]) -> Dict[str, Any]:
        """
        Process an event asynchronously using event-driven patterns
        """
        event_type = event.get('event_type')
        correlation_id = event.get('correlation_id', str(uuid.uuid4()))
        
        logger.info(f"Processing event {event_type} with correlation ID {correlation_id}")
        
        try:
            # Dispatch by event type
            if event_type == 'order.created':
                return await self.handle_order_created(event, correlation_id)
            elif event_type == 'payment.processed':
                return await self.handle_payment_processed(event, correlation_id)
            elif event_type == 'inventory.updated':
                return await self.handle_inventory_updated(event, correlation_id)
            else:
                raise ValueError(f"Unknown event type: {event_type}")
        
        except Exception as e:
            await self.handle_processing_error(event, correlation_id, str(e))
            raise

    async def handle_order_created(self, event: Dict, correlation_id: str) -> Dict:
        """
        Handle an order-created event
        """
        order_data = event['data']
        
        # Persist the order
        await self.save_order(order_data)
        
        # Publish derived events
        events_to_publish = [
            {
                'event_type': 'inventory.check_requested',
                'correlation_id': correlation_id,
                'data': {
                    'order_id': order_data['order_id'],
                    'items': order_data['items']
                }
            },
            {
                'event_type': 'payment.requested',
                'correlation_id': correlation_id,
                'data': {
                    'order_id': order_data['order_id'],
                    'amount': order_data['total'],
                    'customer_id': order_data['customer_id']
                }
            }
        ]
        
        for event_to_publish in events_to_publish:
            await self.publish_event(event_to_publish)
        
        return {
            'status': 'processed',
            'events_published': len(events_to_publish)
        }

    async def handle_payment_processed(self, event: Dict, correlation_id: str) -> Dict:
        """
        Handle a payment-processed event
        """
        payment_data = event['data']
        
        if payment_data['status'] == 'success':
            # Publish a payment-success event
            await self.publish_event({
                'event_type': 'order.payment_confirmed',
                'correlation_id': correlation_id,
                'data': {
                    'order_id': payment_data['order_id'],
                    'transaction_id': payment_data['transaction_id'],
                    'amount': payment_data['amount']
                }
            })
        else:
            # Publish a payment-failure event
            await self.publish_event({
                'event_type': 'order.payment_failed',
                'correlation_id': correlation_id,
                'data': {
                    'order_id': payment_data['order_id'],
                    'error': payment_data.get('error', 'Unknown error')
                }
            })
        
        return {
            'status': 'processed',
            'payment_status': payment_data['status']
        }

    async def handle_inventory_updated(self, event: Dict, correlation_id: str) -> Dict:
        """
        Handle an inventory-updated event
        """
        inventory_data = event['data']
        
        # Check whether every item is available
        all_available = all(item['available'] for item in inventory_data['items'])
        
        if all_available:
            await self.publish_event({
                'event_type': 'inventory.confirmed',
                'correlation_id': correlation_id,
                'data': {
                    'order_id': inventory_data['order_id'],
                    'items': inventory_data['items']
                }
            })
        else:
            await self.publish_event({
                'event_type': 'inventory.insufficient',
                'correlation_id': correlation_id,
                'data': {
                    'order_id': inventory_data['order_id'],
                    'unavailable_items': [
                        item for item in inventory_data['items'] 
                        if not item['available']
                    ]
                }
            })
        
        return {
            'status': 'processed',
            'inventory_available': all_available
        }

    async def publish_event(self, event_data: Dict[str, Any]):
        """
        Publish an event to EventBridge
        """
        try:
            response = eventbridge.put_events(
                Entries=[
                    {
                        'Source': 'order-processing',
                        'DetailType': event_data['event_type'],
                        'Detail': json.dumps({
                            'correlation_id': event_data['correlation_id'],
                            'timestamp': datetime.utcnow().isoformat(),
                            'data': event_data['data']
                        }),
                        'EventBusName': self.event_bus
                    }
                ]
            )
            
            logger.info(f"Published event {event_data['event_type']} with correlation ID {event_data['correlation_id']}")
            
        except Exception as e:
            logger.error(f"Failed to publish event: {e}")
            raise

    async def save_order(self, order_data: Dict[str, Any]):
        """
        Save the order to DynamoDB
        """
        table = dynamodb.Table(os.environ['ORDERS_TABLE'])
        
        try:
            table.put_item(
                Item={
                    'order_id': order_data['order_id'],
                    'customer_id': order_data['customer_id'],
                    'items': order_data['items'],
                    'total': order_data['total'],
                    'status': 'processing',
                    'created_at': datetime.utcnow().isoformat(),
                    'updated_at': datetime.utcnow().isoformat()
                }
            )
            logger.info(f"Saved order {order_data['order_id']}")
        
        except Exception as e:
            logger.error(f"Failed to save order: {e}")
            raise

    async def handle_processing_error(self, event: Dict, correlation_id: str, error: str):
        """
        Handle processing errors
        """
        logger.error(f"Processing error for correlation ID {correlation_id}: {error}")
        
        # Publish an error event
        await self.publish_event({
            'event_type': 'processing.error',
            'correlation_id': correlation_id,
            'data': {
                'original_event': event,
                'error': error,
                'timestamp': datetime.utcnow().isoformat()
            }
        })
        
        # Optional: send to the Dead Letter Queue
        if self.dead_letter_queue:
            sqs.send_message(
                QueueUrl=self.dead_letter_queue,
                MessageBody=json.dumps({
                    'event': event,
                    'correlation_id': correlation_id,
                    'error': error,
                    'timestamp': datetime.utcnow().isoformat()
                })
            )

# Main handler
event_processor = EventProcessor()

def lambda_handler(event, context):
    """
    Main handler for event processing
    """
    correlation_id = context.aws_request_id
    
    try:
        # Process multiple events if present
        if 'Records' in event:
            # Events from SQS, SNS, etc.
            results = []
            for record in event['Records']:
                if 'eventSource' in record and record['eventSource'] == 'aws:sqs':
                    body = json.loads(record['body'])
                    result = asyncio.run(event_processor.process_event(body))
                    results.append(result)
                else:
                    # Handle other record types (SNS, DynamoDB Streams, etc.)
                    pass
            
            return {
                'statusCode': 200,
                'body': json.dumps({
                    'processed_events': len(results),
                    'results': results
                })
            }
        else:
            # Direct invocation
            result = asyncio.run(event_processor.process_event(event))
            return {
                'statusCode': 200,
                'body': json.dumps(result)
            }
    
    except Exception as e:
        logger.error(f"Handler error: {e}")
        return {
            'statusCode': 500,
            'body': json.dumps({
                'error': 'Processing failed',
                'correlation_id': correlation_id
            })
        }
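
For these events to reach the handler, an EventBridge rule has to match them and target the function. A minimal wiring sketch with boto3 (the bus, rule, and ARN values are assumptions):

import json
import boto3

events = boto3.client('events')

# Route the event types this handler consumes to the Lambda function
events.put_rule(
    Name='order-events-rule',             # hypothetical rule name
    EventBusName='order-processing-bus',  # hypothetical bus name
    EventPattern=json.dumps({
        'detail-type': ['order.created', 'payment.processed', 'inventory.updated']
    })
)

events.put_targets(
    Rule='order-events-rule',
    EventBusName='order-processing-bus',
    Targets=[{
        'Id': 'event-processor-lambda',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:event-processor'
    }]
)

Note that the function also needs a resource-based policy (via lambda add_permission) that allows events.amazonaws.com to invoke it.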

Serverless Microservices with API Gateway

// microservices_gateway.js - Serverless microservices
const AWS = require('aws-sdk');
const jwt = require('jsonwebtoken');

class MicroserviceHandler {
    constructor() {
        this.dynamodb = new AWS.DynamoDB.DocumentClient();
        this.sns = new AWS.SNS();
        this.serviceName = process.env.SERVICE_NAME;
        this.region = process.env.AWS_REGION;
    }

    async handleRequest(event, context) {
        console.log(`[${this.serviceName}] Processing request:`, JSON.stringify(event));
        
        try {
            // Extract request information (resource holds the route template,
            // e.g. /users/{id}, which is what parameterized routes match on)
            const { httpMethod, resource, pathParameters, queryStringParameters, body, headers } = event;
            const requestId = context.awsRequestId;
            
            // Validate authentication (API Gateway may deliver lowercase header names)
            const user = await this.authenticateUser(headers.Authorization || headers.authorization);
            
            // Route the request
            const result = await this.routeRequest({
                method: httpMethod,
                path: resource,
                pathParameters,
                queryStringParameters,
                body: body ? JSON.parse(body) : null,
                user,
                requestId
            });
            
            return this.createResponse(200, result);
        } catch (error) {
            console.error(`[${this.serviceName}] Error:`, error);
            return this.createErrorResponse(error);
        }
    }

    async authenticateUser(authorizationHeader) {
        if (!authorizationHeader || !authorizationHeader.startsWith('Bearer ')) {
            throw new UnauthorizedError('Missing or invalid authorization header');
        }
        
        const token = authorizationHeader.substring(7);
        
        try {
            // Verify the JWT (in production, load the key from Secrets Manager)
            const decoded = jwt.verify(token, process.env.JWT_SECRET);
            return {
                userId: decoded.sub,
                email: decoded.email,
                roles: decoded.roles || []
            };
        } catch (error) {
            throw new UnauthorizedError('Invalid token');
        }
    }

    async routeRequest({ method, path, pathParameters, queryStringParameters, body, user, requestId }) {
        const route = `${method} ${path}`;
        
        switch (route) {
            case 'GET /users':
                return await this.getUsers(queryStringParameters, user);
            case 'GET /users/{id}':
                return await this.getUser(pathParameters.id, user);
            case 'POST /users':
                return await this.createUser(body, user, requestId);
            case 'PUT /users/{id}':
                return await this.updateUser(pathParameters.id, body, user, requestId);
            case 'DELETE /users/{id}':
                return await this.deleteUser(pathParameters.id, user, requestId);
            default:
                throw new NotFoundError(`Route not found: ${route}`);
        }
    }

    async getUsers(queryParams, user) {
        // Check permissions
        if (!user.roles.includes('admin') && !user.roles.includes('user-read')) {
            throw new ForbiddenError('Insufficient permissions');
        }
        
        const { limit = 50, offset = 0, filter } = queryParams || {};
        
        const params = {
            TableName: process.env.USERS_TABLE,
            Limit: Math.min(parseInt(limit), 100), // Cap at 100
            ExclusiveStartKey: offset ? { userId: offset } : undefined
        };
        
        // Apply filters if provided
        if (filter) {
            params.FilterExpression = 'contains(#name, :filter) OR contains(email, :filter)';
            params.ExpressionAttributeNames = { '#name': 'name' };
            params.ExpressionAttributeValues = { ':filter': filter };
        }
        
        const result = await this.dynamodb.scan(params).promise();
        
        return {
            users: result.Items.map(user => this.sanitizeUser(user)),
            count: result.Count,
            nextOffset: result.LastEvaluatedKey ? result.LastEvaluatedKey.userId : null
        };
    }

    async getUser(userId, user) {
        // Users may only view their own record; admins may view any
        if (userId !== user.userId && !user.roles.includes('admin')) {
            throw new ForbiddenError('Access denied');
        }
        
        const params = {
            TableName: process.env.USERS_TABLE,
            Key: { userId }
        };
        
        const result = await this.dynamodb.get(params).promise();
        
        if (!result.Item) {
            throw new NotFoundError('User not found');
        }
        
        return {
            user: this.sanitizeUser(result.Item)
        };
    }

    async createUser(userData, requestingUser, requestId) {
        // Only admins may create users
        if (!requestingUser.roles.includes('admin')) {
            throw new ForbiddenError('Only administrators can create users');
        }
        
        // Validate input
        this.validateUserData(userData);
        
        const newUser = {
            userId: this.generateUserId(),
            name: userData.name,
            email: userData.email,
            roles: userData.roles || ['user'],
            status: 'active',
            createdAt: new Date().toISOString(),
            updatedAt: new Date().toISOString(),
            createdBy: requestingUser.userId
        };
        
        // Ensure the email is not already registered
        const existingUser = await this.getUserByEmail(userData.email);
        if (existingUser) {
            throw new ConflictError('User with this email already exists');
        }
        
        // Create the user
        await this.dynamodb.put({
            TableName: process.env.USERS_TABLE,
            Item: newUser,
            ConditionExpression: 'attribute_not_exists(userId)'
        }).promise();
        
        // Publish an event
        await this.publishUserEvent('user.created', newUser, requestId);
        
        return {
            user: this.sanitizeUser(newUser),
            message: 'User created successfully'
        };
    }

    async updateUser(userId, updates, requestingUser, requestId) {
        // Users may update their own record; admins may update any
        if (userId !== requestingUser.userId && !requestingUser.roles.includes('admin')) {
            throw new ForbiddenError('Access denied');
        }
        
        // Validate input
        this.validateUserUpdates(updates);
        
        // Build the update expression
        const updateExpression = [];
        const expressionAttributeNames = {};
        const expressionAttributeValues = {};
        
        if (updates.name) {
            updateExpression.push('#name = :name');
            expressionAttributeNames['#name'] = 'name';
            expressionAttributeValues[':name'] = updates.name;
        }
        
        if (updates.email) {
            // Ensure the email is not already in use
            const existingUser = await this.getUserByEmail(updates.email);
            if (existingUser && existingUser.userId !== userId) {
                throw new ConflictError('Email already in use');
            }
            
            updateExpression.push('email = :email');
            expressionAttributeValues[':email'] = updates.email;
        }
        
        if (updates.roles && requestingUser.roles.includes('admin')) {
            updateExpression.push('roles = :roles');
            expressionAttributeValues[':roles'] = updates.roles;
        }
        
        updateExpression.push('updatedAt = :updatedAt');
        expressionAttributeValues[':updatedAt'] = new Date().toISOString();
        
        const params = {
            TableName: process.env.USERS_TABLE,
            Key: { userId },
            UpdateExpression: `SET ${updateExpression.join(', ')}`,
            ExpressionAttributeNames: Object.keys(expressionAttributeNames).length > 0 ? expressionAttributeNames : undefined,
            ExpressionAttributeValues: expressionAttributeValues,
            ReturnValues: 'ALL_NEW',
            ConditionExpression: 'attribute_exists(userId)'
        };
        
        const result = await this.dynamodb.update(params).promise();
        
        // Publish an event
        await this.publishUserEvent('user.updated', result.Attributes, requestId);
        
        return {
            user: this.sanitizeUser(result.Attributes),
            message: 'User updated successfully'
        };
    }

    async deleteUser(userId, requestingUser, requestId) {
        // Only admins may delete users
        if (!requestingUser.roles.includes('admin')) {
            throw new ForbiddenError('Only administrators can delete users');
        }
        
        // Do not allow self-deletion
        if (userId === requestingUser.userId) {
            throw new ConflictError('Cannot delete own account');
        }
        
        const params = {
            TableName: process.env.USERS_TABLE,
            Key: { userId },
            ReturnValues: 'ALL_OLD',
            ConditionExpression: 'attribute_exists(userId)'
        };
        
        const result = await this.dynamodb.delete(params).promise();
        
        if (!result.Attributes) {
            throw new NotFoundError('User not found');
        }
        
        // Publish an event
        await this.publishUserEvent('user.deleted', result.Attributes, requestId);
        
        return {
            message: 'User deleted successfully',
            deletedUser: this.sanitizeUser(result.Attributes)
        };
    }

    async getUserByEmail(email) {
        // In production, use a GSI for email lookups
        const params = {
            TableName: process.env.USERS_TABLE,
            IndexName: 'EmailIndex',
            KeyConditionExpression: 'email = :email',
            ExpressionAttributeValues: {
                ':email': email
            }
        };
        
        const result = await this.dynamodb.query(params).promise();
        return result.Items.length > 0 ? result.Items[0] : null;
    }

    validateUserData(userData) {
        if (!userData.name || userData.name.trim().length < 2) {
            throw new ValidationError('Name must be at least 2 characters long');
        }
        
        if (!userData.email || !this.isValidEmail(userData.email)) {
            throw new ValidationError('Valid email is required');
        }
        
        if (userData.roles && !Array.isArray(userData.roles)) {
            throw new ValidationError('Roles must be an array');
        }
    }

    validateUserUpdates(updates) {
        if (updates.name && updates.name.trim().length < 2) {
            throw new ValidationError('Name must be at least 2 characters long');
        }
        
        if (updates.email && !this.isValidEmail(updates.email)) {
            throw new ValidationError('Valid email is required');
        }
        
        if (updates.roles && !Array.isArray(updates.roles)) {
            throw new ValidationError('Roles must be an array');
        }
    }

    isValidEmail(email) {
        const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
        return emailRegex.test(email);
    }

    sanitizeUser(user) {
        // Strip sensitive fields before returning to the client
        const { password, ...sanitizedUser } = user;
        return sanitizedUser;
    }

    generateUserId() {
        return 'user_' + Date.now() + '_' + Math.random().toString(36).slice(2, 11);
    }

    async publishUserEvent(eventType, userData, requestId) {
        try {
            const message = {
                eventType,
                userId: userData.userId,
                timestamp: new Date().toISOString(),
                requestId,
                data: this.sanitizeUser(userData)
            };
            
            await this.sns.publish({
                TopicArn: process.env.USER_EVENTS_TOPIC,
                Message: JSON.stringify(message),
                MessageAttributes: {
                    eventType: {
                        DataType: 'String',
                        StringValue: eventType
                    }
                }
            }).promise();
            
            console.log(`Published event: ${eventType} for user ${userData.userId}`);
        } catch (error) {
            console.error('Failed to publish event:', error);
            // Do not fail the main operation because of an event publishing error
        }
    }

    createResponse(statusCode, data, headers = {}) {
        return {
            statusCode,
            headers: {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Origin': '*',
                'Access-Control-Allow-Headers': 'Content-Type,Authorization',
                'Access-Control-Allow-Methods': 'GET,POST,PUT,DELETE,OPTIONS',
                ...headers
            },
            body: JSON.stringify(data)
        };
    }

    createErrorResponse(error) {
        let statusCode = 500;
        let message = 'Internal server error';
        
        if (error instanceof ValidationError) {
            statusCode = 400;
            message = error.message;
        } else if (error instanceof UnauthorizedError) {
            statusCode = 401;
            message = error.message;
        } else if (error instanceof ForbiddenError) {
            statusCode = 403;
            message = error.message;
        } else if (error instanceof NotFoundError) {
            statusCode = 404;
            message = error.message;
        } else if (error instanceof ConflictError) {
            statusCode = 409;
            message = error.message;
        }
        
        return this.createResponse(statusCode, { error: message });
    }
}

// Custom Error Classes
class ValidationError extends Error {
    constructor(message) {
        super(message);
        this.name = 'ValidationError';
    }
}

class UnauthorizedError extends Error {
    constructor(message) {
        super(message);
        this.name = 'UnauthorizedError';
    }
}

class ForbiddenError extends Error {
    constructor(message) {
        super(message);
        this.name = 'ForbiddenError';
    }
}

class NotFoundError extends Error {
    constructor(message) {
        super(message);
        this.name = 'NotFoundError';
    }
}

class ConflictError extends Error {
    constructor(message) {
        super(message);
        this.name = 'ConflictError';
    }
}

// Create the handler instance
const handler = new MicroserviceHandler();

// Lambda export
exports.handler = async (event, context) => {
    return await handler.handleRequest(event, context);
};

Best Practices and Conclusions

Best Practices Checklist

✅ Architecture and Design

  • Keep functions small and focused (single-responsibility principle)
  • Use event-driven architectures to decouple components
  • Implement idempotency for critical operations, as shown in the sketch after this list
  • Design for failure (circuit breakers, retries, dead letter queues)
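
A minimal idempotency sketch, assuming a DynamoDB table (here called idempotency-keys, a hypothetical name) keyed on idempotency_key: the conditional write rejects duplicates, so a retried event is processed at most once.

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('idempotency-keys')  # hypothetical table name

def process_once(idempotency_key, process_fn):
    """Run process_fn only if this key has not been seen before."""
    try:
        # The conditional write fails if the key already exists
        table.put_item(
            Item={'idempotency_key': idempotency_key, 'status': 'in_progress'},
            ConditionExpression='attribute_not_exists(idempotency_key)'
        )
    except ClientError as e:
        if e.response['Error']['Code'] == 'ConditionalCheckFailedException':
            return {'processed': False, 'reason': 'duplicate'}
        raise

    result = process_fn()
    table.update_item(
        Key={'idempotency_key': idempotency_key},
        UpdateExpression='SET #s = :done',
        ExpressionAttributeNames={'#s': 'status'},
        ExpressionAttributeValues={':done': 'completed'}
    )
    return {'processed': True, 'result': result}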

✅ Performance

  • Reuse connections and clients outside the handler, as shown in the sketch after this list
  • Use Graviton2 (ARM) for a better price/performance ratio
  • Set memory size based on benchmarks
  • Use provisioned concurrency for critical functions
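
A minimal sketch of connection reuse: anything created at module scope is initialized once per execution environment and reused on warm invocations, while anything created inside the handler is rebuilt on every call.

import boto3

# Created once per execution environment (cold start), reused on warm invocations
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('orders')  # hypothetical table name

def lambda_handler(event, context):
    # Creating the client here instead would add latency to every invocation
    item = table.get_item(Key={'order_id': event['order_id']})
    return item.get('Item')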

✅ Security

  • Apply the principle of least privilege to IAM roles
  • Use AWS Secrets Manager/Parameter Store for credentials
  • Implement encryption in transit and at rest
  • Configure a VPC when private resources require it

✅ Monitoring and Observability

  • Implement structured logging, as shown in the sketch after this list
  • Configure custom metrics
  • Use X-Ray tracing for complex operations
  • Set up proactive alarms
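
A minimal structured-logging sketch using only the standard library: emitting one JSON object per line lets CloudWatch Logs Insights filter and aggregate on any field.

import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def log_event(message, **fields):
    """Emit a single JSON log line that CloudWatch Logs Insights can query."""
    logger.info(json.dumps({'message': message, **fields}))

def lambda_handler(event, context):
    log_event('order received',
              order_id=event.get('order_id'),
              request_id=context.aws_request_id)
    return {'statusCode': 200}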

✅ Costs

  • Monitor costs regularly
  • Tune the memory/CPU configuration
  • Use the ARM architecture when possible
  • Apply lifecycle policies to logs, as shown in the sketch after this list
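
By default, Lambda log groups in CloudWatch never expire. A minimal sketch that caps retention, and therefore storage cost (the log group name is a placeholder):

import boto3

logs = boto3.client('logs')

# Keep Lambda logs for 30 days instead of indefinitely
logs.put_retention_policy(
    logGroupName='/aws/lambda/dev-order-processor',  # hypothetical function
    retentionInDays=30
)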

Serverless Adoption Roadmap

  1. Phase 1: Exploration (2-4 weeks)

    • Team training
    • Development of simple POCs
    • Tooling setup
  2. Phase 2: Pilot Implementation (1-2 months)

    • Migration of non-critical functions
    • Setting up CI/CD pipelines
    • Basic monitoring implementation
  3. Phase 3: Scaling (3-6 months)

    • Migration of critical applications
    • Performance and cost optimization
    • Governance implementation
  4. Phase 4: Maturity (6+ months)

    • Full automation
    • Continuous optimization
    • Innovation and new use cases

The Future of Serverless

AWS Lambda continues to evolve, with innovations such as:

  • Faster cold starts through runtime optimizations
  • Support for more languages and custom runtimes
  • Improved integration with AI/ML services
  • Edge computing capabilities with Lambda@Edge
  • More sophisticated development tooling

AWS Lambda has transformed the way we build applications, offering a powerful abstraction that lets developers focus on business logic while AWS manages the underlying infrastructure. With the strategies, patterns, and best practices presented in this guide, you will be well prepared to get the most out of serverless computing in your projects.