Web3 Infrastructure: The DevOps Revolution of the Decentralized Internet

Web3 infrastructure represents the biggest transformation in distributed-systems architecture since the rise of cloud computing. This new technological era combines blockchain, smart contracts, decentralized storage, and peer-to-peer networks to create an ecosystem in which users truly own their data, identity, and digital assets.

For DevOps professionals, Web3 poses unique challenges: managing blockchain nodes, deploying immutable contracts, orchestrating decentralized systems, and guaranteeing security in a trustless environment. This comprehensive guide will equip you with the knowledge and tools needed to build, deploy, and maintain robust Web3 infrastructure.

Web3 Architecture Fundamentals

Evolution of the Web Technology Stack

The transition to Web3 represents a fundamental shift in systems architecture:

- Web 1.0 (1990-2000): static, read-only sites
- Web 2.0 (2000-2020): social, read-write platforms built on cloud infrastructure
- Web 3.0 (2020+): decentralized applications (DApps), DeFi, and user-owned data on blockchains

The Complete Web3 Technology Stack

Web3 infrastructure is composed of multiple interconnected layers:

- Application layer: DApp frontends and wallets
- Middleware layer: indexing and oracle services (e.g. The Graph, Chainlink)
- Smart contract layer: on-chain logic on EVM-compatible networks
- Scaling layer: Layer 2 networks such as Optimism, Arbitrum, Polygon, and zkSync
- Storage layer: decentralized storage (IPFS, Filecoin, Arweave)
- Base layer: blockchain consensus and peer-to-peer networking

Blockchain Node Configuration

Ethereum Node Architecture

Blockchain nodes are the backbone of Web3. For Ethereum, there are several node types to choose from depending on your needs:

1. Execution Clients

# Geth (Go Ethereum) - the most popular execution client
docker run -d \
  --name ethereum-geth \
  --restart unless-stopped \
  -v ethereum-data:/root/.ethereum \
  -p 8545:8545 \
  -p 8546:8546 \
  -p 30303:30303 \
  ethereum/client-go:stable \
    --http \
    --http.addr 0.0.0.0 \
    --http.port 8545 \
    --http.corsdomain "*" \
    --http.api eth,net,web3,txpool,debug \
    --ws \
    --ws.addr 0.0.0.0 \
    --ws.port 8546 \
    --ws.origins "*" \
    --ws.api eth,net,web3,txpool,debug \
    --metrics \
    --metrics.addr 0.0.0.0 \
    --metrics.port 6060 \
    --syncmode snap \
    --cache 4096 \
    --maxpeers 50 \
    --nat extip:$(curl -s ifconfig.me)

2. Consensus Clients (Post-Merge)

# Prysm Beacon Chain
docker run -d \
  --name ethereum-beacon \
  --restart unless-stopped \
  -v beacon-data:/data \
  -p 4000:4000 \
  -p 13000:13000 \
  -p 12000:12000/udp \
  gcr.io/prysmaticlabs/prysm/beacon-chain:stable \
    --datadir=/data \
    --rpc-host=0.0.0.0 \
    --rpc-port=4000 \
    --grpc-gateway-host=0.0.0.0 \
    --grpc-gateway-port=3500 \
    --monitoring-host=0.0.0.0 \
    --monitoring-port=8080 \
    --p2p-host-ip=$(curl -s ifconfig.me) \
    --execution-endpoint=http://geth:8551 \
    --jwt-secret=/data/jwt.hex \
    --accept-terms-of-use

Complete Monitoring Stack

Docker Compose for the Full Infrastructure

# docker-compose.yml - complete Ethereum node stack with monitoring
version: '3.8'

services:
  # Execution client
  geth:
    image: ethereum/client-go:stable
    container_name: ethereum-geth
    restart: unless-stopped
    volumes:
      - geth-data:/root/.ethereum
      - ./jwt.hex:/root/.ethereum/jwt.hex
    ports:
      - "8545:8545"   # HTTP RPC
      - "8546:8546"   # WebSocket RPC
      - "30303:30303" # P2P
      - "6060:6060"   # Metrics
    command: >
      --http --http.addr=0.0.0.0 --http.port=8545 
      --http.corsdomain="*" --http.vhosts="*"
      --http.api=eth,net,web3,txpool,debug,admin
      --ws --ws.addr=0.0.0.0 --ws.port=8546 
      --ws.origins="*" --ws.api=eth,net,web3,txpool,debug
      --authrpc.addr=0.0.0.0 --authrpc.port=8551
      --authrpc.jwtsecret=/root/.ethereum/jwt.hex
      --metrics --metrics.addr=0.0.0.0 --metrics.port=6060
      --syncmode=snap
      --cache=4096 --maxpeers=50
    networks:
      - ethereum-net

  # Consensus client
  beacon:
    image: gcr.io/prysmaticlabs/prysm/beacon-chain:stable
    container_name: ethereum-beacon
    restart: unless-stopped
    depends_on:
      - geth
    volumes:
      - beacon-data:/data
      - ./jwt.hex:/data/jwt.hex
    ports:
      - "4000:4000"     # gRPC
      - "3500:3500"     # Gateway
      - "13000:13000"   # P2P
      - "12000:12000/udp" # Discovery
      - "8080:8080"     # Monitoring
    command: >
      --datadir=/data
      --rpc-host=0.0.0.0 --rpc-port=4000
      --grpc-gateway-host=0.0.0.0 --grpc-gateway-port=3500
      --monitoring-host=0.0.0.0 --monitoring-port=8080
      --execution-endpoint=http://geth:8551
      --jwt-secret=/data/jwt.hex
      --accept-terms-of-use
      --checkpoint-sync-url=https://beaconstate.ethstaker.cc
      --genesis-beacon-api-url=https://beaconstate.ethstaker.cc
    networks:
      - ethereum-net

  # Validator (optional, for staking)
  validator:
    image: gcr.io/prysmaticlabs/prysm/validator:stable
    container_name: ethereum-validator
    restart: unless-stopped
    depends_on:
      - beacon
    volumes:
      - validator-data:/data
      - ./validator-keys:/validator-keys
    command: >
      --datadir=/data
      --beacon-rpc-provider=beacon:4000
      --wallet-dir=/data/wallets
      --wallet-password-file=/data/wallet-password.txt
      --accept-terms-of-use
      --monitoring-host=0.0.0.0
      --monitoring-port=8081
    networks:
      - ethereum-net
    profiles:
      - validator

  # Monitoring with Prometheus
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: unless-stopped
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
      - ./monitoring/rules:/etc/prometheus/rules
      - prometheus-data:/prometheus
    ports:
      - "9090:9090"
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.retention.time=90d'
      - '--web.enable-lifecycle'
    networks:
      - ethereum-net

  # Dashboards with Grafana
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    depends_on:
      - prometheus
    volumes:
      - grafana-data:/var/lib/grafana
      - ./monitoring/grafana/dashboards:/etc/grafana/provisioning/dashboards
      - ./monitoring/grafana/datasources:/etc/grafana/provisioning/datasources
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
      - GF_USERS_ALLOW_SIGN_UP=false
      - GF_INSTALL_PLUGINS=grafana-piechart-panel
    networks:
      - ethereum-net

  # Node exporter for system metrics
  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/host/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/host/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    ports:
      - "9100:9100"
    networks:
      - ethereum-net

  # Reverse proxy with nginx
  nginx:
    image: nginx:alpine
    container_name: ethereum-proxy
    restart: unless-stopped
    depends_on:
      - geth
      - grafana
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/ssl/certs
    ports:
      - "80:80"
      - "443:443"
    networks:
      - ethereum-net

networks:
  ethereum-net:
    driver: bridge

volumes:
  geth-data:
  beacon-data:
  validator-data:
  prometheus-data:
  grafana-data:

Prometheus Configuration

# monitoring/prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  - "/etc/prometheus/rules/*.yml"

scrape_configs:
  # Geth metrics
  - job_name: 'geth'
    static_configs:
      - targets: ['geth:6060']
    scrape_interval: 10s
    metrics_path: /debug/metrics/prometheus

  # Beacon chain metrics
  - job_name: 'beacon-chain'
    static_configs:
      - targets: ['beacon:8080']
    scrape_interval: 10s

  # Validator metrics
  - job_name: 'validator'
    static_configs:
      - targets: ['validator:8081']
    scrape_interval: 10s

  # System metrics
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']
    scrape_interval: 15s

  # Prometheus self-monitoring
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          - alertmanager:9093  # assumes an Alertmanager service on the same network

Alert Rules

# monitoring/rules/ethereum-alerts.yml
groups:
- name: ethereum-node
  rules:
  
  # Node down
  - alert: NodeDown
    expr: up{job=~"geth|beacon-chain"} == 0
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "Ethereum node {{ $labels.instance }} is down"
      description: "Node {{ $labels.instance }} has been down for more than 5 minutes"

  # Node out of sync
  - alert: NodeOutOfSync
    expr: |
      (
        ethereum_chain_head_block - 
        on() group_right() 
        max(ethereum_chain_head_block)
      ) < -100
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "Node is out of sync"
      description: "Node is {{ $value }} blocks behind"

  # High disk usage
  - alert: HighDiskUsage
    expr: |
      (
        node_filesystem_size_bytes{fstype!="tmpfs"} - 
        node_filesystem_free_bytes{fstype!="tmpfs"}
      ) / node_filesystem_size_bytes{fstype!="tmpfs"} > 0.85
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "High disk usage on {{ $labels.instance }}"
      description: "Disk usage is {{ $value | humanizePercentage }}"

  # Low peer count
  - alert: LowPeerCount
    expr: ethereum_p2p_peers < 5
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "Low peer count on {{ $labels.instance }}"
      description: "Only {{ $value }} peers connected"

  # High memory usage
  - alert: HighMemoryUsage
    expr: |
      (
        node_memory_MemTotal_bytes - 
        node_memory_MemAvailable_bytes
      ) / node_memory_MemTotal_bytes > 0.9
    for: 5m
    labels:
      severity: critical
    annotations:
      summary: "High memory usage on {{ $labels.instance }}"
      description: "Memory usage is {{ $value | humanizePercentage }}"
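The HighDiskUsage and HighMemoryUsage expressions above both reduce to used / total compared against a threshold. Replicating that arithmetic off-Prometheus is a quick way to sanity-check a threshold before shipping the rule:

```javascript
// Mirrors the arithmetic of the HighDiskUsage / HighMemoryUsage expressions:
// (total - free) / total > threshold. This is not a Prometheus evaluation,
// just the ratio logic with example byte counts in GiB for readability.
function usageRatio(totalBytes, freeBytes) {
  return (totalBytes - freeBytes) / totalBytes;
}

function alertFires(totalBytes, freeBytes, threshold) {
  return usageRatio(totalBytes, freeBytes) > threshold;
}

// 900 GiB used of a 1024 GiB disk (~87.9%) trips the 0.85 disk threshold...
console.log(alertFires(1024, 124, 0.85)); // true
// ...while 860 GiB used (~84.0%) does not
console.log(alertFires(1024, 164, 0.85)); // false
```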

Smart Contract Development and Deployment

Development Environment with Hardhat

Project Configuration

// hardhat.config.js - complete Hardhat configuration
require("@nomicfoundation/hardhat-toolbox");
require("@openzeppelin/hardhat-upgrades");
require("hardhat-gas-reporter");
require("solidity-coverage");
require("hardhat-contract-sizer");

const dotenv = require("dotenv");
dotenv.config();

/** @type import('hardhat/config').HardhatUserConfig */
module.exports = {
  solidity: {
    version: "0.8.20",
    settings: {
      optimizer: {
        enabled: true,
        runs: 200,
      },
      viaIR: true, // Improves optimization for large contracts
    },
  },

  networks: {
    // Local network
    localhost: {
      url: "http://127.0.0.1:8545",
      chainId: 31337,
    },

    // Testnets
    sepolia: {
      url: `https://sepolia.infura.io/v3/${process.env.INFURA_API_KEY}`,
      accounts: [process.env.PRIVATE_KEY_TESTNET],
      chainId: 11155111,
      gasPrice: 20000000000, // 20 gwei
    },

    goerli: {
      url: `https://goerli.infura.io/v3/${process.env.INFURA_API_KEY}`,
      accounts: [process.env.PRIVATE_KEY_TESTNET],
      chainId: 5,
      gasPrice: "auto",
    },

    // Mainnet (use with caution)
    mainnet: {
      url: `https://mainnet.infura.io/v3/${process.env.INFURA_API_KEY}`,
      accounts: [process.env.PRIVATE_KEY_MAINNET],
      chainId: 1,
      gasPrice: "auto",
    },

    // Layer 2
    optimism: {
      url: "https://mainnet.optimism.io",
      accounts: [process.env.PRIVATE_KEY_MAINNET],
      chainId: 10,
    },

    arbitrum: {
      url: "https://arb1.arbitrum.io/rpc",
      accounts: [process.env.PRIVATE_KEY_MAINNET],
      chainId: 42161,
    },

    polygon: {
      url: "https://polygon-rpc.com",
      accounts: [process.env.PRIVATE_KEY_MAINNET],
      chainId: 137,
      gasPrice: 30000000000, // 30 gwei
    },
  },

  // Verification on block explorers
  etherscan: {
    apiKey: {
      mainnet: process.env.ETHERSCAN_API_KEY,
      sepolia: process.env.ETHERSCAN_API_KEY,
      goerli: process.env.ETHERSCAN_API_KEY,
      optimisticEthereum: process.env.OPTIMISTIC_ETHERSCAN_API_KEY,
      arbitrumOne: process.env.ARBISCAN_API_KEY,
      polygon: process.env.POLYGONSCAN_API_KEY,
    },
  },

  // Gas reporting
  gasReporter: {
    enabled: process.env.REPORT_GAS !== undefined,
    currency: "USD",
    outputFile: "gas-report.txt",
    noColors: true,
    coinmarketcap: process.env.COINMARKETCAP_API_KEY,
  },

  // Coverage
  mocha: {
    timeout: 100000000,
  },

  // Contract sizer
  contractSizer: {
    alphaSort: true,
    disambiguatePaths: false,
    runOnCompile: true,
    strict: true,
    only: [],
  },
};
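Every network entry above reads credentials from environment variables. A tiny helper, sketched here as an illustration (the variable names are taken from the config above; extend the list to match your target network), lets a deployment fail fast with a clear message instead of a cryptic provider error:

```javascript
// preflight-env.js - report which of the environment variables referenced in
// hardhat.config.js are missing or empty before attempting a deployment.
function missingVars(env, names) {
  return names.filter((name) => !env[name] || env[name].length === 0);
}

const required = ["INFURA_API_KEY", "PRIVATE_KEY_TESTNET"]; // illustrative list
const example = { INFURA_API_KEY: "abc123" }; // stand-in for process.env
console.log(missingVars(example, required)); // [ 'PRIVATE_KEY_TESTNET' ]
```

In a real preflight you would pass `process.env` instead of `example` and abort when the returned list is non-empty.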

Advanced Smart Contract with Security Best Practices

// contracts/SecureProtocol.sol - contract applying security best practices
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import "@openzeppelin/contracts-upgradeable/proxy/utils/Initializable.sol";
import "@openzeppelin/contracts-upgradeable/access/OwnableUpgradeable.sol";
import "@openzeppelin/contracts-upgradeable/utils/PausableUpgradeable.sol";
import "@openzeppelin/contracts-upgradeable/utils/ReentrancyGuardUpgradeable.sol";
import "@openzeppelin/contracts-upgradeable/utils/cryptography/EIP712Upgradeable.sol";
import "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";
import "@openzeppelin/contracts/token/ERC20/IERC20.sol";

/**
 * @title SecureProtocol
 * @dev Secure DeFi protocol built with security best practices
 */
contract SecureProtocol is 
    Initializable,
    OwnableUpgradeable,
    PausableUpgradeable,
    ReentrancyGuardUpgradeable,
    EIP712Upgradeable
{
    using SafeERC20 for IERC20;
    // Note: SafeMath is unnecessary on Solidity >=0.8, which has built-in
    // overflow checks (and it was removed in OpenZeppelin Contracts v5).

    // ============ State Variables ============
    
    struct UserDeposit {
        uint256 amount;
        uint256 timestamp;
        uint256 lockPeriod;
        bool claimed;
    }

    mapping(address => UserDeposit[]) public userDeposits;
    mapping(address => bool) public authorizedTokens;
    mapping(address => uint256) public tokenMinAmounts;
    
    uint256 public totalValueLocked;
    uint256 public rewardRate; // Basis points (10000 = 100%)
    uint256 public minimumLockPeriod;
    uint256 public maximumLockPeriod;
    
    address public treasury;
    address public emergencyAdmin;

    // ============ Events ============
    
    event Deposited(
        address indexed user,
        address indexed token,
        uint256 amount,
        uint256 lockPeriod,
        uint256 depositId
    );
    
    event Withdrawn(
        address indexed user,
        address indexed token,
        uint256 amount,
        uint256 reward,
        uint256 depositId
    );
    
    event TokenAuthorized(address indexed token, uint256 minAmount);
    event TokenDeauthorized(address indexed token);
    event EmergencyWithdraw(address indexed user, address indexed token, uint256 amount);
    event ParametersUpdated(uint256 rewardRate, uint256 minLock, uint256 maxLock);

    // ============ Errors ============
    
    error UnauthorizedToken();
    error InsufficientAmount();
    error InvalidLockPeriod();
    error DepositNotMatured();
    error AlreadyClaimed();
    error NoDepositsFound();
    error TransferFailed();

    // ============ Modifiers ============
    
    modifier onlyEmergencyAdmin() {
        require(msg.sender == emergencyAdmin, "Not emergency admin");
        _;
    }

    modifier validToken(address token) {
        if (!authorizedTokens[token]) revert UnauthorizedToken();
        _;
    }

    // ============ Initialization ============
    
    /// @custom:oz-upgrades-unsafe-allow constructor
    constructor() {
        _disableInitializers();
    }

    function initialize(
        address _initialOwner,
        address _treasury,
        address _emergencyAdmin,
        uint256 _rewardRate,
        uint256 _minLockPeriod,
        uint256 _maxLockPeriod
    ) public initializer {
        require(_initialOwner != address(0), "Invalid owner");
        require(_treasury != address(0), "Invalid treasury");
        require(_emergencyAdmin != address(0), "Invalid emergency admin");
        require(_rewardRate <= 10000, "Invalid reward rate"); // Max 100%
        require(_minLockPeriod <= _maxLockPeriod, "Invalid lock periods");

        __Ownable_init(_initialOwner);
        __Pausable_init();
        __ReentrancyGuard_init();
        __EIP712_init("SecureProtocol", "1");

        treasury = _treasury;
        emergencyAdmin = _emergencyAdmin;
        rewardRate = _rewardRate;
        minimumLockPeriod = _minLockPeriod;
        maximumLockPeriod = _maxLockPeriod;
    }

    // ============ Core Functions ============
    
    /**
     * @dev Deposit tokens with lock period
     */
    function deposit(
        address token,
        uint256 amount,
        uint256 lockPeriod
    ) external nonReentrant whenNotPaused validToken(token) {
        if (amount < tokenMinAmounts[token]) revert InsufficientAmount();
        if (lockPeriod < minimumLockPeriod || lockPeriod > maximumLockPeriod) {
            revert InvalidLockPeriod();
        }

        // Transfer tokens from user
        IERC20(token).safeTransferFrom(msg.sender, address(this), amount);

        // Record deposit
        userDeposits[msg.sender].push(UserDeposit({
            amount: amount,
            timestamp: block.timestamp,
            lockPeriod: lockPeriod,
            claimed: false
        }));

        // Update TVL
        totalValueLocked += amount;

        emit Deposited(
            msg.sender,
            token,
            amount,
            lockPeriod,
            userDeposits[msg.sender].length - 1
        );
    }

    /**
     * @dev Withdraw deposit with rewards after lock period
     */
    function withdraw(
        address token,
        uint256 depositId
    ) external nonReentrant whenNotPaused validToken(token) {
        UserDeposit storage userDeposit = userDeposits[msg.sender][depositId];
        
        if (userDeposit.amount == 0) revert NoDepositsFound();
        if (userDeposit.claimed) revert AlreadyClaimed();
        if (block.timestamp < userDeposit.timestamp + userDeposit.lockPeriod) {
            revert DepositNotMatured();
        }

        uint256 principal = userDeposit.amount;
        uint256 reward = calculateReward(principal, userDeposit.lockPeriod);
        
        // Mark as claimed
        userDeposit.claimed = true;
        
        // Update TVL
        totalValueLocked -= principal;

        // Transfer principal + rewards
        IERC20(token).safeTransfer(msg.sender, principal);
        if (reward > 0) {
            IERC20(token).safeTransferFrom(treasury, msg.sender, reward);
        }

        emit Withdrawn(msg.sender, token, principal, reward, depositId);
    }

    /**
     * @dev Calculate reward based on amount and lock period
     */
    function calculateReward(
        uint256 amount,
        uint256 lockPeriod
    ) public view returns (uint256) {
        // Reward increases with lock period (linear)
        uint256 periodMultiplier = (lockPeriod * 10000) / maximumLockPeriod;
        uint256 finalRate = (rewardRate * periodMultiplier) / 10000;
        
        return (amount * finalRate) / 10000;
    }

    /**
     * @dev Get user's deposit count
     */
    function getUserDepositCount(address user) external view returns (uint256) {
        return userDeposits[user].length;
    }

    /**
     * @dev Get user's specific deposit
     */
    function getUserDeposit(
        address user,
        uint256 depositId
    ) external view returns (UserDeposit memory) {
        return userDeposits[user][depositId];
    }

    /**
     * @dev Check if deposit can be withdrawn
     */
    function canWithdraw(
        address user,
        uint256 depositId
    ) external view returns (bool) {
        UserDeposit storage userDeposit = userDeposits[user][depositId];
        return !userDeposit.claimed && 
               block.timestamp >= userDeposit.timestamp + userDeposit.lockPeriod;
    }

    // ============ Admin Functions ============
    
    /**
     * @dev Authorize a token for deposits
     */
    function authorizeToken(
        address token,
        uint256 minAmount
    ) external onlyOwner {
        require(token != address(0), "Invalid token");
        authorizedTokens[token] = true;
        tokenMinAmounts[token] = minAmount;
        emit TokenAuthorized(token, minAmount);
    }

    /**
     * @dev Deauthorize a token
     */
    function deauthorizeToken(address token) external onlyOwner {
        authorizedTokens[token] = false;
        emit TokenDeauthorized(token);
    }

    /**
     * @dev Update protocol parameters
     */
    function updateParameters(
        uint256 _rewardRate,
        uint256 _minLockPeriod,
        uint256 _maxLockPeriod
    ) external onlyOwner {
        require(_rewardRate <= 10000, "Invalid reward rate");
        require(_minLockPeriod <= _maxLockPeriod, "Invalid lock periods");
        
        rewardRate = _rewardRate;
        minimumLockPeriod = _minLockPeriod;
        maximumLockPeriod = _maxLockPeriod;
        
        emit ParametersUpdated(_rewardRate, _minLockPeriod, _maxLockPeriod);
    }

    /**
     * @dev Update treasury address
     */
    function updateTreasury(address newTreasury) external onlyOwner {
        require(newTreasury != address(0), "Invalid treasury");
        treasury = newTreasury;
    }

    /**
     * @dev Update emergency admin
     */
    function updateEmergencyAdmin(address newAdmin) external onlyOwner {
        require(newAdmin != address(0), "Invalid admin");
        emergencyAdmin = newAdmin;
    }

    // ============ Emergency Functions ============
    
    /**
     * @dev Pause the protocol
     */
    function pause() external onlyOwner {
        _pause();
    }

    /**
     * @dev Unpause the protocol
     */
    function unpause() external onlyOwner {
        _unpause();
    }

    /**
     * @dev Emergency withdrawal (only in paused state)
     */
    function emergencyWithdraw(
        address token,
        uint256 depositId
    ) external nonReentrant whenPaused {
        UserDeposit storage userDeposit = userDeposits[msg.sender][depositId];
        
        if (userDeposit.amount == 0) revert NoDepositsFound();
        if (userDeposit.claimed) revert AlreadyClaimed();

        uint256 amount = userDeposit.amount;
        userDeposit.claimed = true;
        
        // Update TVL
        totalValueLocked -= amount;

        // Transfer only principal (no rewards in emergency)
        IERC20(token).safeTransfer(msg.sender, amount);

        emit EmergencyWithdraw(msg.sender, token, amount);
    }

    // ============ View Functions ============
    
    /**
     * @dev Get contract version
     */
    function version() external pure returns (string memory) {
        return "1.0.0";
    }

    /**
     * @dev Check if contract is initialized
     */
    function isInitialized() external view returns (bool) {
        return _getInitializedVersion() != 0;
    }
}
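The reward formula in calculateReward can be replicated off-chain to pre-compute expected payouts, for example when writing tests. A sketch in Node using BigInt to mirror Solidity's truncating integer division (the parameters match the test fixture used below: a 1000 bps rate and a one-year maximum lock):

```javascript
// Off-chain replica of SecureProtocol.calculateReward. BigInt division
// truncates like Solidity's integer division, so results match on-chain math.
const BPS = 10000n; // basis points denominator

function calculateReward(amount, lockPeriod, rewardRate, maxLockPeriod) {
  const periodMultiplier = (lockPeriod * BPS) / maxLockPeriod;
  const finalRate = (rewardRate * periodMultiplier) / BPS;
  return (amount * finalRate) / BPS;
}

// 1000 tokens locked for the full year at a 10% (1000 bps) rate
const wei = 10n ** 18n;
const reward = calculateReward(1000n * wei, 31536000n, 1000n, 31536000n);
console.log(reward === 100n * wei); // true: 10% of 1000 tokens
```

Because the period multiplier scales linearly, halving the lock period halves the effective rate (a half-year lock on the same deposit yields 50 tokens).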

Comprehensive Testing

// test/SecureProtocol.test.js - complete protocol test suite
const { expect } = require("chai");
const { ethers, upgrades } = require("hardhat");
const { time, loadFixture } = require("@nomicfoundation/hardhat-network-helpers");

describe("SecureProtocol", function () {
  
  // Fixture for reusable setup
  async function deploySecureProtocolFixture() {
    const [owner, treasury, emergencyAdmin, user1, user2] = await ethers.getSigners();

    // Deploy mock ERC20 token
    const MockToken = await ethers.getContractFactory("MockERC20");
    const token = await MockToken.deploy("Test Token", "TEST", ethers.parseEther("1000000"));

    // Deploy protocol as upgradeable proxy
    const SecureProtocol = await ethers.getContractFactory("SecureProtocol");
    const protocol = await upgrades.deployProxy(SecureProtocol, [
      owner.address,
      treasury.address,
      emergencyAdmin.address,
      1000, // 10% reward rate
      86400, // 1 day min lock
      31536000, // 1 year max lock
    ]);

    // Authorize token for deposits
    await protocol.connect(owner).authorizeToken(token.target, ethers.parseEther("1"));

    // Transfer tokens to users for testing
    await token.transfer(user1.address, ethers.parseEther("10000"));
    await token.transfer(user2.address, ethers.parseEther("10000"));
    await token.transfer(treasury.address, ethers.parseEther("100000")); // For rewards

    return { protocol, token, owner, treasury, emergencyAdmin, user1, user2 };
  }

  describe("Deployment and Initialization", function () {
    it("Should initialize with correct parameters", async function () {
      const { protocol, owner, treasury, emergencyAdmin } = await loadFixture(deploySecureProtocolFixture);
      
      expect(await protocol.owner()).to.equal(owner.address);
      expect(await protocol.treasury()).to.equal(treasury.address);
      expect(await protocol.emergencyAdmin()).to.equal(emergencyAdmin.address);
      expect(await protocol.rewardRate()).to.equal(1000);
      expect(await protocol.minimumLockPeriod()).to.equal(86400);
      expect(await protocol.maximumLockPeriod()).to.equal(31536000);
    });

    it("Should not allow double initialization", async function () {
      const { protocol, owner, treasury, emergencyAdmin } = await loadFixture(deploySecureProtocolFixture);
      
      await expect(
        protocol.initialize(
          owner.address,
          treasury.address,
          emergencyAdmin.address,
          1000,
          86400,
          31536000
        )
      ).to.be.revertedWithCustomError(protocol, "InvalidInitialization");
    });
  });

  describe("Token Authorization", function () {
    it("Should authorize token correctly", async function () {
      const { protocol, token, owner } = await loadFixture(deploySecureProtocolFixture);
      
      expect(await protocol.authorizedTokens(token.target)).to.be.true;
      expect(await protocol.tokenMinAmounts(token.target)).to.equal(ethers.parseEther("1"));
    });

    it("Should only allow owner to authorize tokens", async function () {
      const { protocol, token, user1 } = await loadFixture(deploySecureProtocolFixture);
      
      await expect(
        protocol.connect(user1).authorizeToken(token.target, ethers.parseEther("1"))
      ).to.be.revertedWithCustomError(protocol, "OwnableUnauthorizedAccount");
    });

    it("Should deauthorize token correctly", async function () {
      const { protocol, token, owner } = await loadFixture(deploySecureProtocolFixture);
      
      await protocol.connect(owner).deauthorizeToken(token.target);
      expect(await protocol.authorizedTokens(token.target)).to.be.false;
    });
  });

  describe("Deposits", function () {
    it("Should allow valid deposits", async function () {
      const { protocol, token, user1 } = await loadFixture(deploySecureProtocolFixture);
      
      const amount = ethers.parseEther("100");
      const lockPeriod = 86400 * 30; // 30 days
      
      // Approve protocol to spend tokens
      await token.connect(user1).approve(protocol.target, amount);
      
      // Deposit
      await expect(
        protocol.connect(user1).deposit(token.target, amount, lockPeriod)
      ).to.emit(protocol, "Deposited")
        .withArgs(user1.address, token.target, amount, lockPeriod, 0);
      
      // Check deposit was recorded
      const deposit = await protocol.getUserDeposit(user1.address, 0);
      expect(deposit.amount).to.equal(amount);
      expect(deposit.lockPeriod).to.equal(lockPeriod);
      expect(deposit.claimed).to.be.false;
      
      // Check TVL updated
      expect(await protocol.totalValueLocked()).to.equal(amount);
    });

    it("Should reject deposits below minimum amount", async function () {
      const { protocol, token, user1 } = await loadFixture(deploySecureProtocolFixture);
      
      const amount = ethers.parseEther("0.5"); // Below minimum
      const lockPeriod = 86400 * 30;
      
      await token.connect(user1).approve(protocol.target, amount);
      
      await expect(
        protocol.connect(user1).deposit(token.target, amount, lockPeriod)
      ).to.be.revertedWithCustomError(protocol, "InsufficientAmount");
    });

    it("Should reject invalid lock periods", async function () {
      const { protocol, token, user1 } = await loadFixture(deploySecureProtocolFixture);
      
      const amount = ethers.parseEther("100");
      
      await token.connect(user1).approve(protocol.target, amount);
      
      // Too short
      await expect(
        protocol.connect(user1).deposit(token.target, amount, 3600)
      ).to.be.revertedWithCustomError(protocol, "InvalidLockPeriod");
      
      // Too long
      await expect(
        protocol.connect(user1).deposit(token.target, amount, 86400 * 400)
      ).to.be.revertedWithCustomError(protocol, "InvalidLockPeriod");
    });

    it("Should reject unauthorized tokens", async function () {
      const { protocol, user1 } = await loadFixture(deploySecureProtocolFixture);
      
      // Deploy another token without authorization
      const MockToken = await ethers.getContractFactory("MockERC20");
      const unauthorizedToken = await MockToken.deploy("Bad Token", "BAD", ethers.parseEther("1000"));
      
      const amount = ethers.parseEther("100");
      const lockPeriod = 86400 * 30;
      
      await expect(
        protocol.connect(user1).deposit(unauthorizedToken.target, amount, lockPeriod)
      ).to.be.revertedWithCustomError(protocol, "UnauthorizedToken");
    });
  });

  describe("Withdrawals", function () {
    async function setupDeposit() {
      const fixture = await loadFixture(deploySecureProtocolFixture);
      const { protocol, token, user1 } = fixture;
      
      const amount = ethers.parseEther("100");
      const lockPeriod = 86400 * 30; // 30 days
      
      await token.connect(user1).approve(protocol.target, amount);
      await protocol.connect(user1).deposit(token.target, amount, lockPeriod);
      
      return { ...fixture, amount, lockPeriod };
    }

    it("Should allow withdrawal after lock period", async function () {
      const { protocol, token, treasury, user1, amount, lockPeriod } = await setupDeposit();
      
      // Fast forward time
      await time.increase(lockPeriod);
      
      // Approve treasury to spend tokens for rewards
      const expectedReward = await protocol.calculateReward(amount, lockPeriod);
      await token.connect(treasury).approve(protocol.target, expectedReward);
      
      const initialBalance = await token.balanceOf(user1.address);
      
      await expect(
        protocol.connect(user1).withdraw(token.target, 0)
      ).to.emit(protocol, "Withdrawn");
      
      const finalBalance = await token.balanceOf(user1.address);
      expect(finalBalance - initialBalance).to.equal(amount + expectedReward);
      
      // Check deposit marked as claimed
      const deposit = await protocol.getUserDeposit(user1.address, 0);
      expect(deposit.claimed).to.be.true;
    });

    it("Should reject early withdrawal", async function () {
      const { protocol, token, user1 } = await setupDeposit();
      
      await expect(
        protocol.connect(user1).withdraw(token.target, 0)
      ).to.be.revertedWithCustomError(protocol, "DepositNotMatured");
    });

    it("Should reject double withdrawal", async function () {
      const { protocol, token, treasury, user1, lockPeriod } = await setupDeposit();
      
      await time.increase(lockPeriod);
      
      const expectedReward = await protocol.calculateReward(ethers.parseEther("100"), lockPeriod);
      await token.connect(treasury).approve(protocol.target, expectedReward);
      
      // First withdrawal
      await protocol.connect(user1).withdraw(token.target, 0);
      
      // Second withdrawal should fail
      await expect(
        protocol.connect(user1).withdraw(token.target, 0)
      ).to.be.revertedWithCustomError(protocol, "AlreadyClaimed");
    });
  });

  describe("Reward Calculation", function () {
    it("Should calculate rewards correctly", async function () {
      const { protocol } = await loadFixture(deploySecureProtocolFixture);
      
      const amount = ethers.parseEther("1000");
      const lockPeriod = 86400 * 365; // 1 year (max period)
      
      const expectedReward = amount * BigInt(1000) / BigInt(10000); // 10% of amount
      const calculatedReward = await protocol.calculateReward(amount, lockPeriod);
      
      expect(calculatedReward).to.equal(expectedReward);
    });

    it("Should scale rewards with lock period", async function () {
      const { protocol } = await loadFixture(deploySecureProtocolFixture);
      
      const amount = ethers.parseEther("1000");
      const halfPeriod = 86400 * 182; // ~6 months
      const fullPeriod = 86400 * 365; // 1 year
      
      const halfReward = await protocol.calculateReward(amount, halfPeriod);
      const fullReward = await protocol.calculateReward(amount, fullPeriod);
      
      // Half period should give approximately half rewards
      expect(halfReward * BigInt(2)).to.be.closeTo(fullReward, ethers.parseEther("1"));
    });
  });

  describe("Emergency Functions", function () {
    it("Should allow owner to pause/unpause", async function () {
      const { protocol, owner } = await loadFixture(deploySecureProtocolFixture);
      
      await protocol.connect(owner).pause();
      expect(await protocol.paused()).to.be.true;
      
      await protocol.connect(owner).unpause();
      expect(await protocol.paused()).to.be.false;
    });

    it("Should allow emergency withdrawal when paused", async function () {
      const { protocol, token, owner, user1 } = await loadFixture(deploySecureProtocolFixture);
      
      // Setup deposit
      const amount = ethers.parseEther("100");
      await token.connect(user1).approve(protocol.target, amount);
      await protocol.connect(user1).deposit(token.target, amount, 86400 * 30);
      
      // Pause protocol
      await protocol.connect(owner).pause();
      
      // Emergency withdrawal
      await expect(
        protocol.connect(user1).emergencyWithdraw(token.target, 0)
      ).to.emit(protocol, "EmergencyWithdraw");
    });

    it("Should reject emergency withdrawal when not paused", async function () {
      const { protocol, token, user1 } = await loadFixture(deploySecureProtocolFixture);
      
      // Setup deposit
      const amount = ethers.parseEther("100");
      await token.connect(user1).approve(protocol.target, amount);
      await protocol.connect(user1).deposit(token.target, amount, 86400 * 30);
      
      // Try emergency withdrawal without pausing
      await expect(
        protocol.connect(user1).emergencyWithdraw(token.target, 0)
      ).to.be.revertedWith("Pausable: not paused");
    });
  });

  describe("Gas Usage Optimization", function () {
    it("Should have reasonable gas costs for deposits", async function () {
      const { protocol, token, user1 } = await loadFixture(deploySecureProtocolFixture);
      
      const amount = ethers.parseEther("100");
      await token.connect(user1).approve(protocol.target, amount);
      
      const tx = await protocol.connect(user1).deposit(token.target, amount, 86400 * 30);
      const receipt = await tx.wait();
      
      // Should be under 200k gas
      expect(receipt.gasUsed).to.be.below(200000);
    });
  });

  describe("Upgradeability", function () {
    it("Should be upgradeable", async function () {
      const { protocol } = await loadFixture(deploySecureProtocolFixture);
      
      // Deploy V2
      const SecureProtocolV2 = await ethers.getContractFactory("SecureProtocolV2");
      const upgraded = await upgrades.upgradeProxy(protocol.target, SecureProtocolV2);
      
      // Should maintain storage
      expect(await upgraded.rewardRate()).to.equal(1000);
    });
  });
});
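The reward model these tests assume — a basis-point rate scaled linearly by lock period — can be restated as a standalone sketch. This is a hypothetical JavaScript mirror of `calculateReward`, not the contract's actual code:

```javascript
// Hypothetical mirror of the contract's calculateReward, assuming a
// basis-point rewardRate (1000 = 10%) scaled linearly by lock period.
const MAX_LOCK_PERIOD = 86400n * 365n; // 1 year, in seconds

function calculateReward(amount, lockPeriod, rewardRateBps = 1000n) {
  // reward = amount * rate/10000 * lockPeriod/maxLockPeriod (integer math)
  return (amount * rewardRateBps * lockPeriod) / (10000n * MAX_LOCK_PERIOD);
}

const amount = 1000n * 10n ** 18n; // 1000 tokens, 18 decimals
const fullReward = calculateReward(amount, MAX_LOCK_PERIOD); // 10% of amount
const halfReward = calculateReward(amount, 86400n * 182n);   // ~half of that

console.log(fullReward === amount / 10n); // true
```

Using `BigInt` keeps the arithmetic consistent with Solidity's integer division, which is why `halfReward * 2n` lands slightly below `fullReward` rather than exactly on it — the same tolerance the `closeTo` assertion accounts for.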

Deployment and Verification Scripts

// scripts/deploy-production.js - Production deployment script
const { ethers, upgrades, network } = require("hardhat");
const fs = require('fs');
const path = require('path');

// Per-network configuration
const NETWORK_CONFIG = {
  mainnet: {
    treasury: "0x...", // Treasury multisig
    emergencyAdmin: "0x...", // Emergency admin multisig
    rewardRate: 500, // 5%
    minLockPeriod: 86400 * 7, // 1 week
    maxLockPeriod: 86400 * 365, // 1 year
    authorizedTokens: [
      {
        address: "0xA0b86a33E6417Dc40441c7395F4aDD0fB0F74a01", // USDC
        minAmount: "1000000" // 1 USDC (6 decimals)
      },
      {
        address: "0xdAC17F958D2ee523a2206206994597C13D831ec7", // USDT
        minAmount: "1000000" // 1 USDT (6 decimals)
      },
      {
        address: "0x6B175474E89094C44Da98b954EedeAC495271d0F", // DAI
        minAmount: ethers.parseEther("1") // 1 DAI
      }
    ]
  },
  
  sepolia: {
    treasury: "0x...", // Test treasury
    emergencyAdmin: "0x...", // Test admin
    rewardRate: 1000, // 10% for testing
    minLockPeriod: 3600, // 1 hour for testing
    maxLockPeriod: 86400 * 30, // 1 month for testing
    authorizedTokens: [
      {
        address: "0x...", // Test USDC
        minAmount: "1000000"
      }
    ]
  }
};

async function main() {
  console.log(`Deploying to ${network.name}...`);
  
  const [deployer] = await ethers.getSigners();
  console.log(`Deploying with account: ${deployer.address}`);
  
  const balance = await ethers.provider.getBalance(deployer.address);
  console.log(`Account balance: ${ethers.formatEther(balance)} ETH`);
  
  const config = NETWORK_CONFIG[network.name];
  if (!config) {
    throw new Error(`No configuration for network: ${network.name}`);
  }
  
  // 1. Deploy the proxy contract
  console.log("\n1. Deploying SecureProtocol proxy...");
  const SecureProtocol = await ethers.getContractFactory("SecureProtocol");
  
  const protocol = await upgrades.deployProxy(SecureProtocol, [
    deployer.address, // Initial owner (will be transferred later)
    config.treasury,
    config.emergencyAdmin,
    config.rewardRate,
    config.minLockPeriod,
    config.maxLockPeriod
  ], {
    initializer: 'initialize',
    kind: 'uups'
  });
  
  await protocol.waitForDeployment();
  console.log(`SecureProtocol deployed to: ${protocol.target}`);
  
  // 2. Wait for confirmations
  console.log("\n2. Waiting for confirmations...");
  const deployTx = protocol.deploymentTransaction();
  await deployTx.wait(5);
  
  // 3. Authorize tokens
  console.log("\n3. Authorizing tokens...");
  for (const token of config.authorizedTokens) {
    console.log(`Authorizing ${token.address} with min amount ${token.minAmount}...`);
    const tx = await protocol.authorizeToken(token.address, token.minAmount);
    await tx.wait();
    console.log(`Token authorized: ${tx.hash}`);
  }
  
  // 4. Verify contracts if not local
  if (network.name !== "hardhat" && network.name !== "localhost") {
    console.log("\n4. Verifying contracts on block explorer...");
    
    try {
      // Get implementation address
      const implAddress = await upgrades.erc1967.getImplementationAddress(protocol.target);
      console.log(`Implementation address: ${implAddress}`);
      
      // Verify implementation
      await hre.run("verify:verify", {
        address: implAddress,
        constructorArguments: [],
      });
      
      console.log("Contract verified successfully");
    } catch (error) {
      console.log("Verification failed:", error.message);
    }
  }
  
  // 5. Save deployment info
  const deploymentInfo = {
    network: network.name,
    proxy: protocol.target,
    implementation: await upgrades.erc1967.getImplementationAddress(protocol.target),
    deployer: deployer.address,
    deploymentTransaction: deployTx.hash,
    blockNumber: deployTx.blockNumber,
    timestamp: new Date().toISOString(),
    config: config,
    gasLimit: deployTx.gasLimit.toString()
  };
  
  const deploymentsDir = path.join(__dirname, '..', 'deployments');
  if (!fs.existsSync(deploymentsDir)) {
    fs.mkdirSync(deploymentsDir, { recursive: true });
  }
  
  fs.writeFileSync(
    path.join(deploymentsDir, `${network.name}.json`),
    JSON.stringify(deploymentInfo, null, 2)
  );
  
  console.log(`\n✅ Deployment completed!`);
  console.log(`Proxy: ${protocol.target}`);
  console.log(`Implementation: ${await upgrades.erc1967.getImplementationAddress(protocol.target)}`);
  console.log(`Deployment info saved to: deployments/${network.name}.json`);
  
  // 6. Post-deployment checklist
  console.log(`\n📋 Post-deployment checklist:`);
  console.log(`□ Transfer ownership to multisig`);
  console.log(`□ Setup monitoring and alerts`);
  console.log(`□ Configure treasury token approvals`);
  console.log(`□ Test all functions on testnet first`);
  console.log(`□ Coordinate with frontend team for integration`);
  console.log(`□ Schedule security audit`);
}

// Handle errors
main()
  .then(() => process.exit(0))
  .catch((error) => {
    console.error("Deployment failed:", error);
    process.exit(1);
  });

Decentralized Storage with IPFS

IPFS Node Configuration

# docker-compose-ipfs.yml - IPFS stack with cluster and gateway
version: '3.8'

services:
  # Main IPFS node
  ipfs:
    image: ipfs/go-ipfs:latest
    container_name: ipfs-node
    restart: unless-stopped
    ports:
      - "4001:4001"     # P2P swarm
      - "4001:4001/udp" # P2P swarm UDP
      - "5001:5001"     # API
      - "8080:8080"     # Gateway
    volumes:
      - ipfs-data:/data/ipfs
      - ./ipfs-config:/data/ipfs/config
    environment:
      - IPFS_PROFILE=server
      - IPFS_PATH=/data/ipfs
    command: >
      daemon --init --migrate=true
      --enable-pubsub-experiment
      --enable-namesys-pubsub
      --enable-gc

  # IPFS Cluster (for redundancy)
  ipfs-cluster:
    image: ipfs/ipfs-cluster:latest
    container_name: ipfs-cluster
    restart: unless-stopped
    depends_on:
      - ipfs
    ports:
      - "9094:9094"     # HTTP API
      - "9095:9095"     # Proxy API
      - "9096:9096"     # Cluster swarm
    volumes:
      - cluster-data:/data/ipfs-cluster
    environment:
      - CLUSTER_PEERNAME=cluster-peer-1
      - CLUSTER_SECRET=${CLUSTER_SECRET}
      - CLUSTER_IPFSHTTP_NODEMULTIADDRESS=/dns4/ipfs/tcp/5001
      - CLUSTER_CRDT_TRUSTEDPEERS='*'
      - CLUSTER_RESTAPI_HTTPLISTENMULTIADDRESS=/ip4/0.0.0.0/tcp/9094
      - CLUSTER_IPFSPROXY_LISTENMULTIADDRESS=/ip4/0.0.0.0/tcp/9095

  # Custom IPFS gateway
  ipfs-gateway:
    image: nginx:alpine
    container_name: ipfs-gateway
    restart: unless-stopped
    depends_on:
      - ipfs
    ports:
      - "8081:80"
      - "8443:443"
    volumes:
      - ./nginx-ipfs.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/ssl/certs
    
  # IPFS monitoring
  ipfs-exporter:
    image: prometheus/ipfs-exporter:latest
    container_name: ipfs-exporter
    restart: unless-stopped
    ports:
      - "9401:9401"
    environment:
      - IPFS_API_URL=http://ipfs:5001
    depends_on:
      - ipfs

volumes:
  ipfs-data:
  cluster-data:
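With the stack above running, clients reach the node through the host ports mapped in the compose file. A small helper (hypothetical names, matching the ports above) to build those endpoint URLs:

```javascript
// Hypothetical helper matching the host ports mapped in docker-compose-ipfs.yml.
const IPFS_PORTS = { api: 5001, gateway: 8080, clusterApi: 9094 };

function ipfsEndpoints(host = "localhost", ports = IPFS_PORTS) {
  return {
    api: `http://${host}:${ports.api}/api/v0`,        // go-ipfs HTTP RPC API
    gateway: (cid) => `http://${host}:${ports.gateway}/ipfs/${cid}`,
    clusterApi: `http://${host}:${ports.clusterApi}`, // ipfs-cluster REST API
  };
}

const endpoints = ipfsEndpoints();
console.log(endpoints.gateway("QmExampleCid"));
// -> http://localhost:8080/ipfs/QmExampleCid
```

In production these URLs would sit behind the nginx gateway on 8081/8443 rather than being exposed directly; the API port (5001) in particular should never be reachable from the public internet.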

IPFS Integration with dApps

// utils/ipfs.js - IPFS client for applications
import { useState, useEffect } from 'react'; // used by the useIPFS hook below
import { create } from 'ipfs-http-client';
import { concat as uint8ArrayConcat } from 'uint8arrays/concat';

class IPFSService {
  constructor(config = {}) {
    // Default configuration
    this.config = {
      protocol: config.protocol || 'https',
      host: config.host || 'ipfs.infura.io',
      port: config.port || 5001,
      timeout: config.timeout || 60000,
      ...config
    };

    // IPFS client
    this.ipfs = create({
      protocol: this.config.protocol,
      host: this.config.host,
      port: this.config.port,
      timeout: this.config.timeout,
      headers: config.headers || {}
    });

    // Local cache
    this.cache = new Map();
    this.maxCacheSize = config.maxCacheSize || 100;
  }

  /**
   * Upload a file to IPFS
   */
  async uploadFile(file, options = {}) {
    try {
      const { pin = true, wrapWithDirectory = false } = options;
      
      console.log(`Uploading file to IPFS...`);
      
      const addOptions = {
        pin,
        wrapWithDirectory,
        progress: (prog) => console.log(`Upload progress: ${prog}`)
      };

      const result = await this.ipfs.add(file, addOptions);
      
      console.log(`File uploaded: ${result.cid.toString()}`);
      return {
        cid: result.cid.toString(),
        size: result.size,
        path: result.path
      };
    } catch (error) {
      console.error('IPFS upload error:', error);
      throw new Error(`Failed to upload to IPFS: ${error.message}`);
    }
  }

  /**
   * Upload multiple files
   */
  async uploadFiles(files, options = {}) {
    try {
      const results = [];
      const { pin = true } = options;
      
      for await (const result of this.ipfs.addAll(files, { pin })) {
        results.push({
          cid: result.cid.toString(),
          size: result.size,
          path: result.path
        });
      }
      
      return results;
    } catch (error) {
      console.error('IPFS batch upload error:', error);
      throw new Error(`Failed to upload files to IPFS: ${error.message}`);
    }
  }

  /**
   * Download a file from IPFS
   */
  async downloadFile(cid, options = {}) {
    try {
      // Check the local cache first
      if (this.cache.has(cid)) {
        console.log(`Retrieved from cache: ${cid}`);
        return this.cache.get(cid);
      }

      console.log(`Downloading from IPFS: ${cid}`);
      
      const stream = this.ipfs.cat(cid, {
        timeout: options.timeout || 30000
      });
      
      const chunks = [];
      for await (const chunk of stream) {
        chunks.push(chunk);
      }
      
      const file = uint8ArrayConcat(chunks);
      
      // Store in cache
      this.addToCache(cid, file);
      
      return file;
    } catch (error) {
      console.error('IPFS download error:', error);
      throw new Error(`Failed to download from IPFS: ${error.message}`);
    }
  }

  /**
   * Get file metadata
   */
  async getFileInfo(cid) {
    try {
      const stats = await this.ipfs.object.stat(cid);
      return {
        cid,
        size: stats.CumulativeSize,
        blocks: stats.NumLinks,
        type: await this.getFileType(cid)
      };
    } catch (error) {
      console.error('IPFS file info error:', error);
      throw new Error(`Failed to get file info: ${error.message}`);
    }
  }

  /**
   * Pin a file (keep it on this node)
   */
  async pinFile(cid) {
    try {
      await this.ipfs.pin.add(cid);
      console.log(`File pinned: ${cid}`);
      return true;
    } catch (error) {
      console.error('IPFS pin error:', error);
      return false;
    }
  }

  /**
   * Unpin a file
   */
  async unpinFile(cid) {
    try {
      await this.ipfs.pin.rm(cid);
      console.log(`File unpinned: ${cid}`);
      return true;
    } catch (error) {
      console.error('IPFS unpin error:', error);
      return false;
    }
  }

  /**
   * List pinned files
   */
  async listPinnedFiles() {
    try {
      const pins = [];
      for await (const pin of this.ipfs.pin.ls()) {
        pins.push({
          cid: pin.cid.toString(),
          type: pin.type
        });
      }
      return pins;
    } catch (error) {
      console.error('IPFS list pins error:', error);
      throw new Error(`Failed to list pinned files: ${error.message}`);
    }
  }

  /**
   * Upload JSON metadata (common for NFTs)
   */
  async uploadMetadata(metadata, options = {}) {
    try {
      const jsonString = JSON.stringify(metadata, null, 2);
      const file = new TextEncoder().encode(jsonString);
      
      const result = await this.uploadFile(file, {
        ...options,
        wrapWithDirectory: false
      });
      
      console.log(`Metadata uploaded: ${result.cid}`);
      return result;
    } catch (error) {
      console.error('Metadata upload error:', error);
      throw error;
    }
  }

  /**
   * Download and parse JSON metadata
   */
  async downloadMetadata(cid) {
    try {
      const file = await this.downloadFile(cid);
      const jsonString = new TextDecoder().decode(file);
      return JSON.parse(jsonString);
    } catch (error) {
      console.error('Metadata download error:', error);
      throw new Error(`Failed to download metadata: ${error.message}`);
    }
  }

  /**
   * Build a gateway URL
   */
  getGatewayUrl(cid, gateway = 'https://ipfs.io') {
    return `${gateway}/ipfs/${cid}`;
  }

  /**
   * Check node connectivity
   */
  async checkConnection() {
    try {
      const version = await this.ipfs.version();
      console.log(`Connected to IPFS node: ${version.version}`);
      return true;
    } catch (error) {
      console.error('IPFS connection error:', error);
      return false;
    }
  }

  /**
   * Get node information
   */
  async getNodeInfo() {
    try {
      const [version, id, peers] = await Promise.all([
        this.ipfs.version(),
        this.ipfs.id(),
        this.ipfs.swarm.peers()
      ]);

      return {
        version: version.version,
        nodeId: id.id,
        addresses: id.addresses,
        connectedPeers: peers.length
      };
    } catch (error) {
      console.error('IPFS node info error:', error);
      throw error;
    }
  }

  /**
   * Clear the cache
   */
  clearCache() {
    this.cache.clear();
    console.log('IPFS cache cleared');
  }

  // Private methods
  addToCache(cid, data) {
    if (this.cache.size >= this.maxCacheSize) {
      const firstKey = this.cache.keys().next().value;
      this.cache.delete(firstKey);
    }
    this.cache.set(cid, data);
  }

  async getFileType(cid) {
    try {
      // Read the first bytes to determine the file type
      const stream = this.ipfs.cat(cid, { length: 512 });
      const chunk = await stream.next();
      
      if (!chunk.done) {
        const bytes = chunk.value;
        
        // Magic numbers for common types
        if (bytes[0] === 0xFF && bytes[1] === 0xD8) return 'image/jpeg';
        if (bytes[0] === 0x89 && bytes[1] === 0x50) return 'image/png';
        if (bytes[0] === 0x47 && bytes[1] === 0x49) return 'image/gif';
        if (bytes[0] === 0x25 && bytes[1] === 0x50) return 'application/pdf';
        
        // Try to detect JSON
        try {
          const text = new TextDecoder().decode(bytes);
          JSON.parse(text);
          return 'application/json';
        } catch {
          // Not valid JSON
        }
      }
      
      return 'application/octet-stream';
    } catch {
      return 'unknown';
    }
  }
}

// Additional utilities
export class IPFSMetadata {
  /**
   * Create standard NFT metadata
   */
  static createNFTMetadata({
    name,
    description,
    image,
    attributes = [],
    externalUrl = null,
    animationUrl = null
  }) {
    const metadata = {
      name,
      description,
      image,
      attributes: attributes.map(attr => ({
        trait_type: attr.trait_type,
        value: attr.value,
        ...(attr.display_type && { display_type: attr.display_type })
      }))
    };

    if (externalUrl) metadata.external_url = externalUrl;
    if (animationUrl) metadata.animation_url = animationUrl;

    return metadata;
  }

  /**
   * Validate NFT metadata
   */
  static validateNFTMetadata(metadata) {
    const required = ['name', 'description', 'image'];
    const missing = required.filter(field => !metadata[field]);
    
    if (missing.length > 0) {
      throw new Error(`Missing required fields: ${missing.join(', ')}`);
    }

    return true;
  }
}

// React hook for IPFS
export function useIPFS(config = {}) {
  const [ipfs, setIpfs] = useState(null);
  const [isConnected, setIsConnected] = useState(false);
  const [isLoading, setIsLoading] = useState(false);

  useEffect(() => {
    const initIPFS = async () => {
      setIsLoading(true);
      try {
        const service = new IPFSService(config);
        const connected = await service.checkConnection();
        
        setIpfs(service);
        setIsConnected(connected);
      } catch (error) {
        console.error('Failed to initialize IPFS:', error);
        setIsConnected(false);
      } finally {
        setIsLoading(false);
      }
    };

    initIPFS();
  }, []);

  return { ipfs, isConnected, isLoading };
}

export default IPFSService;
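A quick usage sketch of the `IPFSMetadata` helpers above. The two functions are restated inline (simplified, without the optional fields) so the snippet runs on its own:

```javascript
// Inline restatement of IPFSMetadata.createNFTMetadata / validateNFTMetadata
// from utils/ipfs.js, simplified so this snippet is self-contained.
function createNFTMetadata({ name, description, image, attributes = [] }) {
  return { name, description, image, attributes };
}

function validateNFTMetadata(metadata) {
  const required = ["name", "description", "image"];
  const missing = required.filter((field) => !metadata[field]);
  if (missing.length > 0) {
    throw new Error(`Missing required fields: ${missing.join(", ")}`);
  }
  return true;
}

const metadata = createNFTMetadata({
  name: "Genesis #1",
  description: "First token of the collection",
  image: "ipfs://QmExampleImageCid", // hypothetical CID
  attributes: [{ trait_type: "Rarity", value: "Legendary" }],
});

console.log(validateNFTMetadata(metadata)); // true
```

Validating before calling `uploadMetadata()` avoids pinning malformed JSON that marketplaces would then refuse to render, since the CID of a bad upload can never be "fixed" in place.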

DevOps Tooling for Web3

Complete CI/CD Pipeline

# .github/workflows/web3-cicd.yml - Complete pipeline for a Web3 project
name: Web3 CI/CD Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]

env:
  ETHEREUM_NETWORK: sepolia
  NODE_VERSION: '18'
  PYTHON_VERSION: '3.11'

jobs:
  # Static analysis and linting
  lint-and-analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      
      - name: Install dependencies
        run: npm ci
      
      - name: Solidity linting
        run: npx solhint 'contracts/**/*.sol'
      
      - name: JavaScript linting
        run: npx eslint . --ext .js,.ts
      
      - name: Run Slither analysis
        uses: crytic/slither-action@v0.3.0
        id: slither
        with:
          target: 'contracts/'
          slither-config: 'slither.config.json'
      
      - name: Upload Slither results
        uses: actions/upload-artifact@v3
        with:
          name: slither-report
          path: ${{ steps.slither.outputs.stdout }}

  # Smart contract testing
  test-contracts:
    runs-on: ubuntu-latest
    needs: lint-and-analyze
    steps:
      - uses: actions/checkout@v4
      
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      
      - name: Install dependencies
        run: npm ci
      
      - name: Compile contracts
        run: npx hardhat compile
      
      - name: Run contract tests
        run: npx hardhat test
        env:
          REPORT_GAS: true
      
      - name: Generate coverage report
        run: npx hardhat coverage
      
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage/lcov.info
          flags: smartcontracts
      
      - name: Contract size check
        run: npx hardhat size-contracts

  # Frontend testing
  test-frontend:
    runs-on: ubuntu-latest
    needs: lint-and-analyze
    defaults:
      run:
        working-directory: ./frontend
    steps:
      - uses: actions/checkout@v4
      
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
          cache-dependency-path: './frontend/package-lock.json'
      
      - name: Install dependencies
        run: npm ci
      
      - name: Run frontend tests
        run: npm test -- --coverage --watchAll=false
      
      - name: Build frontend
        run: npm run build
      
      - name: Upload build artifacts
        uses: actions/upload-artifact@v3
        with:
          name: frontend-build
          path: ./frontend/build

  # Security audit
  security-audit:
    runs-on: ubuntu-latest
    needs: [test-contracts, test-frontend]
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4
      
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      
      - name: Install dependencies
        run: npm ci
      
      - name: Run MythX analysis
        run: |
          npm install -g mythx-cli
          mythx analyze contracts/ --mode deep
        env:
          MYTHX_API_KEY: ${{ secrets.MYTHX_API_KEY }}
        continue-on-error: true
      
      - name: Run npm audit
        run: npm audit --audit-level moderate

  # Deploy to testnet
  deploy-testnet:
    runs-on: ubuntu-latest
    needs: [test-contracts, test-frontend, security-audit]
    if: github.ref == 'refs/heads/develop'
    environment: testnet
    steps:
      - uses: actions/checkout@v4
      
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      
      - name: Install dependencies
        run: npm ci
      
      - name: Deploy to Sepolia
        run: npx hardhat run scripts/deploy.js --network sepolia
        env:
          PRIVATE_KEY: ${{ secrets.SEPOLIA_PRIVATE_KEY }}
          ETHERSCAN_API_KEY: ${{ secrets.ETHERSCAN_API_KEY }}
          INFURA_API_KEY: ${{ secrets.INFURA_API_KEY }}
      
      - name: Verify contracts
        run: npx hardhat verify --network sepolia $CONTRACT_ADDRESS
        env:
          ETHERSCAN_API_KEY: ${{ secrets.ETHERSCAN_API_KEY }}
        continue-on-error: true
      
      - name: Deploy frontend to Vercel
        uses: amondnet/vercel-action@v25
        with:
          vercel-token: ${{ secrets.VERCEL_TOKEN }}
          vercel-org-id: ${{ secrets.ORG_ID }}
          vercel-project-id: ${{ secrets.PROJECT_ID }}
          working-directory: ./frontend

  # Deploy to mainnet (releases only)
  deploy-mainnet:
    runs-on: ubuntu-latest
    needs: [test-contracts, test-frontend, security-audit]
    if: startsWith(github.ref, 'refs/tags/')
    environment: 
      name: production
      url: https://app.myprotocol.com
    steps:
      - uses: actions/checkout@v4
      
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      
      - name: Install dependencies
        run: npm ci
      
      - name: Deploy to Ethereum Mainnet
        run: npx hardhat run scripts/deploy-production.js --network mainnet
        env:
          PRIVATE_KEY: ${{ secrets.MAINNET_PRIVATE_KEY }}
          ETHERSCAN_API_KEY: ${{ secrets.ETHERSCAN_API_KEY }}
          INFURA_API_KEY: ${{ secrets.INFURA_API_KEY }}
      
      - name: Verify contracts on Etherscan
        run: npx hardhat verify --network mainnet $CONTRACT_ADDRESS
        env:
          ETHERSCAN_API_KEY: ${{ secrets.ETHERSCAN_API_KEY }}
      
      - name: Deploy frontend to production
        uses: amondnet/vercel-action@v25
        with:
          vercel-token: ${{ secrets.VERCEL_TOKEN }}
          vercel-org-id: ${{ secrets.ORG_ID }}
          vercel-project-id: ${{ secrets.PROJECT_ID }}
          vercel-args: '--prod'
          working-directory: ./frontend
      
      - name: Notify deployment
        uses: 8398a7/action-slack@v3
        with:
          status: custom
          custom_payload: |
            {
              text: "🚀 Mainnet deployment successful!",
              attachments: [{
                color: 'good',
                fields: [{
                  title: 'Contract Address',
                  value: '${{ env.CONTRACT_ADDRESS }}',
                  short: true
                }]
              }]
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}

  # Post-deployment monitoring
  post-deploy-check:
    runs-on: ubuntu-latest
    needs: [deploy-mainnet]
    if: always()
    steps:
      - name: Health check
        run: |
          curl -f ${{ env.HEALTH_CHECK_URL }} || exit 1
        env:
          HEALTH_CHECK_URL: https://app.myprotocol.com/health
      
      - name: Contract verification
        run: |
          # Verify that the contract is deployed and functional
          echo "Verification completed"

Monitoring and Alerts

# monitoring/web3_monitor.py - Web3 monitoring system
import asyncio
import logging
import json
from datetime import datetime, timedelta
from typing import Dict, List, Optional
import aiohttp
from web3 import Web3
from dataclasses import dataclass
import sqlite3
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@dataclass
class AlertConfig:
    name: str
    severity: str
    threshold: float
    cooldown_minutes: int = 15

@dataclass
class MonitoringConfig:
    # Fields without defaults must precede defaulted fields in a dataclass
    rpc_urls: List[str]
    contract_addresses: List[str]
    alerts: List[AlertConfig]
    check_interval_seconds: int = 60
    webhook_url: Optional[str] = None
    email_config: Optional[Dict] = None

class Web3Monitor:
    def __init__(self, config: MonitoringConfig):
        self.config = config
        self.w3_clients = [Web3(Web3.HTTPProvider(url)) for url in config.rpc_urls]
        self.db_path = "monitoring.db"
        self.init_database()
        self.alert_history = {}
        
    def init_database(self):
        """Inicializar base de datos SQLite para métricas"""
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()
        
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS metrics (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
                metric_name TEXT NOT NULL,
                value REAL NOT NULL,
                metadata TEXT
            )
        ''')
        
        cursor.execute('''
            CREATE TABLE IF NOT EXISTS alerts (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
                alert_name TEXT NOT NULL,
                severity TEXT NOT NULL,
                message TEXT NOT NULL,
                resolved BOOLEAN DEFAULT FALSE
            )
        ''')
        
        conn.commit()
        conn.close()

    async def start_monitoring(self):
        """Iniciar bucle de monitoreo"""
        logger.info("Starting Web3 monitoring...")
        
        while True:
            try:
                await self.run_checks()
                await asyncio.sleep(self.config.check_interval_seconds)
            except Exception as e:
                logger.error(f"Monitoring error: {e}")
                await asyncio.sleep(30)  # Wait before retrying

    async def run_checks(self):
        """Ejecutar todas las verificaciones"""
        tasks = [
            self.check_node_connectivity(),
            self.check_block_height(),
            self.check_gas_prices(),
            self.check_contract_states(),
            self.check_transaction_pool(),
            self.check_network_latency()
        ]
        
        results = await asyncio.gather(*tasks, return_exceptions=True)
        
        for i, result in enumerate(results):
            if isinstance(result, Exception):
                logger.error(f"Check {i} failed: {result}")

    async def check_node_connectivity(self):
        """Verificar conectividad de nodos"""
        connected_nodes = 0
        
        for i, w3 in enumerate(self.w3_clients):
            try:
                if w3.is_connected():
                    connected_nodes += 1
                    # Check sync status
                    syncing = w3.eth.syncing
                    if syncing:
                        await self.record_metric(
                            f"node_{i}_sync_progress", 
                            syncing.get('currentBlock', 0) / max(syncing.get('highestBlock', 1), 1)
                        )
                else:
                    await self.trigger_alert(
                        f"node_{i}_disconnected",
                        "critical",
                        f"Node {i} is disconnected"
                    )
            except Exception as e:
                logger.error(f"Node {i} check failed: {e}")
        
        connectivity_ratio = connected_nodes / len(self.w3_clients)
        await self.record_metric("node_connectivity_ratio", connectivity_ratio)
        
        if connectivity_ratio < 0.5:
            await self.trigger_alert(
                "low_node_connectivity",
                "critical",
                f"Only {connected_nodes}/{len(self.w3_clients)} nodes connected"
            )

    async def check_block_height(self):
        """Verificar altura de bloques"""
        block_heights = []
        
        for w3 in self.w3_clients:
            try:
                if w3.is_connected():
                    height = w3.eth.block_number
                    block_heights.append(height)
            except Exception as e:
                logger.error(f"Block height check failed: {e}")
        
        if block_heights:
            max_height = max(block_heights)
            min_height = min(block_heights)
            
            await self.record_metric("max_block_height", max_height)
            await self.record_metric("min_block_height", min_height)
            
            # Check block height divergence between nodes
            if max_height - min_height > 10:
                await self.trigger_alert(
                    "block_height_divergence",
                    "warning",
                    f"Block height divergence: {max_height - min_height} blocks"
                )

    async def check_gas_prices(self):
        """Monitorear precios de gas"""
        for w3 in self.w3_clients:
            try:
                if w3.is_connected():
                    gas_price = w3.eth.gas_price
                    # from_wei returns a Decimal; convert so sqlite3 can store it
                    gas_price_gwei = float(w3.from_wei(gas_price, 'gwei'))
                    
                    await self.record_metric("gas_price_gwei", gas_price_gwei)
                    
                    # Alert on high gas prices
                    if gas_price_gwei > 100:  # > 100 gwei
                        await self.trigger_alert(
                            "high_gas_price",
                            "warning",
                            f"High gas price: {gas_price_gwei:.2f} gwei"
                        )
                    break
            except Exception as e:
                logger.error(f"Gas price check failed: {e}")

    async def check_contract_states(self):
        """Verificar estado de contratos"""
        for contract_address in self.config.contract_addresses:
            try:
                # Check that the contract exists
                w3 = self.w3_clients[0]  # Use the first available node
                if not w3.is_connected():
                    continue
                
                code = w3.eth.get_code(contract_address)
                if code == b'':
                    await self.trigger_alert(
                        f"contract_not_found_{contract_address}",
                        "critical",
                        f"Contract {contract_address} not found or destroyed"
                    )
                else:
                    # Check the balance if needed (from_wei returns a Decimal)
                    balance = w3.eth.get_balance(contract_address)
                    await self.record_metric(
                        f"contract_balance_{contract_address}",
                        float(w3.from_wei(balance, 'ether'))
                    )
                    
            except Exception as e:
                logger.error(f"Contract check failed for {contract_address}: {e}")

    async def check_transaction_pool(self):
        """Monitorear pool de transacciones"""
        for w3 in self.w3_clients:
            try:
                if w3.is_connected():
                    # Fetch txpool statistics (requires the debug API)
                    try:
                        txpool_status = w3.manager.request_blocking("txpool_status", [])
                        pending_count = int(txpool_status.get('pending', '0x0'), 16)
                        queued_count = int(txpool_status.get('queued', '0x0'), 16)
                        
                        await self.record_metric("txpool_pending", pending_count)
                        await self.record_metric("txpool_queued", queued_count)
                        
                        # Alert on a congested txpool
                        if pending_count > 10000:
                            await self.trigger_alert(
                                "txpool_congested",
                                "warning",
                                f"Transaction pool congested: {pending_count} pending"
                            )
                    except Exception:
                        # debug API not available
                        pass
                    break
            except Exception as e:
                logger.error(f"Txpool check failed: {e}")

    async def check_network_latency(self):
        """Verificar latencia de la red"""
        for w3 in self.w3_clients:
            try:
                if w3.is_connected():
                    # Make a simple call and time it to measure latency
                    start_time = datetime.now()
                    w3.eth.block_number
                    
                    latency = (datetime.now() - start_time).total_seconds()
                    await self.record_metric("network_latency_seconds", latency)
                    
                    if latency > 5:  # more than 5 seconds
                        await self.trigger_alert(
                            "high_network_latency",
                            "warning",
                            f"High network latency: {latency:.2f}s"
                        )
                    break
            except Exception as e:
                logger.error(f"Latency check failed: {e}")

    async def record_metric(self, name: str, value: float, metadata: str = None):
        """Grabar métrica en base de datos"""
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()
        
        cursor.execute(
            "INSERT INTO metrics (metric_name, value, metadata) VALUES (?, ?, ?)",
            (name, value, metadata)
        )
        
        conn.commit()
        conn.close()
        
        logger.debug(f"Recorded metric: {name} = {value}")

    async def trigger_alert(self, alert_name: str, severity: str, message: str):
        """Disparar alerta"""
        # Verificar cooldown
        now = datetime.now()
        last_alert = self.alert_history.get(alert_name)
        
        if last_alert and (now - last_alert) timedelta(minutes=15):
            return  # En cooldown
        
        self.alert_history[alert_name] = now
        
        # Record the alert
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()
        
        cursor.execute(
            "INSERT INTO alerts (alert_name, severity, message) VALUES (?, ?, ?)",
            (alert_name, severity, message)
        )
        
        conn.commit()
        conn.close()
        
        logger.warning(f"ALERT [{severity}] {alert_name}: {message}")
        
        # Send notifications
        await self.send_webhook_notification(alert_name, severity, message)
        await self.send_email_notification(alert_name, severity, message)

    async def send_webhook_notification(self, alert_name: str, severity: str, message: str):
        """Enviar notificación por webhook (Discord, Slack, etc.)"""
        if not self.config.webhook_url:
            return
        
        payload = {
            "content": f"🚨 **{severity.upper()}** Alert: {alert_name}",
            "embeds": [{
                "title": alert_name,
                "description": message,
                "color": {"critical": 16711680, "warning": 16776960, "info": 65535}.get(severity, 8421504),
                "timestamp": datetime.now().isoformat(),
                "fields": [
                    {"name": "Severity", "value": severity.upper(), "inline": True},
                    {"name": "Time", "value": datetime.now().strftime("%H:%M:%S UTC"), "inline": True}
                ]
            }]
        }
        
        try:
            async with aiohttp.ClientSession() as session:
                async with session.post(self.config.webhook_url, json=payload) as response:
                    if response.status not in (200, 204):
                        logger.error(f"Webhook failed with status {response.status}")
        except Exception as e:
            logger.error(f"Webhook notification failed: {e}")

    async def send_email_notification(self, alert_name: str, severity: str, message: str):
        """Enviar notificación por email"""
        if not self.config.email_config:
            return
        
        try:
            msg = MIMEMultipart()
            msg['From'] = self.config.email_config['from']
            msg['To'] = ', '.join(self.config.email_config['to'])
            msg['Subject'] = f"Web3 Alert [{severity.upper()}]: {alert_name}"
            
            body = f"""
            Alert Details:
            
            Name: {alert_name}
            Severity: {severity.upper()}
            Message: {message}
            Time: {datetime.now().isoformat()}
            
            This is an automated message from Web3 Monitor.
            """
            
            msg.attach(MIMEText(body, 'plain'))
            
            server = smtplib.SMTP(
                self.config.email_config['smtp_host'], 
                self.config.email_config['smtp_port']
            )
            server.starttls()
            server.login(
                self.config.email_config['username'], 
                self.config.email_config['password']
            )
            
            server.send_message(msg)
            server.quit()
            
        except Exception as e:
            logger.error(f"Email notification failed: {e}")

    async def get_metrics_summary(self, hours: int = 24) -> Dict:
        """Obtener resumen de métricas"""
        conn = sqlite3.connect(self.db_path)
        cursor = conn.cursor()
        
        since = datetime.now() - timedelta(hours=hours)
        
        cursor.execute('''
            SELECT metric_name, AVG(value), MIN(value), MAX(value), COUNT(*)
            FROM metrics
            WHERE timestamp >= ?
            GROUP BY metric_name
        ''', (since,))
        
        results = cursor.fetchall()
        conn.close()
        
        summary = {}
        for name, avg, min_val, max_val, count in results:
            summary[name] = {
                'average': avg,
                'minimum': min_val,
                'maximum': max_val,
                'samples': count
            }
        
        return summary

# Example configuration
async def main():
    config = MonitoringConfig(
        rpc_urls=[
            "https://mainnet.infura.io/v3/YOUR_KEY",
            "https://eth-mainnet.alchemyapi.io/v2/YOUR_KEY"
        ],
        contract_addresses=[
            "0x1234567890123456789012345678901234567890",  # Tu contrato
        ],
        check_interval_seconds=30,
        alerts=[
            AlertConfig("node_connectivity", "critical", 0.5),
            AlertConfig("high_gas_price", "warning", 100),
        ],
        webhook_url="https://discord.com/api/webhooks/YOUR_WEBHOOK",
        email_config={
            'smtp_host': 'smtp.gmail.com',
            'smtp_port': 587,
            'username': 'alerts@yourcompany.com',
            'password': 'your_password',
            'from': 'alerts@yourcompany.com',
            'to': ['team@yourcompany.com']
        }
    )
    
    monitor = Web3Monitor(config)
    await monitor.start_monitoring()

if __name__ == "__main__":
    asyncio.run(main())
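To complement get_metrics_summary above, the resulting dict can be rendered as a plain-text report. This is a minimal sketch; format_summary is an illustrative helper, not part of the monitor:

```python
def format_summary(summary: dict) -> str:
    """Render the dict returned by get_metrics_summary as an aligned text table."""
    lines = [f"{'metric':<32} {'avg':>10} {'min':>10} {'max':>10} {'n':>6}"]
    for name, stats in sorted(summary.items()):
        lines.append(
            f"{name:<32} {stats['average']:>10.2f} {stats['minimum']:>10.2f} "
            f"{stats['maximum']:>10.2f} {stats['samples']:>6d}"
        )
    return "\n".join(lines)

# Example with synthetic data:
print(format_summary({
    "gas_price_gwei": {"average": 42.5, "minimum": 30.0, "maximum": 95.0, "samples": 120},
    "node_connectivity_ratio": {"average": 1.0, "minimum": 0.5, "maximum": 1.0, "samples": 120},
}))
```

A report like this could be generated by a daily cron job or attached to the email notifications above.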

Future of Web3 Infrastructure

Emerging Technologies

Web3 infrastructure continues to evolve rapidly, with new technologies that promise to solve today's challenges:

1. Next-Generation Scalability

  • zkEVMs: Ethereum Virtual Machines with zero-knowledge proofs
  • Modular blockchains: Separating consensus, execution, and data availability
  • Danksharding: Massive scalability for Ethereum 2.0

2. Advanced Interoperability

  • LayerZero protocol: Native omnichain communication
  • Polkadot parachains: Interoperable blockchain ecosystems
  • Cosmos IBC: The Internet of Blockchains

3. Developer Tooling

  • Account Abstraction (ERC-4337): Simplified wallet UX
  • No-code frameworks: Web3 development without programming
  • AI-powered auditing: Automated smart contract audits
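To illustrate the Account Abstraction item above: the core object of ERC-4337 is the UserOperation, which replaces a regular transaction for smart accounts. A simplified sketch of its fields (names mirror the spec, snake_cased here; types are simplified, and this dataclass is illustrative, not a library type):

```python
from dataclasses import dataclass

@dataclass
class UserOperation:
    """Simplified view of an ERC-4337 UserOperation."""
    sender: str                    # smart account address
    nonce: int
    init_code: bytes               # account deployment code; empty if already deployed
    call_data: bytes               # the call the account should execute
    call_gas_limit: int
    verification_gas_limit: int
    pre_verification_gas: int
    max_fee_per_gas: int
    max_priority_fee_per_gas: int
    paymaster_and_data: bytes      # empty if the account pays its own gas
    signature: bytes
```

A bundler collects these objects and submits them in batches to the EntryPoint contract, which is what lets wallets sponsor gas, batch calls, or use custom signature schemes.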

Enterprise Adoption Roadmap

Phase 1: Exploration (0-6 months)

  • Team education in Web3 technologies
  • Development of basic proofs of concept
  • Development environment setup

Phase 2: Pilot Implementation (6-12 months)

  • Testnet deployments
  • Integration with existing systems
  • Establishing Web3 DevOps processes

Phase 3: Production (12+ months)

  • Mainnet launch
  • Continuous monitoring and optimization
  • Scaling and new features

Conclusion

Web3 infrastructure represents a fundamental shift in how we build and operate distributed systems. For DevOps professionals, mastering these technologies means:

  • New competencies: Blockchain, smart contracts, P2P networks
  • Specialized tooling: Hardhat, IPFS, blockchain nodes
  • Expanded responsibilities: Cryptographic security, key management
  • Unique opportunities: Helping build the future of the internet

The combination of blockchain, decentralized storage, and smart contracts is creating a new paradigm in which infrastructure is truly owned by its users. As DevOps professionals, we have the opportunity to lead this transition and build the systems that will define the next era of the internet.

The future is decentralized, and Web3 infrastructure is the key to making it a reality.
