Protocol Overview

Centrix operates as a distributed protocol with no central authority. The network consists of independent participants who coordinate through smart contracts, peer-to-peer communication, and cryptographic mechanisms.

Network Participants

Requestors (Demand Side)
  • Submit computational tasks to the network
  • Specify resource requirements
  • Set maximum acceptable prices
  • Verify results
Providers (Supply Side)
  • Register hardware with the network
  • Set pricing and availability schedules
  • Bid on tasks
  • Execute tasks in isolated environments
  • Submit results for verification
Validators
  • Operate verification nodes
  • Validate computation results
  • Maintain consensus on task outcomes
  • Earn fees for services; can be slashed for dishonest behavior
Token Holders
  • Provide liquidity and stability
  • Earn staking rewards from protocol fees
  • Participate in governance decisions

Task Lifecycle

A typical task progresses through the following states:
1. SUBMITTED → Task posted by requestor
2. BIDDING → Providers submit competing bids
3. ASSIGNED → Requestor selects winning provider
4. EXECUTING → Provider processes the task
5. COMPLETED → Results submitted to network
6. VERIFYING → Validators check correctness
7. SETTLED → Payment released, reputation updated
State Transitions:
  • Automatic progression for successful tasks
  • Timeout mechanisms prevent stalling
  • Dispute resolution for contested results
  • Checkpoint recovery for interrupted tasks
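The lifecycle above is a small state machine, and enforcing it in code is straightforward. A minimal sketch (the state names come from this page; the `DISPUTED` branch and the `advance` helper are illustrative):

```python
from enum import Enum, auto

class TaskState(Enum):
    SUBMITTED = auto()
    BIDDING = auto()
    ASSIGNED = auto()
    EXECUTING = auto()
    COMPLETED = auto()
    VERIFYING = auto()
    SETTLED = auto()
    DISPUTED = auto()  # illustrative branch for contested results

# Allowed transitions: the happy path plus the dispute-resolution detour
TRANSITIONS = {
    TaskState.SUBMITTED: {TaskState.BIDDING},
    TaskState.BIDDING: {TaskState.ASSIGNED},
    TaskState.ASSIGNED: {TaskState.EXECUTING},
    TaskState.EXECUTING: {TaskState.COMPLETED},
    TaskState.COMPLETED: {TaskState.VERIFYING},
    TaskState.VERIFYING: {TaskState.SETTLED, TaskState.DISPUTED},
    TaskState.DISPUTED: {TaskState.SETTLED},
    TaskState.SETTLED: set(),  # terminal state
}

def advance(state: TaskState, target: TaskState) -> TaskState:
    """Move to `target` only if the protocol allows that transition."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {target.name}")
    return target
```

Timeout and checkpoint handling would hang off the same transition table, e.g. by forcing `EXECUTING → BIDDING` when a provider goes unresponsive.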

Economic Primitives

CXT Token
  • Native currency of the Centrix network
  • Required for all task payments
  • Staked by validators and premium providers
  • ERC-20 compatible for broad accessibility
Pricing Mechanism
  • Dynamic pricing based on supply and demand
  • Providers set base rates + bid adjustments
  • Requestors set maximum acceptable price
  • Market clearing through competitive bidding
  • Premium pricing for urgent tasks
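The clearing rule described above reduces to: accept the cheapest bid that respects the requestor's price ceiling. A hedged sketch (function and field names are illustrative, not a protocol API):

```python
def clear_market(bids: dict, max_price: float):
    """Pick the cheapest bid at or below the requestor's maximum.

    `bids` maps provider IDs to quoted prices in CXT; returns the winning
    provider ID, or None if no bid clears the ceiling.
    """
    eligible = {pid: price for pid, price in bids.items() if price <= max_price}
    if not eligible:
        return None
    return min(eligible, key=eligible.get)
```

Premium pricing for urgent tasks would enter here as a higher `max_price`, widening the set of eligible bids.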
Fee Structure
Total Cost = Provider Payment + Protocol Fee
Protocol Fee = 5% of Provider Payment

Distribution:
- 60% → Token stakers (passive income)
- 25% → Validators (active verification)
- 10% → Development fund (ongoing R&D)
- 5% → Insurance pool (dispute resolution)
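Using the numbers above (a 5% protocol fee on the provider payment, split four ways), the settlement arithmetic works out as follows; the helper itself is a worked example, not protocol code:

```python
FEE_RATE = 0.05  # protocol fee: 5% of the provider payment

# Fee distribution, per the table above
FEE_SPLIT = {"stakers": 0.60, "validators": 0.25, "dev_fund": 0.10, "insurance": 0.05}

def settle(provider_payment: float) -> dict:
    """Return the total cost and the protocol-fee breakdown for a task."""
    fee = provider_payment * FEE_RATE
    breakdown = {k: fee * share for k, share in FEE_SPLIT.items()}
    return {"total": provider_payment + fee, "fee": fee, **breakdown}
```

For a 50 CXT provider payment this gives a 2.5 CXT fee and a 52.5 CXT total, which matches the escrow figure used in the task specification below.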

Protocol Basics

Task Specification

Tasks are defined using a standardized specification format:
{
  "taskId": "0x...",
  "requestor": "0x...",
  "requirements": {
    "cpu": "8 cores",
    "memory": "32 GB",
    "storage": "100 GB",
    "gpu": "RTX 3080 or equivalent",
    "network": "1 Gbps",
    "duration": "2 hours estimated"
  },
  "input": {
    "type": "ipfs",
    "cid": "Qm..."
  },
  "container": {
    "image": "centrix/blender:latest",
    "command": ["blender", "-b", "scene.blend", "-o", "/output/frame####", "-f", "1..100"]
  },
  "verification": {
    "method": "redundant",
    "redundancy": 3,
    "threshold": 0.67
  },
  "payment": {
    "maxPrice": "50 CXT",
    "escrow": "52.5 CXT"
  }
}
Key Components:
  • Requirements: Hardware specifications needed
  • Input: Data sources (IPFS, HTTP, S3-compatible)
  • Container: Docker image and execution command
  • Verification: How results will be validated
  • Payment: Pricing and escrow details
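One sanity check a client might run before submission: the escrow must cover the maximum price plus the 5% protocol fee. A minimal sketch, assuming amounts are strings like "50 CXT" as in the spec above (the `check_escrow` helper is illustrative):

```python
def check_escrow(spec: dict) -> bool:
    """Verify the escrowed amount covers maxPrice plus the 5% protocol fee."""
    def cxt(field: str) -> float:
        # Payment amounts in the spec are strings of the form "50 CXT"
        return float(spec["payment"][field].split()[0])
    return cxt("escrow") >= cxt("maxPrice") * 1.05
```

With the values from the example spec (maxPrice 50 CXT, escrow 52.5 CXT) the check passes exactly.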

Provider Discovery

Providers advertise their capabilities through a distributed hash table (DHT).
Provider Registration:
{
  "providerId": "0x...",
  "hardware": {
    "cpu": "AMD Ryzen 9 5950X (16 cores)",
    "memory": "64 GB DDR4",
    "storage": "2 TB NVMe",
    "gpu": ["NVIDIA RTX 3090", "NVIDIA RTX 3080 Ti"],
    "network": "10 Gbps"
  },
  "pricing": {
    "cpuPerCoreHour": "0.5 CXT",
    "gpuPerHour": "5 CXT",
    "storagePerGB": "0.01 CXT"
  },
  "reputation": {
    "score": 4.8,
    "completedTasks": 1523,
    "successRate": 99.2,
    "avgResponseTime": "45 seconds"
  },
  "availability": {
    "schedule": "24/7",
    "nextAvailable": "immediate"
  }
}
Matching Algorithm:
  1. Filter providers meeting minimum requirements
  2. Rank by combination of price and reputation
  3. Request bids from top N candidates
  4. Requestor selects based on bid competitiveness
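Step 2 of the algorithm, ranking by a combination of price and reputation, can be sketched as a weighted score over normalized values (the 0.6/0.4 weights and field names are illustrative assumptions, not protocol constants):

```python
def rank_providers(providers: list, top_n: int = 3) -> list:
    """Rank providers by a weighted mix of price and reputation.

    Each entry is a dict: {"id", "price" (CXT/hour), "score" (0-5 reputation)}.
    Lower price and higher reputation both improve the rank; the top N IDs
    are the candidates bids would be requested from.
    """
    max_price = max(p["price"] for p in providers)
    def goodness(p):
        cheapness = 1.0 - p["price"] / max_price  # 1.0 = free, 0.0 = most expensive
        reputation = p["score"] / 5.0             # normalize the 0-5 scale to 0-1
        return 0.6 * cheapness + 0.4 * reputation
    ranked = sorted(providers, key=goodness, reverse=True)
    return [p["id"] for p in ranked[:top_n]]
```

Filtering on hardware requirements (step 1) would run before this, dropping any provider whose registered capabilities fall short of the task spec.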

Execution Environment

Tasks execute in containerized environments with several layers of protection.
Security Isolation:
  • Separate namespaces (PID, network, mount)
  • Resource limits (cgroups)
  • Read-only root filesystem
  • Dropped dangerous capabilities
  • seccomp/AppArmor profiles
Resource Monitoring:
  • Real-time CPU/memory/network tracking
  • Alert system for limit violations
  • Automatic throttling if exceeded
  • Usage data for billing accuracy
Fault Tolerance:
  • Regular checkpointing of progress
  • State saved to persistent storage
  • Restart from last checkpoint on failure
  • Automatic provider reassignment if unresponsive
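The checkpoint-and-restart behavior above can be sketched for a frame-by-frame workload: progress is persisted after each unit of work, so a restarted run skips everything already completed (file layout and function names are illustrative):

```python
import json
import os

def run_with_checkpoints(frames: list, ckpt_path: str, render) -> list:
    """Process `frames` in order, persisting progress so a restart resumes
    from the last checkpoint instead of starting over."""
    done = set()
    if os.path.exists(ckpt_path):
        with open(ckpt_path) as f:
            done = set(json.load(f))      # recover progress from a prior run
    for frame in frames:
        if frame in done:
            continue                      # already rendered before the interruption
        render(frame)
        done.add(frame)
        with open(ckpt_path, "w") as f:   # checkpoint after each unit of work
            json.dump(sorted(done), f)
    return sorted(done)
```

Provider reassignment composes naturally with this: a replacement provider pointed at the same checkpoint file (fetched from persistent storage) resumes without redoing finished frames.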

Result Verification

Centrix employs multiple verification strategies depending on task criticality:
1. Redundant Execution
  • Task executed by 3+ independent providers
  • Results compared using cryptographic hashes
  • Consensus determined by majority agreement
  • Dissenters penalized through reputation slashing
2. Zero-Knowledge Proofs
  • Provider generates ZK proof of correct execution
  • Validators verify proof without re-execution
  • Dramatically reduces verification cost
  • Applicable to certain computational classes
3. Spot Checking
  • Random sampling of task results
  • Re-execution on trusted validator nodes
  • Statistical guarantee of correctness
  • Cost-effective for large batches
4. Economic Stakes
  • Providers stake CXT proportional to task value
  • Stake slashed if fraud detected
  • Makes dishonesty economically irrational
  • Bonds returned after successful verification
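Strategy 1 (redundant execution) reduces to hashing each provider's result and accepting the value that reaches the agreement threshold. A minimal sketch, interpreting the example spec's `"threshold": 0.67` as a two-thirds majority (the function itself is illustrative):

```python
import hashlib
from collections import Counter

def redundant_consensus(results: dict, threshold: float = 2 / 3):
    """Compare result hashes from independent providers.

    `results` maps provider IDs to raw result bytes. Returns
    (winning_hash, dissenting_provider_ids), or (None, []) if no hash
    reaches the threshold; dissenters would face reputation slashing.
    """
    hashes = {pid: hashlib.sha256(r).hexdigest() for pid, r in results.items()}
    digest, votes = Counter(hashes.values()).most_common(1)[0]
    if votes / len(results) < threshold:
        return None, []
    dissenters = [pid for pid, h in hashes.items() if h != digest]
    return digest, dissenters
```

With redundancy 3, a 2-of-3 agreement accepts the result and flags the single dissenter; a three-way split yields no consensus and would escalate to dispute resolution.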

Architecture

System Components

Layer 1: Blockchain Layer
  • Ethereum mainnet for settlements and token
  • Smart contracts for payments and escrow
  • Layer 2 (Optimistic Rollups) for scalability
  • Cross-chain bridges for multi-chain support
Layer 2: Coordination Layer
  • Task queue and distribution
  • Provider discovery (DHT)
  • Bid collection and matching
  • State management and consensus
Layer 3: Execution Layer
  • Containerized compute environments
  • Resource isolation and monitoring
  • Result collection and submission
  • Local storage and caching
Layer 4: Verification Layer
  • Result validation nodes
  • Consensus mechanisms
  • Dispute resolution
  • Reputation tracking
Layer 5: Data Layer
  • IPFS for distributed storage
  • BitTorrent for large file distribution
  • CDN integration for performance
  • Encrypted data transmission

Network Topology

                  ┌──────────────────┐
                  │   Blockchain     │
                  │   (Ethereum L1)  │
                  └────────┬─────────┘

              ┌────────────┴────────────┐
              │                         │
      ┌───────▼──────┐         ┌───────▼──────┐
      │  L2 Rollup   │         │ Token Bridge │
      │   (Scaling)  │         │ (Multi-chain)│
      └───────┬──────┘         └───────┬──────┘
              │                        │
              └────────────┬───────────┘

                  ┌────────▼─────────┐
                  │  Task Manager    │
                  │  (Coordination)  │
                  └────────┬─────────┘

        ┌──────────────────┼──────────────────┐
        │                  │                  │
┌───────▼──────┐  ┌────────▼────────┐  ┌─────▼────────┐
│  Requestor   │  │  Provider Pool  │  │  Validators  │
│   Network    │  │  (Execution)    │  │  (Verify)    │
└──────────────┘  └─────────────────┘  └──────────────┘

                  ┌────────▼─────────┐
                  │   Data Layer     │
                  │  (IPFS/BT/CDN)   │
                  └──────────────────┘

Communication Protocols

Peer-to-Peer (P2P)
  • libp2p protocol stack
  • Gossipsub for message propagation
  • DHT (Kademlia) for peer discovery
  • NAT traversal (hole punching, TURN relays)
API Interfaces
  • RESTful HTTP API for requestors
  • WebSocket for real-time updates
  • gRPC for high-performance internal communication
  • GraphQL for complex queries
Data Transfer
  • IPFS for content-addressed storage
  • BitTorrent for redundant downloads
  • Direct peer-to-peer when possible
  • Fallback to CDN for reliability

Smart Contract Architecture

Core Contracts:
  1. TaskManager.sol - Task lifecycle, bid management, escrow
  2. PaymentProcessor.sol - Token transfers, fee distribution, slashing
  3. ReputationRegistry.sol - Reputation scores, performance data, history
  4. VerificationArbitration.sol - Dispute resolution, validator voting, appeals
Contract Flow:
// Task submission
1. Requestor approves CXT spending
2. TaskManager.createTask() called
3. Escrow locked in contract
4. TaskCreated event emitted

// Provider matching
5. Providers submit bids off-chain
6. Requestor calls selectProvider()
7. Task assigned, execution begins

// Payment settlement
8. Provider submits results
9. Verification process initiated
10. Validators confirm correctness
11. PaymentProcessor.releasePayment()
12. Fees distributed to stakeholders
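The escrow accounting behind steps 3, 11, and 12 can be mirrored off-chain as a toy model (contract names come from the list above; this Python class is purely illustrative and not the Solidity implementation):

```python
class EscrowSim:
    """Toy model of the TaskManager/PaymentProcessor escrow flow."""

    def __init__(self):
        self.balances = {}  # address -> spendable CXT
        self.locked = {}    # taskId -> escrowed CXT

    def create_task(self, requestor: str, task_id: str, escrow: float):
        # Step 3: escrow locked in the contract on createTask()
        if self.balances.get(requestor, 0.0) < escrow:
            raise ValueError("insufficient CXT approved")
        self.balances[requestor] -= escrow
        self.locked[task_id] = escrow

    def release_payment(self, task_id: str, provider: str, fee_rate: float = 0.05):
        # Steps 11-12: pay the provider, return the protocol fee for distribution
        escrow = self.locked.pop(task_id)
        payment = escrow / (1 + fee_rate)  # escrow = payment + 5% fee
        self.balances[provider] = self.balances.get(provider, 0.0) + payment
        return escrow - payment            # the fee routed to stakeholders
```

For the 52.5 CXT escrow from the example spec, releasing payment credits the provider 50 CXT and leaves 2.5 CXT of fees to distribute per the split in the Fee Structure section.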

Scalability Architecture

Horizontal Scaling
  • Add more providers → Linear capacity increase
  • Sharded task queues for parallelism
  • Geographic distribution reduces latency
Layer 2 Solutions
  • Optimistic Rollups for 10,000+ TPS
  • Batch processing for gas efficiency
  • Off-chain computation, on-chain settlement
Caching Strategies
  • Provider metadata cached locally
  • Popular task templates pre-loaded
  • IPFS data pinned on multiple nodes
  • CDN for static resources
Performance Targets
  • Task Submission: < 1 second
  • Provider Matching: < 5 seconds
  • Result Verification: < 60 seconds
  • Payment Settlement: < 2 minutes
  • Network Capacity: 1M+ concurrent tasks