Block: —
Finalized: —
Validators: —
Blocks/sec: —
TPS: —

Chain
Total Transactions: —
Accounts: —
Peers: —
Uptime: —
Codebase: 101K LOC Rust
Tests: 1,234 passing
Regions: 6 continents
Nodes: — (6 continents)
Sharded AI — One Model, Split Across the Network [PIPELINE-PARALLEL]
A model that cannot fit on any single node is split at transformer layer boundaries. Each request flows through the pipeline of shard-holders over HTTP, with BLAKE3 hashes verifying every hidden state in transit. Pure i64 arithmetic means 0% quality loss versus the source GGUF.
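For illustration, here is a minimal sketch of the in-transit check a shard-holder might perform before running its layer range. The struct, field names, and `verify` helper are hypothetical (only the shared BLAKE3 model_id, the hashing of i64 hidden states, and the pipeline idea come from this page), and it assumes the blake3 crate:

```rust
// Hypothetical request shape for one hop in the shard pipeline.
struct ShardRequest {
    model_id: String,       // BLAKE3 id of the source GGUF; must match on every shard
    hidden_state: Vec<i64>, // pure-integer activations from the previous shard
    state_hash: String,     // BLAKE3 hex digest claimed by the sender
}

// Hash the raw little-endian bytes of the i64 activations.
fn hash_state(state: &[i64]) -> String {
    let bytes: Vec<u8> = state.iter().flat_map(|v| v.to_le_bytes()).collect();
    blake3::hash(&bytes).to_hex().to_string()
}

fn verify(req: &ShardRequest, local_model_id: &str) -> Result<(), &'static str> {
    // Refuse to serve if this shard was built from a different GGUF.
    if req.model_id != local_model_id {
        return Err("model_id mismatch: pipeline output would be untrustworthy");
    }
    // Recompute the BLAKE3 digest and compare against the sender's claim.
    if hash_state(&req.hidden_state) != req.state_hash {
        return Err("hidden-state hash mismatch: corrupted in transit");
    }
    Ok(()) // safe to run this node's layer range and forward the result
}

fn main() {
    let state = vec![1i64, -2, 3];
    let req = ShardRequest {
        model_id: "abc123".into(),
        state_hash: hash_state(&state),
        hidden_state: state,
    };
    assert!(verify(&req, "abc123").is_ok());
}
```

Because the activations are pure i64, the hashed bytes are bit-identical on every platform, so a single BLAKE3 digest per hop is enough to detect tampering or corruption.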
Model: —
Total Layers: —
Shards: —
Runs Served: —
Bytes Forwarded: —
🔐 Model identity verified
All shards must report the same BLAKE3 model_id; otherwise the pipeline cannot produce trustworthy output.
If you tried this on 1 node:
✗ OOM · — GB full-model RAM. Most consumer/cloud nodes top out at 8 GB. Won't load.

Sharded across the network:
✓ FITS · — GB / node. Each node holds ~ layers; together they hold the full model. Loads everywhere.
Live Pipeline · Each box is one node holding a contiguous layer range
Hover for details · Click ▶ to send a token through
No shards announced yet. Start nodes with --shard-start and --shard-end to announce a contiguous layer range (one way to split the ranges is sketched below).
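A minimal sketch of how contiguous layer ranges could be split across nodes. The `shard_ranges` helper is hypothetical; only the --shard-start/--shard-end flags and the contiguous-range requirement come from this page:

```rust
// Split `total_layers` into `n_shards` contiguous, inclusive ranges that
// together cover the whole model, matching --shard-start / --shard-end.
fn shard_ranges(total_layers: usize, n_shards: usize) -> Vec<(usize, usize)> {
    let base = total_layers / n_shards;
    let extra = total_layers % n_shards; // first `extra` shards take one more layer
    let mut ranges = Vec::with_capacity(n_shards);
    let mut start = 0;
    for i in 0..n_shards {
        let len = base + usize::from(i < extra);
        ranges.push((start, start + len - 1)); // inclusive [shard_start, shard_end]
        start += len;
    }
    ranges
}

fn main() {
    // e.g. 32 transformer layers over 5 nodes: (0,6) (7,13) (14,19) (20,25) (26,31)
    for (s, e) in shard_ranges(32, 5) {
        println!("--shard-start {s} --shard-end {e}");
    }
}
```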
Try:
🚀
Join the network as a shard holder in one command
Persistent service. Daily auto-update. Auto-detects your platform. macOS arm64/x86, Linux x86_64/aarch64. ~3 minutes from curl to running node.
curl -sSL https://raw.githubusercontent.com/FerrumVir/arc-chain/main/scripts/install-community-node.sh | bash
Or run the demo end-to-end without installing:
curl -sSL https://raw.githubusercontent.com/FerrumVir/arc-chain/main/scripts/arc-demo.sh | bash

Or run the same prompt on every device in parallel (load balancing demo)
Parallel AI Inference (Load Balancing) [LIVE]
Each node runs a full copy of the SAME model. Same prompt → same hash on every device → cross-platform determinism proven.
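A minimal sketch of that determinism check under stated assumptions: `run_inference` here is a pure-i64 stand-in for a node's real HTTP inference call (the actual model and transport are not shown on this page), and the blake3 crate supplies the hash:

```rust
use std::collections::HashSet;

// Stand-in for one node's inference call. Pure i64 arithmetic only, so every
// platform produces bit-identical bytes for the same prompt. Hypothetical;
// the real demo sends the prompt to each node over HTTP.
fn run_inference(prompt: &str) -> Vec<u8> {
    let mut state: i64 = 0x5DEECE66D;
    let mut out = Vec::new();
    for b in prompt.bytes() {
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(i64::from(b));
        out.extend_from_slice(&state.to_le_bytes());
    }
    out
}

fn main() {
    let prompt = "hello, arc-chain";
    // Simulate 8 devices each running a full copy of the same model.
    let hashes: HashSet<String> = (0..8)
        .map(|_| blake3::hash(&run_inference(prompt)).to_hex().to_string())
        .collect();
    // One distinct hash across all devices = cross-device determinism.
    assert_eq!(hashes.len(), 1);
}
```

Eight identical digests collapse to a single HashSet entry; any nondeterminism, such as floating-point rounding that differs across platforms, would surface as a second hash.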
Inference Nodes: —
Speedup: 6.2×
Total Runs: —
Same prompt sent to all 8 nodes simultaneously. All produce an identical hash, proving cross-device determinism.
Or click any device to run a quick test
Compares 1 node running sequentially vs 8 nodes running in parallel. Same prompt. Different machines. Identical hash.
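A minimal sketch of how that comparison can be timed; `run_one` is a hypothetical stand-in for a single inference request, with the sleep simulating its latency:

```rust
use std::thread;
use std::time::{Duration, Instant};

// Hypothetical stand-in for one inference request on one machine.
fn run_one() {
    thread::sleep(Duration::from_millis(250));
}

fn main() {
    // Sequential: 8 runs back to back on one node.
    let t = Instant::now();
    for _ in 0..8 {
        run_one();
    }
    let sequential = t.elapsed();

    // Distributed: the same 8 runs fan out to 8 nodes at once.
    let t = Instant::now();
    let handles: Vec<_> = (0..8).map(|_| thread::spawn(run_one)).collect();
    for h in handles {
        h.join().unwrap();
    }
    let parallel = t.elapsed();

    println!("speedup: {:.1}x", sequential.as_secs_f64() / parallel.as_secs_f64());
}
```

With 8 nodes the ideal speedup is 8×; the 6.2× shown above sits below that, presumably reflecting coordination and network overhead.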
Sequential (1 node): —
Distributed (8 nodes): —
Speedup: —
Live Inference Feed
Loading...
Recent Transactions
Loading...