We host our in-house ArchivalRPC server, designed for both cost efficiency and high performance. It eliminates the expensive Google BigTable dependency that has been the standard since the inception of SVM. Unlike the vendor-locked BigTable, we can deploy your SVM chain archives in any location.

We make sure you have zero worries about archival data: we back up the chain incrementally, run every component with high availability, and replicate the data across different continents. Alongside the incremental backups, we run a full verification of the database and backups every 250 slots, ensuring no block is missed.
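
To give a feel for the idea, here is a minimal sketch of what a verification pass over a 250-slot window can look like, expressed with standard Solana JSON-RPC calls. The endpoint URLs are placeholders and our production verifier is more involved; this only illustrates the shape of the check.

```python
import requests

LIVE_RPC = "https://node.example.com"        # placeholder live-chain endpoint
ARCHIVE_RPC = "https://archive.example.com"  # placeholder ArchivalRPC endpoint
WINDOW = 250                                 # verification window in slots

def rpc(url, method, params):
    """Plain Solana JSON-RPC call."""
    resp = requests.post(url, json={"jsonrpc": "2.0", "id": 1,
                                    "method": method, "params": params})
    resp.raise_for_status()
    return resp.json()["result"]

def verify_window(start_slot):
    """Return the slots in [start_slot, start_slot + WINDOW) that are
    missing or mismatched between the live chain and the archive."""
    end_slot = start_slot + WINDOW - 1
    live = rpc(LIVE_RPC, "getBlocks", [start_slot, end_slot])
    archived = rpc(ARCHIVE_RPC, "getBlocks", [start_slot, end_slot])
    bad = set(live) ^ set(archived)          # present on one side only
    cfg = {"transactionDetails": "none", "rewards": False,
           "maxSupportedTransactionVersion": 0}
    for slot in set(live) & set(archived):
        if rpc(LIVE_RPC, "getBlock", [slot, cfg])["blockhash"] != \
           rpc(ARCHIVE_RPC, "getBlock", [slot, cfg])["blockhash"]:
            bad.add(slot)                    # same slot, different block
    return bad
```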

For further details, you can refer to our documentation or delve deeper into our GitHub. Not sure yet if your SVM chain will need archival data? You can opt for an even more affordable option and just use our backup solution until you decide.

Dedicated resources

No limitations

No enforced RPS limits

100% open source

Performance

Method              BigTable + DefaultRPC    HBase + ArchivalRPC
getTransaction      400-800 ms               5-25 ms
getBlock            400-800 ms               100-200 ms
getBlocks (10k)     30-40 s                  30-40 ms
We are committed to both cost optimization and performance. Moving our database out of commercial cloud environments, which often add multiple network layers, inherently improves performance. But our strategy extends beyond this.

Our ArchivalRPC server is designed to be significantly more efficient than traditional heavy validators that are typically used for RPC services. This allows it to process more requests per second (RPS) with reduced demands on CPU and RAM.

We are continually developing our ArchivalRPC server with a clear objective: bring every RPC response time below 20 milliseconds. We are steadily closing in on that target.
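
You can measure these response times yourself against any endpoint with plain JSON-RPC. A minimal sketch (the endpoint URL and transaction signature are placeholders to fill in):

```python
import time
import requests

ENDPOINT = "https://archive.example.com"           # placeholder endpoint
SIGNATURE = "<a confirmed transaction signature>"  # placeholder signature

payload = {
    "jsonrpc": "2.0", "id": 1,
    "method": "getTransaction",
    "params": [SIGNATURE, {"encoding": "json",
                           "maxSupportedTransactionVersion": 0}],
}

session = requests.Session()  # reuse the connection: measure the server, not TLS setup
for _ in range(5):
    start = time.monotonic()
    resp = session.post(ENDPOINT, json=payload)
    resp.raise_for_status()
    print(f"getTransaction round trip: {(time.monotonic() - start) * 1000:.1f} ms")
```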

What's in the box
Each SVM archival setup is deployed on dedicated bare-metal machines. We work with multiple bare-metal providers and can offer numerous locations around the globe, preferring providers that do not impose bandwidth limitations, so the only constraint is your dedicated hardware.

All software is deployed to our internal Kubernetes clusters; upon request, we can spin up a dedicated Kubernetes cluster and give you full access to it. From the hardware to the data we write, everything can be customized. If you don't care about vote transactions, we can filter them out, and apply many other filters, to optimize storage and cost; a sketch of such a filter follows.
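
On Solana-style chains, vote transactions are the ones that invoke the built-in vote program, so filtering them reduces to an account-key check. A minimal sketch of the idea (our production filters are configurable and more elaborate):

```python
# Solana's built-in vote program; vote transactions invoke it.
VOTE_PROGRAM_ID = "Vote111111111111111111111111111111111111111"

def is_vote_transaction(tx: dict) -> bool:
    """A transaction whose message lists the vote program among its
    account keys is treated as a vote transaction."""
    keys = tx["transaction"]["message"]["accountKeys"]
    return VOTE_PROGRAM_ID in keys

def filter_block(block: dict) -> dict:
    """Drop vote transactions from a getBlock-style block before archiving."""
    block["transactions"] = [
        tx for tx in block.get("transactions", [])
        if not is_vote_transaction(tx)
    ]
    return block
```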

ArchivalRPC itself is just one small component of the archival process. The biggest challenge is ingestion: tracking the entire chain, missing no blocks, taking incremental backups, and being able to quickly bootstrap new instances or heal existing ones. Check out the full list of containers that run for just a single network 👉 (a simplified sketch of the Kafka-to-HBase stage follows the list)
  • 3x Zookeeper instances
  • 3x Qjournal instances
  • 2x HDFS name nodes
  • 3x HDFS data nodes
  • 2x HBase masters
  • 3x HBase region servers
  • 2x dedicated load balancers (HAProxy)
  • 2x ArchivalRPC instances
  • 1x Active RPC endpoint monitor
  • 6x Block fetchers to Kafka
  • 1x Kafka to buffer incoming blocks
  • 6x ArchivalRPC Kafka-to-HBase ingestors
  • 1x Chain archiver
  • 1x Chain verifier
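
To make the pipeline concrete, here is a heavily simplified sketch of the Kafka-to-HBase stage using the kafka-python and happybase client libraries. The topic, host, table, and column names are illustrative, and we assume the block fetchers attach a slot number to each block; the real ingestors are more involved.

```python
import json
import happybase                 # Thrift-based HBase client
from kafka import KafkaConsumer  # kafka-python

# Illustrative names; the real topic/table layout differs.
consumer = KafkaConsumer(
    "blocks",                                    # topic fed by the block fetchers
    bootstrap_servers="kafka:9092",
    value_deserializer=lambda v: json.loads(v),
    enable_auto_commit=False,                    # commit only after a durable write
)
table = happybase.Connection("hbase-thrift").table("chain_blocks")

for message in consumer:
    block = message.value
    # Zero-padded slot number as the row key keeps rows sorted by slot.
    row_key = f"{block['slot']:020d}".encode()   # assumes fetchers add 'slot'
    table.put(row_key, {
        b"b:blockhash": block["blockhash"].encode(),
        b"b:raw": json.dumps(block).encode(),
    })
    consumer.commit()                            # offset advances only after the put
```
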
Pricing
Need a hand handling massive Solana mainnet data?
Minimal setup, ideal for dev and test networks
€2 000 per month or €20 000 per year, paid in advance
  • Dedicated bare-metal setup
  • 2 Gbps unmetered bandwidth
  • 1 location
  • 36 TB storage included
  • €20 per month per additional TB
  • SVM Backups included

order now

HA setup, ideal for main networks
€6 000 per month or €60 000 per year, paid in advance
  • Dedicated bare-metal setup
  • 8 Gbps unmetered bandwidth
  • 2 locations
  • 120 TB storage included per location
  • €20 per month per additional TB
  • SVM Backups included

order now