Platform Architecture
Overview
Aleatoric provides direct gRPC access to Hyperliquid validator nodes across multiple regions. There are no proxies, no middleware, and no shared load balancers between your trading system and the network.
Each region runs a dedicated F16s_v2 virtual machine with a 1TB NVMe SSD, hosting a full Hyperliquid validator node. Your connection terminates at the node itself. The infrastructure is not shared with other tenants, and there is no abstraction layer translating or buffering your requests.
This design yields fewer failure modes, deterministic latency characteristics, and a data path you can fully reason about.
Data Path
The connection between your system and the Hyperliquid network traverses exactly two hops — your client to the node, and the node to the validator network. There is nothing in between.
Your trading system connects directly to the validator node over gRPC with TLS termination at the node process itself. There is no API gateway distributing your connection across a pool. There is no caching layer returning stale data. The gRPC call your client makes is handled by the same process that participates in the Hyperliquid validator gossip protocol on port 4001.
This architecture eliminates entire categories of failure:
- No load balancer failures — there is no load balancer.
- No cache invalidation bugs — there is no cache.
- No serialization overhead — gRPC uses Protocol Buffers natively; payloads arrive pre-decoded and typed.
- No queue backpressure — there is no internal message queue between your connection and the node.
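Under the hood this is an ordinary gRPC channel. The sketch below shows the minimal client-side setup using the `grpcio` package; the hostname and port are illustrative placeholders, not published endpoints (see Regions & Endpoints for the real values):

```python
# Minimal sketch of a direct gRPC connection using the grpcio package.
# The hostname below is a placeholder, not a real Aleatoric endpoint.
import grpc

# TLS credentials verified against the system trust store; TLS terminates
# at the node process itself, so there is no intermediate proxy certificate.
credentials = grpc.ssl_channel_credentials()

# One persistent channel; gRPC multiplexes all requests and streams over it,
# so after the first handshake there is no per-call connection setup.
channel = grpc.secure_channel("node.us-east.example.invalid:443", credentials)
```

With the channel established, you would instantiate the service stubs generated from the published protobuf definitions and reuse the same channel for every call and stream.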
Transport Map
The full transport topology, including the protocols and ports used at each boundary, is summarized below.
All client-facing connections use gRPC over TLS 1.3. Internal communication between the validator node and the Hyperliquid network uses the native P2P gossip protocol. There is no protocol translation at any boundary — the transport your client speaks is the transport the node speaks.
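Because the client-facing transport is a single long-lived gRPC connection, it is worth configuring HTTP/2 keepalives so idle streams survive NAT and firewall timeouts. The option keys below are standard grpc-core channel arguments, not Aleatoric-specific settings, and the intervals are illustrative defaults to tune for your own network:

```python
# Suggested channel options for a long-lived gRPC connection. These are
# standard grpc-core channel arguments; the intervals are illustrative.
KEEPALIVE_OPTIONS = [
    # Send an HTTP/2 PING every 30 s so NAT/firewall state stays warm.
    ("grpc.keepalive_time_ms", 30_000),
    # Declare the connection dead if a PING goes unanswered for 10 s.
    ("grpc.keepalive_timeout_ms", 10_000),
    # Allow keepalive pings even when no RPC is in flight.
    ("grpc.keepalive_permit_without_calls", 1),
]
```

Pass these as `options=KEEPALIVE_OPTIONS` when creating the channel with `grpc.secure_channel`.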
Latency Model
End-to-end latency for any request decomposes into three components:

L_total = L_net + L_tls + L_proc

Where:
- L_net is the round-trip network propagation time between your infrastructure and the Aleatoric endpoint. This is determined by physical distance, routing, and your network provider. For co-located clients in the same Azure region, this is typically sub-millisecond.
- L_tls is the TLS 1.3 handshake cost, incurred once per connection establishment. With session resumption, subsequent requests on an established connection pay zero TLS overhead.
- L_proc is the processing time added by any intermediate software layers between your connection and the validator node.
In a typical provider architecture with an API gateway, caching layer, and serialization middleware, L_proc dominates the latency budget — often contributing 2–10 ms per request due to queue scheduling, JSON serialization, and inter-process communication.

In the Aleatoric architecture:

L_proc ≈ 0

This is not a marketing approximation. There is literally no intermediate process. The gRPC server that accepts your connection is the validator node. The only processing that occurs is Protocol Buffer deserialization, a deterministic operation whose cost depends only on payload size, with no system calls, no I/O waits, and no contention.
The practical consequence is that your observed latency converges to the physical lower bound:

L_observed ≈ L_net + ε

where ε represents residual TLS session overhead (zero on established connections) and kernel-level socket processing. This is the same latency you would observe if you ran the validator node yourself.
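The decomposition can be made concrete with a toy model. The millisecond figures below are illustrative placeholders, not measured values:

```python
# Toy model of the latency decomposition above. All numbers are
# illustrative placeholders, not measured Aleatoric figures.

def request_latency(l_net: float, l_tls: float, l_proc: float,
                    first_request: bool) -> float:
    """Total latency in milliseconds for one request.

    The TLS handshake cost is paid only on connection establishment;
    with session resumption, established connections skip it entirely.
    """
    return l_net + (l_tls if first_request else 0.0) + l_proc

# Typical provider: middleware dominates on every request.
provider = request_latency(l_net=0.8, l_tls=1.5, l_proc=5.0, first_request=False)

# Zero-middleware path: L_proc ~ 0, so steady-state latency is just L_net.
direct = request_latency(l_net=0.8, l_tls=1.5, l_proc=0.0, first_request=False)

print(provider)  # 5.8
print(direct)    # 0.8
```

Note that on an established connection the middleware term is the only difference between the two paths; the network term is identical.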
Zero-Middleware Comparison
Most infrastructure providers insert several layers between your system and the blockchain network. Each layer adds latency, introduces potential failure modes, and makes debugging harder.
| Aspect | Aleatoric | Typical Provider |
|---|---|---|
| Network hops | 2 (Client → Node → Network) | 4–5 (Client → LB → API → Cache → Node) |
| Protocol | gRPC native (Protocol Buffers) | REST/JSON with serialization overhead |
| Payload format | Pre-decoded, strongly typed | Raw bytes requiring client-side parsing |
| Infrastructure | Dedicated VM per region | Shared multi-tenant cluster |
| Tail latency (p99) | Deterministic, dedicated capacity | Variable under shared load |
| Failure modes | Network partition only | LB, cache, API, serialization, network |
| TLS termination | At the node process | At the load balancer (additional hop) |
| Connection model | Persistent gRPC stream | Request/response per call |
| Stale data risk | None (no cache) | Cache TTL dependent |
The practical consequence is that Aleatoric connections behave like a local process call with a network round-trip, rather than a request through a distributed system. Performance stays consistent because there are fewer queues, fewer context switches, and no contention with other tenants.
Regional Topology
Aleatoric operates validator nodes in geographically distributed regions to minimize round-trip network latency (L_net) for clients across major trading centers.
Each region is an independent, fully self-contained deployment:
- Dedicated F16s_v2 VM — 16 vCPUs, 32 GiB memory, and a 1 TB locally attached NVMe SSD for validator state.
- Independent validator node — each region runs its own full node participating in the Hyperliquid consensus network. Regions do not depend on each other.
- Region-local TLS termination — certificates are provisioned and terminated at the node in each region. There is no centralized TLS proxy.
- Independent failure domains — a hardware failure, network partition, or maintenance event in one region does not affect any other region.
Clients should connect to the region with the lowest network latency to their own infrastructure. For clients operating across multiple geographies, connecting to multiple regions simultaneously provides both latency optimization and redundancy.
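One simple way to choose a region is to measure TCP connect time to each endpoint and pick the minimum. The sketch below uses only the standard library; the region names and latency figures are placeholders:

```python
# Sketch of latency-based region selection using only the standard library.
# Hostnames and measured values below are illustrative placeholders.
import socket
import time

def tcp_connect_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Measure TCP connect time in milliseconds (a rough proxy for L_net)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

def pick_region(latencies_ms: dict) -> str:
    """Return the region with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

# Example with pre-measured placeholder values; in practice, populate the
# dict by calling tcp_connect_ms against each regional endpoint.
measured = {"us-east": 0.6, "japan-east": 142.3}
print(pick_region(measured))  # us-east
```

For redundancy, a multi-region client would keep channels open to the top two regions rather than only the fastest one.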
Current Regions
| Region | Location | Azure Region | Use Case |
|---|---|---|---|
| US East | Virginia, USA | East US 2 | Americas-based trading desks |
| Japan East | Tokyo, Japan | Japan East | APAC-based trading desks |
Additional regions will be deployed based on client demand and Hyperliquid network topology changes. Each new region follows the same dedicated-VM, zero-middleware architecture — there is no degraded “edge” tier.
Historical Data (Roadmap)
A historical data layer is on the roadmap. When available, it will include:
- Full L2 book snapshots, trade ticks, and block data — archived and queryable via API with nanosecond-precision timestamps.
- Block replay at wire speed — feed recorded blocks into your strategy at the rate they originally occurred, or at accelerated rates, for backtesting without simulation artifacts.
- Complete liquidation event history — severity classification, cascade detection, and volume tracking for each event.
- Event-driven queries — filter by type, time range, asset, or signal severity to retrieve exactly the data your analysis requires.
This layer will use the same gRPC transport as the live feed, so client code written against the real-time stream will work against historical data with minimal changes. The replay protocol will preserve the original inter-event timing distribution, ensuring that backtest results reflect realistic market microstructure dynamics.
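Since the historical API is still on the roadmap, the sketch below only illustrates the pacing idea: preserving inter-event gaps from recorded nanosecond timestamps, optionally accelerated. The event timestamps and function name are invented for illustration:

```python
# Hypothetical sketch of replay pacing. The historical API does not exist
# yet; the timestamps and function name here are invented for illustration.

def replay_delays_ns(timestamps_ns, speed: float = 1.0):
    """Yield the wait (in seconds) before each event after the first.

    speed=1.0 reproduces the original inter-event timing; speed=10.0
    replays ten times faster while keeping the gap distribution intact.
    """
    for prev, curr in zip(timestamps_ns, timestamps_ns[1:]):
        yield (curr - prev) / 1e9 / speed

# Nanosecond timestamps 1 s apart, replayed at 10x speed.
ts = [0, 1_000_000_000, 2_000_000_000]
delays = list(replay_delays_ns(ts, speed=10.0))
print(delays)  # [0.1, 0.1]
```

A driver loop would sleep for each yielded delay before handing the next recorded event to the strategy, so backtests see the same relative timing as the live stream.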
Transparency
Aleatoric publishes infrastructure metrics openly.
- Uptime metrics — real-time and historical availability data, published on the Status page.
- Performance metrics — throughput, latency percentiles, and availability measurements reported per region.
- Per-service health visibility — individual component status rather than a single aggregate indicator.
- Full incident post-mortems — when something breaks, we publish what happened, why, and what changed to prevent recurrence.
The goal is to give you the same visibility into our infrastructure that you have into your own. If you are making trading decisions based on data flowing through our nodes, you should be able to verify that the path is healthy at every layer.
Next Steps
- Connecting — Code examples for each transport, including gRPC client setup and connection pooling.
- Unified Stream — The pre-decoded event feed that combines book updates, trades, and liquidations into a single typed stream.
- Regions & Endpoints — Full endpoint reference with hostnames, ports, and regional availability.
- SDKs — Client libraries for Python and other languages.