
gRPC

The gRPC bridge provides low-latency streaming access to Hyperliquid market data via Protocol Buffers over HTTP/2 with TLS. All connections use channel-level encryption and require API key authentication via gRPC metadata.

Every response carries microsecond-precision timestamps at both the ingest and publish stages of the pipeline. Given ingest timestamp $t_{\text{ingest}}$ and publish timestamp $t_{\text{publish}}$, the internal processing latency is:

$$\Delta t = t_{\text{publish}} - t_{\text{ingest}} \quad [\mu\text{s}]$$

This value is typically on the order of single-digit microseconds for cached responses and tens of microseconds for live computation, enabling clients to distinguish bridge overhead from upstream and network latency.

| Region | Endpoint | Notes |
|---|---|---|
| US (East US 2) | `hl.grpc.aleatoric.systems:443` | Primary |
| JP (Japan East) | `hl-jp.grpc.aleatoric.systems:443` | Live production edge |

Both production endpoints terminate TLS at the edge. No plaintext connections are accepted.

| Percentile | Target | Typical |
|---|---|---|
| p50 | < 10 ms | 2–5 ms (same-region) |
| p95 | < 50 ms | 15–30 ms |
| p99 | < 150 ms | 40–80 ms |

Latency is measured end-to-end from the moment the bridge receives an upstream tick to the moment the serialized protobuf frame is written to the client socket. Cross-region connections (e.g., EU client to US endpoint) will observe additional network round-trip time on top of these figures.
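Because each message carries both upstream and bridge timestamps, a client can decompose its observed delay into a bridge component and a network component. A minimal sketch under the assumption that the client clock is synchronized (e.g., via NTP) — otherwise the network term is not meaningful; the function name is illustrative:

```python
def decompose_latency_us(ingest_ts_us: int, publish_ts_us: int,
                         client_recv_ts_us: int) -> dict:
    """Split client-observed delay into bridge processing vs. network transit.

    All inputs are microseconds since the Unix epoch. The network term
    assumes the client clock is synchronized with the bridge.
    """
    bridge_us = publish_ts_us - ingest_ts_us
    network_us = client_recv_ts_us - publish_ts_us
    return {"bridge_us": bridge_us,
            "network_us": network_us,
            "total_us": bridge_us + network_us}

# Example with synthetic timestamps: 40 μs in the bridge, 20 ms on the wire
print(decompose_latency_us(1_700_000_000_000_000,
                           1_700_000_000_000_040,
                           1_700_000_000_020_040))
```

A persistently large `network_us` with a small `bridge_us` points at the network path or a cross-region connection rather than the bridge itself.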

```proto
syntax = "proto3";

package aleatoric.hl.v1;

// Primary market data service for Hyperliquid.
service PriceService {
  // Unary RPCs
  rpc GetMidPrice(MidPriceRequest) returns (MidPriceResponse);
  rpc GetBlockNumber(BlockNumberRequest) returns (BlockNumberResponse);

  // Server-streaming RPCs
  rpc StreamMids(StreamMidsRequest) returns (stream MidPriceResponse);
  rpc StreamTrades(StreamTradesRequest) returns (stream TradeEvent);
  rpc StreamL2Book(StreamL2BookRequest) returns (stream L2BookSnapshot);
  rpc StreamLiquidations(StreamLiquidationsRequest)
      returns (stream LiquidationEvent);
}

// ── Request messages ──────────────────────────────────────────────

message MidPriceRequest {
  string coin = 1; // Asset symbol, e.g. "BTC", "ETH"
}

message StreamMidsRequest {
  // Empty — subscribes to all coins.
}

message StreamTradesRequest {
  string coin = 1; // Required. Asset symbol to subscribe to.
}

message StreamL2BookRequest {
  string coin = 1; // Required. Asset symbol.
  int32 depth = 2; // Max levels per side (default 20, max 200).
}

message StreamLiquidationsRequest {
  // Empty — subscribes to all liquidation events.
}

message BlockNumberRequest {
  // Empty.
}

// ── Response messages ─────────────────────────────────────────────

message MidPriceResponse {
  string coin = 1;
  double price = 2;          // Mid price in USD
  double best_bid = 3;       // Best bid price
  double best_ask = 4;       // Best ask price
  int64 ts_ms = 5;           // Upstream timestamp (ms since epoch)
  int64 upstream_ts_ms = 6;  // Source exchange timestamp (ms)
  int64 ingest_ts_us = 7;    // Bridge ingest time (μs since epoch)
  int64 publish_ts_us = 8;   // Bridge publish time (μs since epoch)
  bool cached = 9;           // True if served from cache
}

message TradeEvent {
  string coin = 1;
  double price = 2;  // Execution price in USD
  double size = 3;   // Fill size in base asset
  string side = 4;   // "buy" or "sell" (taker side)
  int64 ts_ms = 5;
  int64 ingest_ts_us = 6;
  int64 publish_ts_us = 7;
}

message L2BookSnapshot {
  string coin = 1;
  repeated PriceLevel bids = 2;
  repeated PriceLevel asks = 3;
  int64 ts_ms = 4;
  int64 ingest_ts_us = 5;
  int64 publish_ts_us = 6;
}

message PriceLevel {
  double price = 1; // Limit price in USD
  double size = 2;  // Aggregate size at this level
}

message LiquidationEvent {
  string coin = 1;
  string side = 2;        // "long" or "short"
  double size = 3;        // Liquidated position size
  double mark_price = 4;  // Mark price at liquidation
  int64 ts_ms = 5;
  int64 ingest_ts_us = 6;
  int64 publish_ts_us = 7;
}

message BlockNumberResponse {
  int64 block_number = 1;
  int64 ts_ms = 2;
}
```
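The Python examples in this document import generated modules (`price_service_pb2`, `price_service_pb2_grpc`). Assuming the schema above is saved as `aleatoric/hl/v1/price_service.proto` (the file path is an assumption, not part of any published package), the stubs can be generated with `grpcio-tools`:

```shell
pip install grpcio grpcio-tools

python -m grpc_tools.protoc \
  -I. \
  --python_out=. \
  --grpc_python_out=. \
  aleatoric/hl/v1/price_service.proto
```

This produces `price_service_pb2.py` (message classes) and `price_service_pb2_grpc.py` (the `PriceServiceStub`) under `aleatoric/hl/v1/`.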
| Field | Type | Unit | Description |
|---|---|---|---|
| `coin` | `string` | | Asset symbol (e.g., `"BTC"`, `"ETH"`, `"SOL"`) |
| `price` | `double` | USD | Mid price, defined as $\frac{p_{\text{bid}} + p_{\text{ask}}}{2}$ |
| `best_bid` | `double` | USD | Top-of-book bid price |
| `best_ask` | `double` | USD | Top-of-book ask price |
| `size` | `double` | base asset | Fill or level size in native units |
| `side` | `string` | | Taker direction (`"buy"` / `"sell"`) or position direction (`"long"` / `"short"`) |
| `mark_price` | `double` | USD | Oracle mark price at the time of the event |
| `ts_ms` | `int64` | ms | Upstream exchange timestamp (milliseconds since Unix epoch) |
| `upstream_ts_ms` | `int64` | ms | Original source timestamp before any bridge processing |
| `ingest_ts_us` | `int64` | $\mu\text{s}$ | Timestamp when the message entered the bridge ingest pipeline |
| `publish_ts_us` | `int64` | $\mu\text{s}$ | Timestamp when the serialized response was dispatched to the client |
| `cached` | `bool` | | `true` if the response was served from the in-memory cache rather than a live upstream tick |

The `ingest_ts_us` and `publish_ts_us` fields use microsecond resolution ($10^{-6}$ s). Together they allow clients to compute the bridge processing latency $\Delta t$ per message:

$$\Delta t = \texttt{publish\_ts\_us} - \texttt{ingest\_ts\_us}$$

For cached responses, $\Delta t$ is typically $< 5\;\mu\text{s}$. For live ticks flowing through the full normalization pipeline, expect $\Delta t \in [10, 200]\;\mu\text{s}$ depending on message complexity.
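As a sketch, the per-message computation and a sanity check against the envelopes above can look like this. The two thresholds are taken from the figures quoted in this section, not from any official SLA, and the function names are illustrative:

```python
# Budgets derived from the documented envelopes (not an SLA)
CACHED_BUDGET_US = 5    # typical ceiling for cached responses
LIVE_BUDGET_US = 200    # upper end of the live-tick range

def bridge_latency_us(ingest_ts_us: int, publish_ts_us: int) -> int:
    """Per-message bridge processing latency in microseconds."""
    return publish_ts_us - ingest_ts_us

def within_budget(latency_us: int, cached: bool) -> bool:
    """Flag messages whose bridge latency exceeds the documented envelope."""
    budget = CACHED_BUDGET_US if cached else LIVE_BUDGET_US
    return latency_us < budget

# A cached response that took 3 μs inside the bridge is within budget
print(within_budget(bridge_latency_us(100, 103), cached=True))  # True
```

Alerting on sustained budget violations (rather than single outliers) avoids paging on GC pauses or one-off scheduling hiccups.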

All RPCs require a valid API key passed via gRPC metadata. The key must be sent with every call (unary) or at stream initiation (server-streaming).

| Metadata Key | Value |
|---|---|
| `x-api-key` | Your API key (e.g., `ak_live_...`) |
```python
import grpc

from aleatoric.hl.v1 import price_service_pb2 as pb
from aleatoric.hl.v1 import price_service_pb2_grpc as rpc

ENDPOINT = "hl.grpc.aleatoric.systems:443"
API_KEY = "ak_live_your_key_here"

# Create a secure channel with TLS
channel = grpc.secure_channel(ENDPOINT, grpc.ssl_channel_credentials())
stub = rpc.PriceServiceStub(channel)
metadata = [("x-api-key", API_KEY)]

# Unary call
response = stub.GetMidPrice(
    pb.MidPriceRequest(coin="BTC"),
    metadata=metadata,
)
print(f"BTC mid: {response.price}")
```
```shell
grpcurl \
  -H "x-api-key: ak_live_your_key_here" \
  -d '{"coin": "BTC"}' \
  hl.grpc.aleatoric.systems:443 \
  aleatoric.hl.v1.PriceService/GetMidPrice
```
```python
import grpc

from aleatoric.hl.v1 import price_service_pb2 as pb
from aleatoric.hl.v1 import price_service_pb2_grpc as rpc

ENDPOINT = "hl.grpc.aleatoric.systems:443"
API_KEY = "ak_live_your_key_here"

channel = grpc.secure_channel(ENDPOINT, grpc.ssl_channel_credentials())
stub = rpc.PriceServiceStub(channel)
metadata = [("x-api-key", API_KEY)]

# Server-streaming call — returns an iterator
stream = stub.StreamMids(pb.StreamMidsRequest(), metadata=metadata)
for tick in stream:
    latency_us = tick.publish_ts_us - tick.ingest_ts_us
    print(f"{tick.coin} mid={tick.price:.2f} "
          f"bid={tick.best_bid:.2f} ask={tick.best_ask:.2f} "
          f"bridge_lat={latency_us} μs")
```

To subscribe to trades for a single coin instead:

```python
stream = stub.StreamTrades(
    pb.StreamTradesRequest(coin="ETH"),
    metadata=metadata,
)
for trade in stream:
    print(f"ETH trade: {trade.side} {trade.size:.4f} @ {trade.price:.2f}")
```
```shell
grpcurl \
  -H "x-api-key: ak_live_your_key_here" \
  -d '{"coin": "BTC", "depth": 5}' \
  hl.grpc.aleatoric.systems:443 \
  aleatoric.hl.v1.PriceService/StreamL2Book
```
```shell
grpcurl \
  -H "x-api-key: ak_live_your_key_here" \
  -d '{}' \
  hl.grpc.aleatoric.systems:443 \
  aleatoric.hl.v1.PriceService/StreamLiquidations
```

The bridge uses standard gRPC status codes. Clients should handle these in retry and alerting logic.

| gRPC Code | Numeric | Cause | Recommended Action |
|---|---|---|---|
| `UNAUTHENTICATED` | 16 | Missing or invalid `x-api-key` metadata | Verify the key is active and correctly formatted |
| `PERMISSION_DENIED` | 7 | Key valid but tier insufficient for this RPC | Upgrade plan or contact sales |
| `RESOURCE_EXHAUSTED` | 8 | Rate limit exceeded for your tier | Back off exponentially; see Rate Limits |
| `UNAVAILABLE` | 14 | Bridge temporarily unreachable (deploy, upstream outage) | Reconnect with exponential backoff + jitter |
| `INVALID_ARGUMENT` | 3 | Malformed request (e.g., unknown coin symbol) | Check request parameters |
| `DEADLINE_EXCEEDED` | 4 | Client-set deadline elapsed before response | Increase the deadline or check the network path |
| `INTERNAL` | 13 | Unexpected server-side error | Retry once; if persistent, contact support |
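The table above can be encoded as a small retry policy. This sketch keys on the numeric status codes, which are stable across gRPC implementations; the action labels and function name are illustrative, not part of the bridge API:

```python
# Numeric gRPC status codes, grouped by recommended action (from the table)
RETRYABLE = {8, 14}     # RESOURCE_EXHAUSTED, UNAVAILABLE: back off and retry
RETRY_ONCE = {13}       # INTERNAL: retry once, then escalate
ADJUST_DEADLINE = {4}   # DEADLINE_EXCEEDED: raise the deadline, check network

def retry_action(code: int) -> str:
    """Map a gRPC status code to a coarse retry decision."""
    if code in RETRYABLE:
        return "backoff-and-retry"
    if code in RETRY_ONCE:
        return "retry-once"
    if code in ADJUST_DEADLINE:
        return "increase-deadline"
    # UNAUTHENTICATED (16), PERMISSION_DENIED (7), INVALID_ARGUMENT (3),
    # and anything unrecognized: retrying will not help.
    return "fail-fast"

print(retry_action(14))  # backoff-and-retry
```

In grpcio, the numeric code of a caught `grpc.RpcError` is available as `e.code().value[0]`.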

gRPC channels should be configured with keepalive pings to detect dead connections before the TCP stack does. Recommended settings:

```python
options = [
    ("grpc.keepalive_time_ms", 30_000),          # Send ping every 30 s
    ("grpc.keepalive_timeout_ms", 10_000),       # Wait 10 s for pong
    ("grpc.keepalive_permit_without_calls", 1),  # Ping even when idle
    ("grpc.http2.max_pings_without_data", 0),    # No limit
    ("grpc.max_receive_message_length", 16 * 1024 * 1024),  # 16 MB
]
channel = grpc.secure_channel(ENDPOINT, grpc.ssl_channel_credentials(), options)
```

For server-streaming RPCs, the server will close the stream on deploy or upstream disconnect. Clients must handle UNAVAILABLE and re-subscribe:

```python
import random
import time

import grpc

from aleatoric.hl.v1 import price_service_pb2 as pb


def stream_with_reconnect(stub, metadata, max_retries=None):
    """Exponential backoff reconnection loop."""
    attempt = 0
    while max_retries is None or attempt < max_retries:
        try:
            stream = stub.StreamMids(
                pb.StreamMidsRequest(),
                metadata=metadata,
            )
            attempt = 0  # Reset on successful connection
            for tick in stream:
                yield tick
        except grpc.RpcError as e:
            if e.code() == grpc.StatusCode.UNAUTHENTICATED:
                raise  # Do not retry auth failures
            attempt += 1
            delay = min(2 ** attempt, 60) + random.uniform(0, 1)
            print(f"Stream disconnected ({e.code().name}), "
                  f"reconnecting in {delay:.1f}s...")
            time.sleep(delay)
```
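The delay schedule above grows as `min(2 ** attempt, 60)` plus up to 1 s of jitter. Isolating the deterministic part (the helper name is illustrative) makes the cap easy to verify:

```python
def backoff_delays(n: int, cap: int = 60) -> list:
    """Deterministic part of the reconnect delay schedule (jitter omitted)."""
    return [min(2 ** attempt, cap) for attempt in range(1, n + 1)]

print(backoff_delays(7))  # [2, 4, 8, 16, 32, 60, 60]
```

The jitter term matters in production: without it, many clients disconnected by the same deploy would all reconnect at the same instants.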
  1. One channel per endpoint. gRPC multiplexes RPCs over a single HTTP/2 connection. Creating multiple channels to the same endpoint wastes resources.
  2. Reuse stubs. Stubs are lightweight wrappers around the channel and safe to share across threads.
  3. Set deadlines on unary calls. Streaming RPCs are long-lived by design, but unary calls like GetMidPrice should carry a deadline (e.g., 5 s) to avoid hanging on network partitions.
  4. Monitor RESOURCE_EXHAUSTED. If you receive this status, reduce call frequency rather than retrying immediately.
  5. Choose the region your plan entitles you to use. hl.grpc.aleatoric.systems remains the default US path, while hl-jp.grpc.aleatoric.systems is the Japan production hostname for JP-enabled clients.
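For point 3, the deadline knob in grpcio is the `timeout` keyword accepted by every stub method. A minimal sketch (the wrapper name and default are illustrative); because it works with any callable that accepts the same keywords, it can be exercised without a live channel:

```python
DEFAULT_DEADLINE_S = 5.0

def call_with_deadline(call, request, metadata, timeout=DEFAULT_DEADLINE_S):
    """Invoke a unary RPC with an explicit deadline.

    `call` is a stub method such as stub.GetMidPrice; grpcio raises a
    grpc.RpcError with code DEADLINE_EXCEEDED if the deadline elapses.
    """
    return call(request, metadata=metadata, timeout=timeout)

# Demonstration with a stand-in callable instead of a live stub:
fake_rpc = lambda req, metadata, timeout: (req, timeout)
print(call_with_deadline(fake_rpc, "MidPriceRequest(coin='BTC')", []))
```

Streaming calls should omit the deadline (or set a very large one), since they are expected to stay open indefinitely.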