Introducing Yellowstone Jet TPU client: a high-performance Solana TPU client in Rust
TL;DR:
- The Gap: Anza’s tpu-client-next is a massive improvement, but it lacks the granular control needed for high-frequency or custom routing workloads.
- The Fix: We extracted the battle-tested sending logic from Jet, our transaction relay engine, into a standalone Rust crate.
- Production-ready: Handles SWQoS, QUIC protocol quirks, and Agave changes out of the box.
- Decoupled: You can now use Jet’s sending logic without running the whole engine monolith.
View the library | Read the docs
Introduction
Writing a TPU client from scratch is not for the faint of heart. To do it correctly, you have to handle every Agave release, manage complex QUIC connections, track leader schedules, and handle connection caching without creating lock contention.
For the past year, we’ve solved these problems with Yellowstone Jet, the open-source engine behind our Cascade Marketplace that handles transaction delivery via SWQoS. Jet has delivered millions of transactions in production, but until now the code was a monolith: the sending logic was tightly coupled to the receiving engine, making customisation extremely hard. To fix this, we isolated and modularised the TPU client, cleaned up the interfaces, and released it as a standalone library: yellowstone-jet-tpu-client. You can now plug this client into your own stack, use only the parts you need, and skip the headache of building it on your own.
Why do we need another TPU client?
The team at Anza has done a great job building tpu-client-next. It shares many design decisions with Jet and solves the performance issues of the legacy ConnectionCache. However, for builders running arbitrage bots, liquidation engines, or custom routing logic, the default client is still missing critical features.
We built this open-source library to fill the following gaps:
- Transaction tracking — The Jet TPU client lets you register a callback that is invoked when a transaction STREAM frame is successfully written, or when a transaction fails or is dropped by the internal event loop. This enables custom retry logic and improves logging and debugging (see the sketch after this list).
- Override TPU contact info — You can bypass the gossip-provided contact info and supply your own custom TPU addresses.
- Send to arbitrary remote peers — You can bypass the leader schedule entirely, allowing you to implement custom offloading or routing logic.
- Multi-step identity updates — Jet requires the sending identity to remain synchronised across multiple sub-modules. The client supports updating this identity through a controlled, multi-step process before continuing normal operations.
- Blocklist / allowlist support — You can provide a custom blocklist implementation when calling send_txn_with_blocklist.
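To illustrate the first bullet, here is a minimal sketch of consuming tracking callbacks to drive retries. It reuses the TpuSenderResponse::TxSent variant and the callback_rx channel from the walkthrough below; treating every other variant as a failure-or-drop case (and relying on a Debug impl) is our assumption, not a prescribed pattern.
// Minimal sketch: drain sender callbacks and route non-sent events to
// your own retry queue. `TxSent` is demonstrated in the walkthrough below;
// treating all other variants as failures is an assumption.
while let Some(event) = callback_rx.recv().await {
    match event {
        TpuSenderResponse::TxSent(resp) => {
            println!("tx {} written to {}", resp.tx_sig, resp.remote_peer_identity);
        }
        other => {
            // A failed or dropped transaction: a candidate for custom retry.
            eprintln!("tx not sent, consider retrying: {other:?}");
        }
    }
}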
Implementation walkthrough 🏃‍♀️➡️🏃🏽‍♀️➡️🏃🏿‍♀️➡️
Let’s walk through a simple code example that uses the batteries-included YellowstoneTpuSender. We’ll build a small CLI tool that sends lamports to an arbitrary Solana address.
Here’s the Cargo.toml.
[package]
name = "example-jet-tpu-client"
version = "0.1.0"
edition = "2021"
[dependencies]
bincode = "1.3.3"
clap = { version = "4.5.51", features = ["derive"] }
dotenvy = "0.15.0"
futures = "0.3.31"
# Agave Crates
solana-client = "3.0.0"
solana-rpc-client-api = "3.0.0"
solana-rpc-client = "3.0.0"
solana-system-interface = { version = "3.0.0", features = ["bincode"] }
solana-bincode = "3.0.0"
solana-pubkey = "3.0.0"
solana-keypair = "3.0.0"
solana-account = "3.0.0"
solana-hash = "3.0.0"
solana-signature = "3.0.0"
solana-signer = "3.0.0"
solana-transaction = "3.0.0"
solana-commitment-config = "3.0.0"
solana-transaction-error = "3.0.0"
solana-message = "3.0.0"
# End
thiserror = "2.0.17"
tokio = "1.48.0"
yellowstone-jet-tpu-client = { version = "0.1.0", features = ["yellowstone-grpc"] }
Inside the main.rs file, import the following:
use {
clap::Parser,
solana_client::nonblocking::rpc_client::RpcClient,
solana_commitment_config::CommitmentConfig,
solana_hash::Hash,
    solana_keypair::Keypair,
    solana_message::{VersionedMessage, v0},
    solana_pubkey::Pubkey,
    solana_signature::Signature,
    solana_signer::Signer,
solana_system_interface::instruction::transfer,
solana_transaction::versioned::VersionedTransaction,
    std::{env, path::PathBuf, sync::Arc},
yellowstone_jet_tpu_client::{
core::TpuSenderResponse,
yellowstone_grpc::sender::{
Endpoints, NewYellowstoneTpuSender,
YellowstoneTpuSender,
create_yellowstone_tpu_sender_with_callback,
},
},
};
Let’s define clap arguments to use with our CLI:
#[derive(clap::Parser, Debug)]
struct Args {
    /// Path to .env file to load
    #[clap(long, short)]
    dotenv: Option<PathBuf>,
    /// Endpoint to the Solana RPC service
    #[clap(long, short)]
    rpc: Option<String>,
    /// Endpoint to the Yellowstone gRPC service
    #[clap(long, short)]
    grpc: Option<String>,
    /// X-Token for the Yellowstone gRPC service
    #[clap(long)]
    x_token: Option<String>,
    /// Path to identity keypair file
    #[clap(long)]
    identity: Option<PathBuf>,
    /// Recipient pubkey
    #[clap(long)]
    recipient: Option<String>,
}
We want to support dotenv to facilitate testing, so in our main entrypoint, we start parsing the clap arguments and load the dotenv file too:
#[tokio::main]
async fn main() {
    // We support a `.env` file with the following environment
    // variables: `RPC_ENDPOINT`, `GRPC_ENDPOINT` and `GRPC_X_TOKEN`.
use std::io::Write;
let mut out = std::io::stdout();
let args = Args::parse();
if let Ok(env_path) =
args.dotenv.unwrap_or("./.env".into()).canonicalize()
{
if dotenvy::from_path(env_path).is_err() {
eprintln!("Warning: Failed to load .env file");
}
} else {
eprintln!("Warning: Failed to canonicalize .env file path");
}
let recipient_pubkey: Pubkey = match args.recipient {
Some(recipient) => recipient.parse().expect("Failed to parse recipient pubkey"),
None => Pubkey::new_unique(),
};
let rpc_endpoint = match args.rpc {
Some(endpoint) => endpoint,
None => env::var("RPC_ENDPOINT")
.expect("RPC_ENDPOINT must be set in dotenv file or environment"),
};
let grpc_endpoint = match args.grpc {
Some(endpoint) => endpoint,
None => env::var("GRPC_ENDPOINT")
.expect("GRPC_ENDPOINT must be set in dotenv file or environment"),
};
let grpc_x_token = match args.x_token {
Some(x_token) => Some(x_token),
None => match env::var("GRPC_X_TOKEN") {
Ok(token) => Some(token),
Err(_) => {
eprintln!(
"Warning: GRPC_X_TOKEN not set in dotenv file or environment"
);
None
}
},
};
let identity = match args.identity {
Some(path) => {
solana_keypair::read_keypair_file(path)
.expect("Failed to read identity keypair file")
}
None => {
if let Ok(identity_path) = env::var("IDENTITY") {
solana_keypair::read_keypair_file(identity_path)
.expect("Failed to read identity keypair file from ENV")
} else {
eprintln!(
"IDENTITY not set in dotenv file or environment, using new random identity"
);
Keypair::new()
}
}
};
Now, let's create our actual service objects and, most importantly, our TPU sender!
let rpc_client = Arc::new(RpcClient::new_with_commitment(
rpc_endpoint.clone(),
CommitmentConfig::confirmed(),
));
writeln!(out, "Using identity: {}", identity.pubkey()).expect("writeln");
let endpoints = Endpoints {
rpc: rpc_endpoint,
grpc: grpc_endpoint,
grpc_x_token,
};
    // Pass the callback channel so we can track the
    // success or failure of our transaction.
let (callback_tx, mut callback_rx) =
tokio::sync::mpsc::unbounded_channel();
let NewYellowstoneTpuSender {
sender,
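        // We discard the join handle for the sender's related background
        // objects here for brevity; a long-running service would likely
        // keep it around for graceful shutdown.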
related_objects_jh: _,
} = create_yellowstone_tpu_sender_with_callback(
Default::default(),
identity.insecure_clone(),
endpoints,
callback_tx,
)
.await
.expect("tpu-sender");
Let’s build our business logic into a reusable function:
async fn send_lamports(
mut tpu_sender: YellowstoneTpuSender,
identity: &Keypair,
recipient: &Pubkey,
lamports: u64,
latest_blockhash: Hash,
) -> Signature {
    let instructions = vec![transfer(&identity.pubkey(), recipient, lamports)];
let transaction = VersionedTransaction::try_new(
VersionedMessage::V0(
v0::Message::try_compile(
&identity.pubkey(),
&instructions,
&[],
latest_blockhash,
)
.expect("try_compile"),
),
        &[identity],
)
.expect("try_new");
let signature = transaction.signatures[0];
    // The TPU sender expects bincode-serialized transaction data.
    // NOTE: other byte containers are supported, such as
    // `Bytes`, `Arc<Vec<u8>>` and `Arc<[u8]>`.
let bincoded_txn: Vec<u8> =
bincode::serialize(&transaction).expect("bincode::serialize");
    // Send the transaction to the current leader; the signature is
    // used to track the transaction's lifecycle.
tpu_sender
.send_txn(signature, bincoded_txn)
.await
.expect("send_transaction");
signature
}
Now that we have our send_lamports function, let's stitch everything together. Our final main entry point should look like this:
#[tokio::main]
async fn main() {
use std::io::Write;
let mut out = std::io::stdout();
let args = Args::parse();
if let Ok(env_path) =
args.dotenv.unwrap_or("./.env".into()).canonicalize()
{
if dotenvy::from_path(env_path).is_err() {
eprintln!("Warning: Failed to load .env file");
}
} else {
eprintln!("Warning: Failed to canonicalize .env file path");
}
let recipient_pubkey: Pubkey = match args.recipient {
Some(recipient) => recipient.parse().expect("Failed to parse recipient pubkey"),
None => Pubkey::new_unique(),
};
let rpc_endpoint = match args.rpc {
Some(endpoint) => endpoint,
None => env::var("RPC_ENDPOINT")
.expect("RPC_ENDPOINT must be set in dotenv file or environment"),
};
let grpc_endpoint = match args.grpc {
Some(endpoint) => endpoint,
None => env::var("GRPC_ENDPOINT")
.expect("GRPC_ENDPOINT must be set in dotenv file or environment"),
};
let grpc_x_token = match args.x_token {
Some(x_token) => Some(x_token),
None => match env::var("GRPC_X_TOKEN") {
Ok(token) => Some(token),
Err(_) => {
eprintln!(
"Warning: GRPC_X_TOKEN not set in dotenv file or environment"
);
None
}
},
};
let identity = match args.identity {
Some(path) => {
solana_keypair::read_keypair_file(path)
.expect("Failed to read identity keypair file")
}
None => {
if let Ok(identity_path) = env::var("IDENTITY") {
solana_keypair::read_keypair_file(identity_path)
.expect("Failed to read identity keypair file from ENV")
} else {
eprintln!(
"IDENTITY not set in dotenv file or environment, using new random identity"
);
Keypair::new()
}
}
};
let rpc_client = Arc::new(RpcClient::new_with_commitment(
rpc_endpoint.clone(),
CommitmentConfig::confirmed(),
));
writeln!(out, "Using identity: {}", identity.pubkey()).expect("writeln");
let endpoints = Endpoints {
rpc: rpc_endpoint,
grpc: grpc_endpoint,
grpc_x_token,
};
let (callback_tx, mut callback_rx) =
tokio::sync::mpsc::unbounded_channel();
let NewYellowstoneTpuSender {
sender,
related_objects_jh: _,
} = create_yellowstone_tpu_sender_with_callback(
Default::default(),
identity.insecure_clone(),
endpoints,
callback_tx,
)
.await
.expect("tpu-sender");
const LAMPORTS: u64 = 1000;
let latest_blockhash = rpc_client
.get_latest_blockhash()
.await
.expect("get_latest_blockhash");
let signature = send_lamports(
sender,
&identity,
&recipient_pubkey,
LAMPORTS,
latest_blockhash,
)
.await;
let TpuSenderResponse::TxSent(resp) = callback_rx
.recv()
.await
.expect("receive tpu sender response")
else {
panic!("unexpected tpu sender response");
};
    assert_eq!(
        resp.tx_sig, signature,
        "unexpected tx signature in response"
    );
writeln!(
&mut out,
"sent transaction with signature `{}` to validator `{}`",
resp.tx_sig, resp.remote_peer_identity
)
.expect("writeln");
}
You are now ready to send lamports: pass the endpoints as CLI arguments, or provide a .env file with your provider’s RPC/gRPC endpoints and x-token.
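For reference, a minimal .env file might look like this. The variable names are exactly what the code reads; the endpoints, token, and path are placeholders you should replace with your own.
RPC_ENDPOINT=https://api.mainnet-beta.solana.com
GRPC_ENDPOINT=https://your-yellowstone-grpc-provider.example:443
GRPC_X_TOKEN=your-x-token
IDENTITY=/path/to/identity.json
With that in place, a run boils down to something like cargo run -- --recipient <RECIPIENT_PUBKEY>.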
What's next?
This open-source release provides developers with primitives for building their own routing logic, without reinventing the wheel for QUIC handling or leader tracking.
In the near future, we plan to extend the library to support:
- Direct connection support to Cascade Marketplace.
- Yellowstone Shield integration.