The end of the black box: Triton’s new RPC pricing
TL;DR
- Traditional RPC pricing relies on abstract compute units and rigid fixed plans
- This makes infrastructure costs hard to predict, track, and plan around
- Credits and CUs hide the gap between “cheap” and “expensive” requests in lengthy conversion tables
- Fixed tiers push you into bundles bloated with features you don't need
- Developers are forced to spend weeks forecasting their traffic patterns and still risk overpaying, or underprovisioning and hitting overages
- That’s why we moved to a model where you pay for what you actually use, finally making RPC pricing simple, fair, and transparent
- We split the bill into simple categories based on two real cost drivers: bandwidth (moving data) and compute complexity (processing it)
- This creates a feedback loop where you can see exactly which parts of your workload drive your bill and optimise them
- Triton’s connection limits, RPS, and unit costs are the same on every subscription, so you are never forced to upgrade until you are ready to scale
- We also don’t charge an “emergency” premium, and bill all overages at base rates
Introduction
In the infrastructure market, pricing is often marketed as simple and transparent, but once you run real workloads, it becomes confusing and hard to predict.
If you have built on Solana for any length of time, you have probably run into the credits or compute units (CU) model and the strict fixed plans that sit under it. You sign up for a plan that promises “50 million requests.” You deploy your bot or dashboard. A week later, your service is turned off, or worse, you're silently shifted into “overage” pricing, where every request costs 3x the base rate.
Providers introduce credits and CUs to make things feel simpler on the surface, but in practice, the model often traps teams that don’t have the time to simulate every query pattern up front.
At Triton One, we think a pricing model that optimises for marketing and margins instead of developer success is fundamentally broken. To walk the talk, we built a pricing model that’s truly transparent, fair and straightforward to work with, with no CUs, rigid bundles, or per-method credit tables. You get enterprise performance at a developer price point.
“Standard” pricing models fail most builders
How credits and CUs hide your real costs
One of the biggest sources of confusion for Solana developers is the disconnect between a request and a credit.
100 million credits sounds like a lot if you assume 1 request equals 1 credit. But once you learn that the query you use most often costs 40 credits to run, it becomes clear you’ll burn through your allowance well before the month is up.
Not all requests are created equal. A simple getSlot or getBalance is lightweight and puts almost no strain on the node. A getProgramAccounts query with complex filters might require the node to scan memory, filter results, and serialise a large payload.
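A minimal sketch of that difference, using @solana/web3.js (the endpoint URL and the owner address are placeholders):

```ts
import { Connection, PublicKey } from "@solana/web3.js";

const connection = new Connection("https://your-triton-endpoint.example.com");

async function compareRequestWeights(owner: PublicKey) {
  // Lightweight: a single integer in the response, near-zero work for the node.
  const slot = await connection.getSlot();

  // Heavy: the node scans every account owned by the SPL Token program,
  // applies both filters, and serialises every matching account it finds.
  const tokenProgram = new PublicKey("TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA");
  const accounts = await connection.getProgramAccounts(tokenProgram, {
    filters: [
      { dataSize: 165 },                                   // SPL token accounts only
      { memcmp: { offset: 32, bytes: owner.toBase58() } }, // held by this wallet
    ],
  });

  console.log(`slot ${slot}; ${accounts.length} matching accounts returned`);
}
```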
In the CU model, that heavy request might be weighted at 50, 100 or even 500 credits.
If you're building a trading platform, doing HFT or spinning up indexers, you're not sending “average” queries. You're sending expensive ones. As a result, your “100 million credits” can evaporate after only a few hundred thousand actual calls.
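A quick back-of-the-envelope calculation makes the cliff concrete (the 500-credit weighting comes from the example above; the request rate is a made-up placeholder):

```ts
// Hypothetical credit burn under a CU-style plan.
const monthlyCredits = 100_000_000;
const creditsPerHeavyCall = 500;  // heavy getProgramAccounts weighting from above
const requestsPerSecond = 10;     // modest, sustained traffic (placeholder)

const callsIncluded = monthlyCredits / creditsPerHeavyCall;            // 200,000 calls
const hoursUntilExhausted = callsIncluded / (requestsPerSecond * 3600);

console.log(`${callsIncluded.toLocaleString()} calls, exhausted in ~${hoursUntilExhausted.toFixed(1)} hours`);
// "200,000 calls, exhausted in ~5.6 hours"
```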
To be fair, many providers publish detailed CU tables for every method. The problem is that this information is not practical to use in real life. You're expected to trust an “average” CU per request, or spend days modelling every call type and traffic pattern before you even pick a plan.
In the end, most teams guess, ship, and only discover the real cost once their infrastructure’s already built on a closed stack that’s painful to move away from.
How simple tiers force you to overpay
To make things worse, these credits are often locked inside rigid, one-size-fits-all plans. You end up paying for an entire bundle loaded with extras you don’t want just to get the one thing you actually need.
This leads to 2 predictable outcomes:
Paying for air
You upgrade to the $499 tier for the higher throughput ceiling even though you know you won’t need all the “included” credits and features. You’re paying for bundled resources you never touch, and end up subsidising other users who can fully consume what that tier includes.
The overage cliff
You pick a plan that looks fine on paper, but because your queries are heavier than their “average,” you run out by week 2. Now you’re paying punitive overage rates just to keep your app online.
From flat fees to usage-based pricing
For years, Triton avoided the credit and tier mess by charging a flat monthly fee. It was simple and honest: 1 price, unlimited requests, no overages, and no plan-dependent rate limits.
But simplicity came with tradeoffs. The high entry cost made it harder for solo developers, hackathon teams, and early-stage startups to get access to our premium infrastructure.
So we asked ourselves: what pricing would work for the developer with a $150 budget and the institution with a $50,000 one, without subsidising one or overcharging the other?
We landed on a model where people pay for what they actually use.
- It's simple because there are only two usage dimensions with clear prices: gigabytes of bandwidth and millions of queries
- It's transparent because you can see exactly which part of your workload drives cost
- And it's fair because limits and unit prices don't change when you cross an arbitrary threshold
How do we price usage?
In practice, the cost of RPC comes down to 2 things: how much data you move and how much work the node has to do to serve it. We bill those 2 cost drivers separately as bandwidth and compute complexity. Because pricing tracks actual cost, we offer the best rates we can to every builder without hiding anything in bundles, CUs or overage tables.
Bandwidth. Moving data costs money. Whether it's websocket traffic or large RPC payloads, we charge a simple rate per gigabyte transferred.
Compute complexity. We group requests into simple categories by how hard they are to serve. Since streaming doesn’t require much compute, it incurs only bandwidth costs, whereas RPC calls add a per-million-query rate on top of bandwidth.
You can find the complete pricing list on our website, but here’s a summary:
- All streaming services: $0.08/GB
- Standard RPC calls, gTLA and Steamboat (gPA): $0.08/GB + $10/M
- Recent ledger requests (last 3 epochs): $0.08/GB + $10/M
- Historical ledger requests: $0.08/GB + $25/M
- Metaplex DAS, Photon API: $0.08/GB + $50/M
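As a rough illustration, a month’s bill can be estimated directly from these rates (the traffic figures below are hypothetical placeholders; see the full pricing list for anything not covered in the summary):

```ts
// Rough monthly estimate from the summary rates above.
// Substitute your own measured traffic.
const bandwidthGB = 300;          // total GB transferred (streams + RPC payloads)
const standardCalls = 12_000_000; // standard RPC / gTLA / Steamboat queries
const historicalCalls = 500_000;  // historical ledger queries

const bill =
  0.08 * bandwidthGB +                 // $0.08 per GB on all traffic
  10 * (standardCalls / 1_000_000) +   // $10 per million standard queries
  25 * (historicalCalls / 1_000_000);  // $25 per million historical queries

console.log(`Estimated monthly bill: $${bill.toFixed(2)}`); // $24 + $120 + $12.50 = $156.50
```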
How is this better than classic pay-as-you-go?
Most pay-as-you-go models still rely on credits and complex tables. While that reduces the risk of surprise overage rates or overpaying for bloated bundles, cost forecasting remains almost as tricky as with fixed tiers.
Our new pricing is usage-based, but also designed to be simple to work with. Instead of abstract units, your bill maps back to a small set of real cost drivers you can actually track and plan around. That creates a direct feedback loop with your code, making it much easier to identify bottlenecks early and optimise them fast.
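For example, a thin wrapper around your RPC calls is enough to map usage back to those drivers (the trackedRpc helper and category labels below are illustrative placeholders, not a Triton API):

```ts
// Minimal sketch of instrumenting a client so usage maps back to cost drivers.
const tally = { bytes: 0, queries: { standard: 0, historical: 0, das: 0 } };

async function trackedRpc(
  category: keyof typeof tally.queries,
  body: object,
  endpoint = "https://your-triton-endpoint.example.com"
): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const text = await res.text();
  tally.bytes += text.length;   // approximate bandwidth consumed by the response
  tally.queries[category] += 1; // query count per compute category
  return JSON.parse(text);
}

// After a benchmarking run, tally shows which driver dominates the bill,
// e.g. { bytes: 1.2e9, queries: { standard: 95000, historical: 300, das: 4200 } }
```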
You typically start with a prepaid PAYG balance to test, benchmark your traffic, and quantify your infrastructure needs. Once your pattern is stable, we turn that into a tailored monthly plan that covers your expected load, so billing stays predictable and any overage is charged at the base rate instead of cutting you off the way an exhausted prepaid balance would.
Unlocking premium infrastructure for every builder
This shift is about removing barriers.
By moving to usage-based pricing, we're unlocking Triton’s enterprise-grade RPC stack for everyone. A builder can now start with a $125 prepaid balance and get the same speed, latency, and reliability on our shared infrastructure as our largest institutional clients.
We're not hiding the complexity. We're exposing it so you can control it.
Fair. Transparent. Developer-centric.