What You’ll Build
By the end of this guide, you’ll have a working gRPC stream that monitors Solana account updates in real time, with automatic reconnection and error handling.
Choose Your Access Method
Select how you want to access Yellowstone gRPC:
LaserStream (recommended for most users)
Multi-tenant, highly available
Automatic failover and backfill
Quick setup with API key
Developer+ (devnet), Business+ (mainnet)
Get LaserStream Access →
Dedicated Nodes (for high-volume or custom needs)
Exclusive gRPC endpoint
Guaranteed resources
Get Dedicated Node →
Set Up Your Environment
Create a new project and install dependencies:

TypeScript/JavaScript

```shell
mkdir solana-grpc-stream
cd solana-grpc-stream
npm init -y
npm install @triton-one/yellowstone-grpc bs58
npm install typescript ts-node @types/node --save-dev
npx tsc --init
```

Rust

```shell
cargo new solana-grpc-stream
cd solana-grpc-stream
```

Add to Cargo.toml:

```toml
[dependencies]
yellowstone-grpc-client = "1.13.0"
yellowstone-grpc-proto = "1.13.0"
tokio = { version = "1.0", features = ["full"] }
anyhow = "1.0"
futures = "0.3"
tonic = "0.10"
bs58 = "0.5" # used to base58-encode raw pubkey bytes for display
```

Go

```shell
mkdir solana-grpc-stream
cd solana-grpc-stream
go mod init solana-grpc-stream
go get github.com/rpcpool/yellowstone-grpc/examples/golang@latest
go get google.golang.org/grpc@v1.67.1
go get github.com/mr-tron/base58@latest  # used to base58-encode raw pubkey bytes for display
```
Get Your Credentials
Obtain your gRPC endpoint and authentication token.

LaserStream
Sign up for Developer+ plan (devnet) or Business+ plan (mainnet) at dashboard.helius.dev
Get your API key from the dashboard
Choose your regional endpoint:
Mainnet Endpoints:
US East: https://laserstream-mainnet-ewr.helius-rpc.com
US West: https://laserstream-mainnet-slc.helius-rpc.com
Europe: https://laserstream-mainnet-fra.helius-rpc.com
Asia: https://laserstream-mainnet-tyo.helius-rpc.com
Devnet: https://laserstream-devnet-ewr.helius-rpc.com
Dedicated Nodes

Order a dedicated node from dashboard.helius.dev
Once provisioned, you’ll receive:
Your gRPC endpoint (typically your-node.rpc.helius.dev:2053)
Your authentication token
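Wherever the examples below say "your-grpc-endpoint" and "YOUR_API_KEY", you can paste your values directly, but reading them from the environment keeps credentials out of source control. A minimal sketch — the variable names LASERSTREAM_ENDPOINT and HELIUS_API_KEY are illustrative, not required by any SDK:

```typescript
// Illustrative config helper: load the gRPC endpoint and API key from
// environment variables instead of hard-coding them. The variable names
// below are examples only; use whatever your deployment provides.
export function loadGrpcConfig(): { endpoint: string; apiKey: string } {
  const endpoint = process.env.LASERSTREAM_ENDPOINT;
  const apiKey = process.env.HELIUS_API_KEY;
  if (!endpoint || !apiKey) {
    throw new Error(
      "Set LASERSTREAM_ENDPOINT and HELIUS_API_KEY before starting the stream"
    );
  }
  return { endpoint, apiKey };
}
```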
Create Your First Stream
Create a robust stream manager with the following complete example.

Create stream-manager.ts:

```typescript
import Client, { SubscribeRequest } from "@triton-one/yellowstone-grpc";
import bs58 from "bs58";

export class StreamManager {
  private client: Client;
  private stream: any;
  private isConnected = false;
  private reconnectAttempts = 0;
  private pingId = 0;
  private keepaliveTimer?: NodeJS.Timeout;
  private readonly maxReconnectAttempts = 10;
  private readonly baseReconnectDelay = 1000; // 1 second

  constructor(
    private endpoint: string,
    private apiKey: string,
    private onData: (data: any) => void,
    private onError?: (error: any) => void
  ) {
    this.client = new Client(endpoint, apiKey, {
      "grpc.max_receive_message_length": 64 * 1024 * 1024,
    });
  }

  async connect(subscribeRequest: SubscribeRequest): Promise<void> {
    try {
      console.log(`Connecting to ${this.endpoint}...`);
      this.stream = await this.client.subscribe();
      this.isConnected = true;
      this.reconnectAttempts = 0;

      // Set up event handlers
      this.stream.on("data", this.handleData.bind(this));
      this.stream.on("error", this.handleStreamError.bind(this));
      this.stream.on("end", () => this.handleDisconnect(subscribeRequest));
      this.stream.on("close", () => this.handleDisconnect(subscribeRequest));

      // Send subscription request
      await this.writeRequest(subscribeRequest);

      // Start keepalive
      this.startKeepalive();
      console.log("✅ Connected and subscribed successfully");
    } catch (error) {
      console.error("Connection failed:", error);
      await this.reconnect(subscribeRequest);
    }
  }

  private writeRequest(request: SubscribeRequest): Promise<void> {
    return new Promise((resolve, reject) => {
      this.stream.write(request, (err: any) => {
        if (err) reject(err);
        else resolve();
      });
    });
  }

  private handleData(data: any): void {
    try {
      // Convert binary fields to base58 strings for readability
      const processedData = this.processBuffers(data);
      this.onData(processedData);
    } catch (error) {
      console.error("Error processing data:", error);
    }
  }

  private processBuffers(obj: any): any {
    if (!obj) return obj;
    if (Buffer.isBuffer(obj) || obj instanceof Uint8Array) {
      return bs58.encode(obj);
    }
    if (Array.isArray(obj)) {
      return obj.map((item) => this.processBuffers(item));
    }
    if (typeof obj === "object") {
      return Object.fromEntries(
        Object.entries(obj).map(([k, v]) => [k, this.processBuffers(v)])
      );
    }
    return obj;
  }

  private handleStreamError(error: any): void {
    console.error("Stream error:", error);
    this.isConnected = false;
    if (this.onError) this.onError(error);
  }

  private async handleDisconnect(subscribeRequest: SubscribeRequest): Promise<void> {
    if (this.isConnected) {
      console.log("Stream disconnected, attempting to reconnect...");
      this.isConnected = false;
      await this.reconnect(subscribeRequest);
    }
  }

  private async reconnect(subscribeRequest: SubscribeRequest): Promise<void> {
    if (this.reconnectAttempts >= this.maxReconnectAttempts) {
      console.error("Max reconnection attempts reached. Giving up.");
      return;
    }
    this.reconnectAttempts++;
    // Exponential backoff: 1s, 2s, 4s, ... capped at 32s
    const delay = this.baseReconnectDelay * Math.pow(2, Math.min(this.reconnectAttempts - 1, 5));
    console.log(`Reconnect attempt ${this.reconnectAttempts}/${this.maxReconnectAttempts} in ${delay}ms...`);
    setTimeout(() => {
      this.connect(subscribeRequest).catch(console.error);
    }, delay);
  }

  private startKeepalive(): void {
    // Ping every 30 seconds so idle connections stay open
    this.keepaliveTimer = setInterval(() => {
      if (this.isConnected) {
        const pingRequest: SubscribeRequest = {
          // The ping id is an int32 in the proto, so use a small counter
          // rather than Date.now()
          ping: { id: ++this.pingId },
          accounts: {},
          accountsDataSlice: [],
          transactions: {},
          slots: {},
          blocks: {},
          blocksMeta: {},
          entry: {},
          transactionsStatus: {},
        };
        this.writeRequest(pingRequest).catch(console.error);
      }
    }, 30_000);
  }

  disconnect(): void {
    if (this.keepaliveTimer) clearInterval(this.keepaliveTimer);
    if (this.stream) {
      this.stream.end();
    }
    this.client.close();
    this.isConnected = false;
  }
}
```
Create main.ts:

```typescript
import { StreamManager } from "./stream-manager";
import { CommitmentLevel, SubscribeRequest } from "@triton-one/yellowstone-grpc";

// Configuration
const ENDPOINT = "your-grpc-endpoint"; // LaserStream or Dedicated Node endpoint
const API_KEY = "YOUR_API_KEY";

// USDC token mint used as the example account
const USDC_MINT = "EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v";

async function main() {
  const streamManager = new StreamManager(
    ENDPOINT,
    API_KEY,
    handleAccountUpdate,
    handleError
  );

  // Subscribe to USDC mint account updates
  const subscribeRequest: SubscribeRequest = {
    accounts: {
      accountSubscribe: {
        account: [USDC_MINT],
        owner: [],
        filters: [],
      },
    },
    accountsDataSlice: [],
    commitment: CommitmentLevel.CONFIRMED,
    slots: {},
    transactions: {},
    transactionsStatus: {},
    blocks: {},
    blocksMeta: {},
    entry: {},
  };

  console.log("🚀 Starting USDC mint account monitoring...");
  await streamManager.connect(subscribeRequest);

  // Handle graceful shutdown
  process.on("SIGINT", () => {
    console.log("\n🛑 Shutting down...");
    streamManager.disconnect();
    process.exit(0);
  });
}

function handleAccountUpdate(data: any): void {
  if (data.account) {
    const account = data.account.account;
    console.log("\n📊 Account Update:");
    console.log(`  Account: ${account.pubkey}`);
    console.log(`  Owner: ${account.owner}`);
    console.log(`  Lamports: ${account.lamports}`);
    console.log(`  Data Length: ${account.data?.length || 0} bytes`);
    console.log(`  Slot: ${data.account.slot}`);
    console.log(`  Timestamp: ${new Date().toISOString()}`);
  }
  if (data.pong) {
    console.log(`💓 Keepalive pong received (id: ${data.pong.id})`);
  }
}

function handleError(error: any): void {
  console.error("❌ Stream error:", error.message);
}

main().catch(console.error);
```
Run your stream:

```shell
npx ts-node main.ts
```

Create src/main.rs:

```rust
use std::collections::HashMap;

use futures::StreamExt;
use tokio::time::{sleep, Duration};
use yellowstone_grpc_client::GeyserGrpcClient;
use yellowstone_grpc_proto::prelude::*;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let endpoint = "your-grpc-endpoint"; // Replace with your endpoint
    let token = Some("YOUR_API_KEY".to_string()); // Replace with your API key

    let mut client = GeyserGrpcClient::connect(endpoint, token, None).await?;

    // USDC mint account
    let usdc_mint = "EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v";

    let mut accounts = HashMap::new();
    accounts.insert(
        "usdc_mint".to_string(),
        SubscribeRequestFilterAccounts {
            account: vec![usdc_mint.to_string()],
            owner: vec![],
            filters: vec![],
        },
    );

    let mut stream = client
        .subscribe_once(
            HashMap::new(),                   // slots
            accounts,                         // accounts
            HashMap::new(),                   // transactions
            HashMap::new(),                   // transactions_status
            HashMap::new(),                   // entry
            HashMap::new(),                   // blocks
            HashMap::new(),                   // blocks_meta
            Some(CommitmentLevel::Confirmed), // commitment
            Vec::new(),                       // accounts_data_slice
            None,                             // ping
        )
        .await?;

    println!("🚀 Connected! Monitoring USDC mint account...");

    while let Some(message) = stream.next().await {
        match message {
            Ok(msg) => {
                if let Some(update) = msg.update_oneof {
                    match update {
                        subscribe_update::UpdateOneof::Account(account_update) => {
                            println!("\n📊 Account Update:");
                            if let Some(account) = account_update.account.as_ref() {
                                // Pubkeys arrive as raw bytes; base58-encode them
                                // for display (requires the `bs58` crate in Cargo.toml)
                                println!("  Account: {}", bs58::encode(&account.pubkey).into_string());
                                println!("  Lamports: {}", account.lamports);
                            }
                            println!("  Slot: {}", account_update.slot);
                        }
                        _ => {} // Handle other update types as needed
                    }
                }
            }
            Err(error) => {
                eprintln!("❌ Stream error: {}", error);
                sleep(Duration::from_secs(1)).await;
            }
        }
    }

    Ok(())
}
```
Run your stream:

```shell
cargo run
```

Create main.go:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/mr-tron/base58"
	"github.com/rpcpool/yellowstone-grpc/examples/golang/pkg/grpc"
	pb "github.com/rpcpool/yellowstone-grpc/examples/golang/pkg/proto"
	"google.golang.org/grpc/metadata"
)

func main() {
	endpoint := "your-grpc-endpoint" // Replace with your endpoint
	apiKey := "YOUR_API_KEY"         // Replace with your API key

	client, err := grpc.NewGrpcConnection(context.Background(), endpoint)
	if err != nil {
		log.Fatalf("Failed to connect: %v", err)
	}
	defer client.Close()

	// Add authentication via the x-token header
	ctx := metadata.AppendToOutgoingContext(context.Background(), "x-token", apiKey)

	// USDC mint account
	usdcMint := "EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v"

	stream, err := client.Subscribe(ctx)
	if err != nil {
		log.Fatalf("Failed to subscribe: %v", err)
	}

	// Send subscription request (commitment is an optional proto field,
	// so it is passed by pointer)
	commitment := pb.CommitmentLevel_CONFIRMED
	request := &pb.SubscribeRequest{
		Accounts: map[string]*pb.SubscribeRequestFilterAccounts{
			"usdc_mint": {
				Account: []string{usdcMint},
				Owner:   []string{},
				Filters: []*pb.SubscribeRequestFilterAccountsFilter{},
			},
		},
		Commitment: &commitment,
	}
	if err := stream.Send(request); err != nil {
		log.Fatalf("Failed to send request: %v", err)
	}

	fmt.Println("🚀 Connected! Monitoring USDC mint account...")

	for {
		response, err := stream.Recv()
		if err != nil {
			log.Printf("❌ Stream error: %v", err)
			time.Sleep(time.Second)
			continue
		}
		if account := response.GetAccount(); account != nil {
			fmt.Printf("\n📊 Account Update:\n")
			// Pubkeys arrive as raw bytes; base58-encode them for display
			fmt.Printf("  Account: %s\n", base58.Encode(account.Account.Pubkey))
			fmt.Printf("  Lamports: %d\n", account.Account.Lamports)
			fmt.Printf("  Slot: %d\n", account.Slot)
			fmt.Printf("  Timestamp: %s\n", time.Now().Format(time.RFC3339))
		}
	}
}
```
Run your stream:

```shell
go run main.go
```
Test Your Stream
Run your application and verify it’s working:
Start your stream using the command for your language
Look for connection confirmation in the console
Wait for account updates - you should see periodic updates to the USDC mint account
Test reconnection by temporarily disconnecting your internet
Verify keepalive by watching for pong messages every 30 seconds
Expected output (TypeScript example; note that USDC is owned by the original SPL Token program):

```
🚀 Starting USDC mint account monitoring...
Connecting to your-grpc-endpoint...
✅ Connected and subscribed successfully
💓 Keepalive pong received (id: 1)

📊 Account Update:
  Account: EPjFWdd5AufqSSqeM2qN1xzybapC8G4wEGGkZwyTDt1v
  Owner: TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA
  Lamports: 1461600
  Data Length: 82 bytes
  Slot: 275123456
  Timestamp: 2024-01-15T10:30:45.123Z
```
What’s Next?
Now that you have a working gRPC stream, explore these monitoring guides:
Account Monitoring Advanced account filtering and data slicing techniques
Transaction Monitoring Stream transactions with program filtering and execution details
Slot & Block Monitoring Monitor network consensus and block production
Stream Pump AMM Data Real-world example: monitor DeFi protocol data
Troubleshooting
Symptoms: Connection timeouts, authentication errors
Solutions:
Verify your endpoint URL and API key
Check if your plan includes gRPC access
Ensure you’re using the correct port (typically 2053 for Dedicated Nodes)
For LaserStream devnet, you need at least a Developer plan. For mainnet, you need at least a Business plan
Symptoms: Stream connects but no account updates appear
Solutions:
USDC mint updates are infrequent - try monitoring a more active account
Check your commitment level (try PROCESSED for more frequent updates)
Verify your account filter configuration
Monitor a token account instead of the mint for more activity
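To confirm the pipeline works end to end, it can help to temporarily subscribe to something noisier than a single mint account. Here is a sketch of such a request, written as a plain object mirroring the SubscribeRequest shape used above; the filter label tokenAccounts is arbitrary, and the numeric commitment constant assumes the Geyser proto's PROCESSED = 0:

```typescript
// Sketch: a deliberately noisy subscription for smoke-testing the stream.
// Watches every account owned by the SPL Token program at PROCESSED
// commitment, which produces far more updates than one quiet mint account.
const TOKEN_PROGRAM_ID = "TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA";

// Geyser proto CommitmentLevel values: PROCESSED = 0, CONFIRMED = 1, FINALIZED = 2
const PROCESSED = 0;

const busierRequest = {
  accounts: {
    tokenAccounts: {             // arbitrary label for this filter
      account: [],               // no specific accounts...
      owner: [TOKEN_PROGRAM_ID], // ...anything owned by the Token program
      filters: [],
    },
  },
  commitment: PROCESSED,
  accountsDataSlice: [],
  slots: {},
  transactions: {},
  transactionsStatus: {},
  blocks: {},
  blocksMeta: {},
  entry: {},
};
```

Swap this in for the USDC request while debugging, then restore your narrow filter; this much traffic is far too heavy to leave on in production.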
Symptoms: Frequent disconnections, reconnection loops
Solutions:
Implement exponential backoff (included in examples above)
Check network stability
Ensure keepalive pings are working (every 30 seconds)
Monitor server-side rate limits
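For reference, the backoff schedule implemented in the examples above can be written as a one-line function:

```typescript
// delay = base * 2^min(attempt - 1, 5): 1s, 2s, 4s, 8s, 16s, then capped at 32s
function backoffDelay(attempt: number, baseMs: number = 1000): number {
  return baseMs * Math.pow(2, Math.min(attempt - 1, 5));
}
```

With the default 1-second base, attempt 1 waits 1000 ms and every attempt from 6 onward waits 32000 ms, so a flapping connection never hammers the server.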
Best Practices
Production Readiness Checklist:
✅ Implement exponential backoff for reconnections
✅ Use keepalive pings every 30 seconds
✅ Handle all stream events (data, error, end, close)
✅ Process data asynchronously to avoid blocking
✅ Monitor connection health and alert on failures
✅ Use appropriate commitment levels for your use case
✅ Filter data as specifically as possible to reduce bandwidth
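As a concrete example of the last point, accountsDataSlice lets the server send only the byte range you care about. A sketch, assuming the standard SPL token account layout (mint at offset 0, owner at 32, and the u64 amount at offset 64); the account address is a placeholder:

```typescript
// Sketch: request only the 8-byte token amount from each update instead of
// the full 165-byte SPL token account, cutting bandwidth per update.
const slicedRequest = {
  accounts: {
    watchedTokenAccount: {                       // arbitrary filter label
      account: ["<your token account address>"], // placeholder address
      owner: [],
      filters: [],
    },
  },
  // SPL token account layout: mint [0..32), owner [32..64), amount [64..72)
  accountsDataSlice: [{ offset: 64, length: 8 }],
  slots: {},
  transactions: {},
  transactionsStatus: {},
  blocks: {},
  blocksMeta: {},
  entry: {},
};
```

Each update's data field then contains just the amount bytes; decode them as a little-endian u64.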