October 30, 2024

Sealevel internals: how Solana actually parallelizes transactions

A developer's guide to Solana's transaction processing pipeline. From Gulf Stream to Sealevel to Turbine, and where parallelism actually happens.

The transaction lifecycle

When you call sendTransaction, your transaction doesn't immediately execute. It enters a multi-stage pipeline:

1. Gulf Stream forwards the transaction directly to the current and upcoming leaders, instead of a global mempool.
2. The fetch and SigVerify stages deserialize packets and verify signatures.
3. The banking stage schedules the transaction, acquires its account locks, and executes it in the Sealevel runtime.
4. Turbine propagates the resulting block shreds to the rest of the cluster.

The banking stage scheduler

The banking stage is the critical piece for parallelism. It maintains a priority queue of pending transactions and a lock table of account access. When a thread becomes available, the scheduler:

1. Pops the highest-priority transaction from the queue.
2. Checks the lock table: every account the transaction writes must be unlocked, and every account it reads must not be write-locked.
3. If the locks are free, records them in the lock table and dispatches the transaction to the thread.
4. If not, skips the transaction and tries the next one in priority order.

This is a greedy scheduling algorithm. It doesn't look ahead or optimize globally. It just picks the next available transaction that doesn't conflict with currently running transactions. Simple, fast, good enough for validator throughput.
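That greedy pass can be sketched in a few lines. This is an illustrative model only, not the validator's actual implementation; the `Tx` shape and account names are hypothetical.

```typescript
// Hypothetical transaction shape: which accounts it reads and writes.
type Tx = { id: string; reads: string[]; writes: string[] };

// Lock table: write locks are exclusive, read locks are shared.
class LockTable {
  private readLocks = new Map<string, number>();
  private writeLocks = new Set<string>();

  canLock(tx: Tx): boolean {
    // A write conflicts with any existing lock; a read conflicts only with a write lock.
    return (
      tx.writes.every((a) => !this.writeLocks.has(a) && !this.readLocks.has(a)) &&
      tx.reads.every((a) => !this.writeLocks.has(a))
    );
  }

  lock(tx: Tx): void {
    tx.writes.forEach((a) => this.writeLocks.add(a));
    tx.reads.forEach((a) => this.readLocks.set(a, (this.readLocks.get(a) ?? 0) + 1));
  }
}

// One greedy pass: walk the priority-ordered queue, dispatch every
// transaction whose locks are free, skip (not wait on) the ones that conflict.
function schedule(queue: Tx[]): { dispatched: Tx[]; deferred: Tx[] } {
  const table = new LockTable();
  const dispatched: Tx[] = [];
  const deferred: Tx[] = [];
  for (const tx of queue) {
    if (table.canLock(tx)) {
      table.lock(tx);
      dispatched.push(tx);
    } else {
      deferred.push(tx);
    }
  }
  return { dispatched, deferred };
}
```

Note the deliberate simplicity: no lookahead, no reordering of the deferred set -- a deferred transaction just waits for a later pass, which is exactly the greedy behavior described above.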

But "good enough for validator throughput" is not the same as "optimal for your application." The validator's scheduler operates at the block level, across thousands of transactions from hundreds of users. It doesn't know that your five transactions are related and could be structured better.

Account lock granularity

Solana's locking is at the account level, not the program level. This is important. Two transactions calling the same program but accessing different accounts can run in parallel. Two transactions calling different programs but writing to the same account cannot.

lock_examples.txt
# CAN run in parallel:
TX1: swap on Raydium pool A (writes: pool_A, user_ata_1)
TX2: swap on Orca pool B    (writes: pool_B, user_ata_2)
# Different accounts -> no conflict -> parallel

# CANNOT run in parallel:
TX1: swap USDC->SOL on Raydium (writes: user_sol_ata)
TX2: stake SOL on Marinade     (reads:  user_sol_ata)
# Shared account with write -> conflict -> sequential

# CAN run in parallel:
TX1: read pool state (reads: pool_state)
TX2: read pool state (reads: pool_state)
# Read-read on same account -> no conflict -> parallel

The implication: if you want parallelism, you need to minimize account overlap between transactions. Specifically, minimize write overlap. Read overlap is free.

Compute units and scheduling priority

Every Solana transaction has a compute budget -- the maximum number of compute units (CU) it can consume. The default is 200,000 CU per instruction. You can request up to 1,400,000 CU per transaction with the SetComputeUnitLimit instruction.

Priority fees are denominated in microlamports per CU. A transaction requesting 100,000 CU with a priority fee of 1,000 microlamports/CU pays 100,000,000 microlamports -- that is, 100 lamports, or 0.0000001 SOL -- in priority fees. The scheduler uses this to order transactions: higher fee-per-CU goes first.
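As a sanity check on the units (1 lamport = 10^6 microlamports, 1 SOL = 10^9 lamports), the fee arithmetic written out:

```typescript
// Unit constants for Solana fee math.
const MICROLAMPORTS_PER_LAMPORT = 1_000_000;
const LAMPORTS_PER_SOL = 1_000_000_000;

// Priority fee in lamports for a given CU limit and per-CU price.
function priorityFeeLamports(cuLimit: number, microLamportsPerCu: number): number {
  return (cuLimit * microLamportsPerCu) / MICROLAMPORTS_PER_LAMPORT;
}

// 100,000 CU at 1,000 microlamports/CU:
const feeLamports = priorityFeeLamports(100_000, 1_000); // 100 lamports
const feeSol = feeLamports / LAMPORTS_PER_SOL;           // 0.0000001 SOL
```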

This creates an interesting optimization opportunity for parallel execution. If you split a large operation into multiple smaller transactions, each with its own compute budget, the total CU consumption might be the same. But the individual transactions are smaller, acquire fewer locks, and release them faster. This reduces contention and increases the chance of parallel execution.
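One way to sketch that splitting idea: greedily group transactions into batches with no lock conflicts, so every batch is eligible for parallel execution. The types and names here are illustrative, not any particular library's API.

```typescript
// Hypothetical access list for a client-side transaction.
type TxAccess = { id: string; reads: string[]; writes: string[] };

// Two transactions conflict iff one writes an account the other touches.
function conflictsWith(a: TxAccess, b: TxAccess): boolean {
  const aw = new Set(a.writes);
  const bw = new Set(b.writes);
  return (
    b.writes.some((x) => aw.has(x)) || // write-write overlap
    b.reads.some((x) => aw.has(x)) ||  // a writes what b reads
    a.reads.some((x) => bw.has(x))     // b writes what a reads
  );
}

// Greedy partition: place each transaction into the first batch it does
// not conflict with; open a new batch otherwise.
function partition(txs: TxAccess[]): TxAccess[][] {
  const batches: TxAccess[][] = [];
  for (const tx of txs) {
    const batch = batches.find((b) => b.every((t) => !conflictsWith(t, tx)));
    if (batch) batch.push(tx);
    else batches.push([tx]);
  }
  return batches;
}
```

Each resulting batch holds mutually non-conflicting transactions, so a lock-based scheduler can run its members concurrently; the batches themselves still execute in sequence.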

The versioned transaction format

Legacy Solana transactions can reference only about 35 accounts (due to the 1232-byte packet size limit, with each account key costing 32 bytes). Versioned (v0) transactions with address lookup tables (ALTs) extend this by referencing accounts from on-chain tables, reducing the per-account overhead from 32 bytes to 1 byte.

This matters for parallelism because complex operations often need many accounts. A multi-hop swap through three pools might reference 20+ accounts. Without ALTs, you can barely fit two such swaps in one transaction. With ALTs, you have more room -- but you're still constrained by the compute budget.
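Using the numbers above (32 bytes per static account key, 1 byte per ALT reference, 1232-byte packet limit), the savings are easy to quantify:

```typescript
// Packet size limit for a Solana transaction, in bytes.
const PACKET_SIZE_LIMIT = 1232;

// Bytes saved by moving n account references out of the static key list
// and into an address lookup table (32 bytes each -> 1 byte each).
function bytesSaved(n: number): number {
  return n * (32 - 1);
}

// A 20-account multi-hop swap: moving all 20 references into an ALT
// frees 620 bytes -- roughly half the packet budget.
const saved = bytesSaved(20);
const shareOfPacket = saved / PACKET_SIZE_LIMIT; // ~0.5
```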

The real solution is to not fight the transaction size limit. Instead of cramming everything into one transaction, split into multiple transactions that each fit comfortably. Let Sealevel parallelize them. This is exactly what IVZA's scheduler does.

Where the ecosystem is headed

Solana's core team has been iterating on the scheduler. The new scheduler implementation improves fairness and reduces CU waste from failed lock acquisitions. Upcoming changes to the banking stage will make the scheduler even more parallelism-friendly.

On the application side, the trend is toward intent-based architectures. Jupiter already abstracts swap routing. Jito abstracts bundle submission. The missing piece is transaction structuring -- taking a complex operation and automatically producing the optimal set of parallel transactions.

Sealevel gave us parallel execution at the VM level. Jito gave us atomic bundles. What's needed now is the middleware that connects application intent to optimal execution structure. That's what we're building.

See how IVZA structures transactions for parallelism.