Sealevel is not the EVM
Most blockchain developers come from Ethereum. The EVM processes transactions sequentially -- transaction 1 finishes, then transaction 2 starts. Global state is a single Merkle trie. Every transaction has access to everything. Simple model, easy to reason about, fundamentally doesn't scale.
Solana's Sealevel runtime is different. It was designed from day one for parallel execution. The key insight: transactions declare their account access upfront. Before a transaction runs, the runtime knows exactly which accounts it will read and which it will write. This lets the scheduler determine which transactions are independent and can run on different CPU cores simultaneously.
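As a mental model (plain TypeScript, no Solana SDK -- the `DeclaredAccess` shape and `isIndependent` helper are illustrative, not a real API), the independence check the scheduler can perform boils down to set intersection on the declared access lists:

```typescript
// Hypothetical model of a transaction's declared account access. Real
// Solana transactions carry this in the message's account list, with each
// account flagged as writable or read-only.
interface DeclaredAccess {
  readonly: Set<string>; // account addresses read but never modified
  writable: Set<string>; // account addresses that may be modified
}

// Two transactions can run on different cores simultaneously when neither
// one writes an account the other touches (reads or writes).
function isIndependent(a: DeclaredAccess, b: DeclaredAccess): boolean {
  for (const acct of a.writable) {
    if (b.writable.has(acct) || b.readonly.has(acct)) return false;
  }
  for (const acct of b.writable) {
    if (a.readonly.has(acct)) return false;
  }
  return true;
}
```

Note that a shared read-only account (say, a sysvar both transactions consult) does not create a conflict -- only writes do.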
This is Solana's single biggest architectural advantage over the EVM. And it's massively underutilized.
How account locking works
When a Solana validator processes a block, it doesn't just run transactions in order. The scheduler examines the account access lists of all pending transactions and builds an execution schedule:
- Transactions that only read shared accounts can run in parallel -- multiple readers are safe
- Transactions that write to an account get an exclusive lock -- no other transaction can read or write that account concurrently
- Transactions that access entirely different account sets run on different threads with zero contention
This is essentially a reader-writer lock system at the transaction level. The runtime handles it automatically. Developers don't need to manage locks.
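A toy version of that reader-writer discipline, with per-account locks taken all-or-nothing per transaction (the class and method names are ours, not the validator's -- this is a sketch of the semantics, not the real scheduler):

```typescript
// Toy reader-writer lock table keyed by account address, mimicking how the
// runtime grants per-account locks to transactions. Illustrative only.
class AccountLockTable {
  private readers = new Map<string, number>(); // account -> active reader count
  private writers = new Set<string>();         // accounts under exclusive write lock

  // Try to take every lock a transaction needs; all-or-nothing.
  tryAcquire(reads: string[], writes: string[]): boolean {
    for (const acct of reads) {
      if (this.writers.has(acct)) return false;            // writer excludes readers
    }
    for (const acct of writes) {
      if (this.writers.has(acct)) return false;            // one writer at a time
      if ((this.readers.get(acct) ?? 0) > 0) return false; // readers exclude a writer
    }
    for (const acct of reads) {
      this.readers.set(acct, (this.readers.get(acct) ?? 0) + 1);
    }
    for (const acct of writes) this.writers.add(acct);
    return true;
  }

  release(reads: string[], writes: string[]): void {
    for (const acct of reads) {
      const n = (this.readers.get(acct) ?? 1) - 1;
      if (n <= 0) this.readers.delete(acct);
      else this.readers.set(acct, n);
    }
    for (const acct of writes) this.writers.delete(acct);
  }
}
```

Two read-only transactions acquire the same account concurrently; a would-be writer has to wait until both release it.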
The problem is that this parallelism only works across transactions. Instructions within a single transaction still execute sequentially. And most developers pack everything into a single transaction.
The single-transaction habit
Ask a Solana developer how they build a multi-step operation -- say, swapping tokens then staking the output. Almost everyone will create one transaction with two instructions:
const tx = new Transaction();
tx.add(swapInstruction);   // runs first
tx.add(stakeInstruction);  // waits, then runs
await sendTransaction(tx);
This is correct. It works. But it's sequential by construction. Even if the swap and the stake touch completely different accounts (they don't in this case, but imagine a scenario where they do), they'll run one after another because they're in the same transaction.
The reason developers do this is atomicity. If the stake fails, the swap rolls back. That's important. But it comes at the cost of parallelism. Every operation in the transaction waits for the previous one to complete.
When atomicity isn't needed
Not every multi-step operation requires atomic execution. Consider:
- Creating an associated token account (ATA) + performing a swap -- The ATA creation doesn't need to be atomic with the swap. If the ATA creation fails, you just retry. If it succeeds but the swap fails, you're left with an empty ATA. No funds at risk.
- Multiple independent swaps -- Swapping USDC → SOL and swapping USDT → RAY on different pools. These touch entirely different accounts. They can and should run in parallel.
- Fetching pool state + computing routes -- Read-only operations that don't modify state. No conflict with anything.
In these cases, splitting operations across multiple transactions and submitting them concurrently gives you free throughput from Sealevel's parallel execution. The runtime does the hard work. You just need to structure your transactions correctly.
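The submission pattern is ordinary concurrent promises rather than chained awaits. The sender functions below are hypothetical placeholders for "build, sign, and send one transaction":

```typescript
// Submit independent transactions together instead of awaiting them one by
// one. Each sender stands in for building, signing, and sending a single
// transaction via your RPC client; the names here are placeholders.
async function submitConcurrently<T>(
  senders: Array<() => Promise<T>>,
): Promise<PromiseSettledResult<T>[]> {
  // allSettled: one failed transaction does not abort the others -- exactly
  // the non-atomic semantics we accepted by splitting the operation up.
  return Promise.allSettled(senders.map((send) => send()));
}
```

Usage looks like `await submitConcurrently([sendUsdcToSolSwap, sendUsdtToRaySwap])`: both transactions hit the network at once, and Sealevel is free to execute them on different cores.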
The Jito bundle advantage
Jito bundles give you a middle ground between single-transaction atomicity and multi-transaction parallelism. A bundle is an ordered set of transactions that execute atomically as a group -- either all succeed or all revert. But within the bundle, Sealevel can still parallelize transactions that don't share write accounts.
This is powerful. You get the atomicity guarantee (the whole operation succeeds or fails) while Sealevel parallelizes the execution of independent transactions within the bundle. It's the best of both worlds, but only if you structure the bundle correctly.
"Correctly" means: transactions that can be parallel go in the same bundle position. Transactions that must be ordered go in different positions. Getting this right manually is tedious and error-prone. Getting it wrong means you're either losing atomicity or losing parallelism.
The ecosystem gap
Solana has world-class infrastructure: Sealevel for parallel execution, Jito for bundle guarantees, Gulf Stream for transaction forwarding. What's missing is the developer tooling that bridges the gap between "I want to do X" and "here are the optimally structured transactions to do X."
Jupiter gives you optimal swap routing. Jito gives you bundle submission. Helius gives you RPC and webhooks. But nobody gives you optimal transaction structuring -- analyzing your operation, finding the parallelism, and packaging it correctly.
That's the gap we're building IVZA to fill. Not a new chain. Not a new runtime. Just a middleware layer that takes your transaction intent, analyzes the dependency graph, and outputs the maximally parallel execution plan.
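A toy version of that dependency analysis -- every type and name below is hypothetical, not IVZA's actual interface -- groups intents into stages: everything within a stage is mutually independent and can be submitted in parallel, and stages execute in order:

```typescript
// Toy dependency analysis for a transaction-structuring layer. Each intent
// declares the accounts it reads and writes; two intents conflict when
// either one writes an account the other touches. All names hypothetical.
interface TxIntent {
  id: string;
  reads: Set<string>;
  writes: Set<string>;
}

function conflicts(a: TxIntent, b: TxIntent): boolean {
  const writesTouch = (w: Set<string>, other: TxIntent) =>
    Array.from(w).some((acct) => other.writes.has(acct) || other.reads.has(acct));
  return writesTouch(a.writes, b) || writesTouch(b.writes, a);
}

// Layer intents into stages: each intent runs one stage after the latest
// earlier intent it conflicts with, so conflicting pairs keep their order
// while independent intents share a stage.
function planStages(intents: TxIntent[]): TxIntent[][] {
  const stageOf = new Map<string, number>();
  const stages: TxIntent[][] = [];
  for (let i = 0; i < intents.length; i++) {
    let stage = 0;
    for (let j = 0; j < i; j++) {
      if (conflicts(intents[j], intents[i])) {
        stage = Math.max(stage, stageOf.get(intents[j].id)! + 1);
      }
    }
    stageOf.set(intents[i].id, stage);
    if (!stages[stage]) stages[stage] = [];
    stages[stage].push(intents[i]);
  }
  return stages;
}
```

Given two independent swaps and a third intent that reads one swap's pool, the planner puts the swaps in stage one and the dependent intent in stage two -- the "maximally parallel execution plan" from the paragraph above, in miniature.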
Solana gave us the engine. We're building the transmission.