Examining the Based Sequencing Spectrum

Author: Jonas Bostoen
Product: Preconfirmation & Sequencing
Tags: preconfirmations, sequencing
Published: May 8, 2024
Read time: 18 minutes

Abstract: The introduction of based sequencing once again highlights the crucial position of the Ethereum L1 proposer. However, how do we reconcile the high sophistication requirements of providing preconfirmations on rollups with the necessary simplicity of L1 proposers? And if outsourcing sequencing is the way forward, how do we classify a system as based?

Thanks to Artem Kotelskiy, Drew Van der Werff, Mamy Ratsimbazafy, Mads Mathiesen, Justin Drake, and members of the Chainbound team (Nicolas, Francesco, Paul and Lorenzo) for the review. Review ≠ endorsement :)


Based sequencing

Before we get into based sequencing, let’s quickly discuss sequencing as a whole, without diving into too much detail. In short, with the trend towards state fragmentation, the value of atomic composability between rollup/application-specific state and of sovereignty over proprietary state is clear, and the two properties are not mutually exclusive. Effective shared sequencing designs understand this and attempt to marry the two, but this comes with a major drawback: a shared sequencing layer has to rely on its own consensus for transaction ordering and for updating the rollup contract. This completely delegates liveness to the committee operating the shared sequencer. If one is to delegate liveness for the goal of shared sequencing, the natural question is: to whom? Now let’s talk about ‘based’.

Based sequencing is a form of shared sequencing whereby the sequencing of L2 transactions is handled by a subset of the L1 proposers. In its most basic form, any L1 proposer can sequence and include an L2 batch of transactions along with their L1 block proposal. Based sequencing improves upon the external shared sequencing model by allowing sequencing between all registered rollups and the layer 1, without requiring extra consensus. We simply - or as we will come to find, not so simply - rely on the L1 proposer or based sequencer to sequence and settle rollup batches. For example, in an Ethereum context, its hardened and geographically diverse validator set is now your rollups’ sequencer set.

Having the L1 proposer as the sequencer gives us some other nice properties:

  • Liveness backed by the L1 proposer set
  • Simplicity
  • L1 <> L2 synchronous composability [1]

That said, relying on the L1 proposer to also propose L2 blocks introduces a core UX problem: users experience a minimum 12-second transaction confirmation time.

Rollups currently work around this problem via preconfirmations, a property unlocked by centralized sequencers that, as the sole posters of L2 batches to the L1, can guarantee an order on state. This is not the case when L1 proposers (or based sequencers) are rotated, given that each L1 proposer only has a finite window to settle L2 batches. The question then becomes: can we do the same in a decentralized context? How about in a based setting? If yes, how does this impact the role of the proposer? We tackle these questions and more while seeking to understand the full extensibility of the role of a proposer within a preconfirmation network.

Preconfirmations

Preconfirmations are commitments about future execution. Their requirements impose significant constraints on any sequencing design, based or not.

Let’s go over those requirements individually:

  1. Credibility: Because preconfirmations are commitments or promises about future execution, which are only settled when they land in an L2 batch on the L1, they require something to give them credibility. With a centralized sequencer, users trust the sequencer to keep their promises because there’s a trusted organization behind it that could incur reputational damage if they start breaking these promises. We can call this reputational collateral. To achieve the same in a permissionless setting, we require economic collateral. [2]
  2. A leader with a state lock: a preconfirmation system that wants some flexibility in its preconfirmation types, such as preconfirmations that commit to a post-transaction state root, requires a single leader or sequencer at any point in time with a lock over the L2 state. [3] If anyone can update the state at any time, no one is in a position to give credible preconfirmations about that state. [4]
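
To make these two requirements concrete, here is a minimal sketch of what a preconfirmation commitment could carry. The field names are hypothetical and purely illustrative rather than drawn from any live protocol: credibility comes from the slashable collateral standing behind the signature, and the state lock comes from binding the commitment to the leader of a specific slot.

```python
from dataclasses import dataclass


# Hypothetical sketch: field names and types are illustrative, not a spec.
@dataclass
class PreconfCommitment:
    tx_hash: str                 # hash of the user transaction being preconfirmed
    target_slot: int             # L1 slot by which the tx must land in a settled L2 batch
    post_state_root: str | None  # optional: committed post-transaction state root
    sequencer_pubkey: str        # the leader holding the L2 state lock for target_slot
    collateral_wei: int          # slashable (re)staked collateral backing the promise
    signature: str               # sequencer's signature over the fields above
```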

Based Preconfirmations

In a based sequencing system, we rely on the L1 proposer to preconfirm transactions, as they are the sequencer. In this model, both requirements can be met through (re)staking:

  1. L1 proposers can opt-in to additional slashing conditions to assign economic weight to their preconfirmations.
  2. L1 proposers, by opting in to these additional slashing conditions, become eligible to be selected as a sequencer (and preconfirmer) at any point in the future, for example through the L1 proposer rotation schedule. More on the specifics later.
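
A minimal sketch of such an opt-in registry is below. The names and the minimum collateral are assumptions for illustration; in practice this would live inside a restaking contract rather than in application code.

```python
# Hypothetical opt-in registry sketch; names and amounts are illustrative only.
MIN_COLLATERAL_WEI = 1_000_000_000_000_000_000  # assumed 1 ETH minimum, for illustration

opted_in: dict[str, int] = {}  # proposer pubkey -> slashable collateral (wei)


def opt_in(proposer_pubkey: str, collateral_wei: int) -> None:
    """Register a proposer under the additional preconfirmation slashing conditions."""
    if collateral_wei < MIN_COLLATERAL_WEI:
        raise ValueError("not enough collateral to back preconfirmations")
    opted_in[proposer_pubkey] = collateral_wei


def is_eligible_sequencer(proposer_pubkey: str) -> bool:
    """Only opted-in proposers can be elected as based sequencer / preconfirmer."""
    return proposer_pubkey in opted_in
```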

Faults & Slashing

Collateral can only serve as a mechanism for credibility insofar as it can be slashed. [5] A preconfirmation commitment can act as a slashing device for the issuer’s collateral in case it was not honored on-chain (a fault). As in the original proposal, we outline two types of faults:

  • Safety faults: The transaction settled on-chain was not executed as promised.
  • Liveness faults: The preconfirmation commitment was not honored because the based sequencer failed to include it in a block.
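
These two fault types are easy to express against what actually settled on the L1. The sketch below is illustrative only, with hypothetical inputs standing in for whatever data a fault-proof system would actually provide:

```python
from enum import Enum, auto


class Fault(Enum):
    NONE = auto()
    SAFETY = auto()    # tx settled on-chain, but not executed as promised
    LIVENESS = auto()  # tx never made it into a settled batch


def classify_fault(committed_tx: str, promised_state_root: str | None,
                   settled_txs: set[str],
                   settled_state_roots: dict[str, str]) -> Fault:
    """Illustrative fault classification against what actually settled on the L1."""
    if committed_tx not in settled_txs:
        return Fault.LIVENESS
    if promised_state_root is not None and \
            settled_state_roots.get(committed_tx) != promised_state_root:
        return Fault.SAFETY
    return Fault.NONE
```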

In a world where block building is outsourced to third parties, holding based sequencers responsible for not including a certain transaction requires an additional mechanism: some type of forced inclusion that the based sequencer can rely on to get any transactions they committed to into the block. If we don’t have this, proposers opting into this system would have to build their own blocks, which is often not economical. This forced inclusion mechanism could potentially be implemented out-of-protocol through a modified MEV-boost sidecar.

The actual mechanics of the slashing device are left as an exercise to the reader. A good starting point could be using protocols like Axiom or Relic to generate and submit fault proofs along with the preconfirmation commitment. For example, the implementation by @cairoeth leverages a challenge mechanism where the sequencer can answer any challenge by submitting a Relic transaction inclusion proof. This mechanism addresses slashing for liveness faults. Proving safety faults will be more complex because they involve state, but it should be achievable through similar mechanisms.
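
One possible shape for the liveness-fault part of such a challenge game is sketched below. The window length and names are assumptions for illustration, not taken from the implementation referenced above.

```python
import time

CHALLENGE_WINDOW_SECS = 7 * 24 * 3600  # assumed one-week window, purely illustrative


class Challenge:
    """Minimal liveness-fault challenge game: slash unless an inclusion proof arrives in time."""

    def __init__(self, commitment_id: str, collateral_wei: int):
        self.commitment_id = commitment_id
        self.collateral_wei = collateral_wei
        self.opened_at = time.time()
        self.resolved = False

    def answer(self, inclusion_proof_valid: bool) -> str:
        # The sequencer responds with a transaction inclusion proof
        # (e.g. generated via Relic or Axiom, as mentioned above).
        if inclusion_proof_valid:
            self.resolved = True
            return "challenge dismissed, collateral untouched"
        return "invalid proof, challenge still open"

    def expire(self) -> str:
        # Anyone can settle the challenge once the window elapses without a valid answer.
        if not self.resolved and time.time() - self.opened_at > CHALLENGE_WINDOW_SECS:
            return f"slash {self.collateral_wei} wei from the sequencer"
        return "challenge not yet expired"
```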

Based Sequencer Election

Before discussing sequencer elections, let's recap the responsibilities of the based sequencer offering preconfirmations:

  1. Accepting preconfirmation requests from users, simulating state changes, and responding with preconfirmation commitments.
  2. Updating the L2 state contract on L1 with a batch that conforms to all the commitments made.
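
Put together, a based sequencer’s job over one sequencing window might look roughly like the following sketch; the callables are placeholders for real client components, not an actual API.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class PreconfRequest:
    tx: bytes          # raw user transaction
    max_fee_wei: int   # what the user pays for the preconfirmation


def run_sequencing_window(requests: list[PreconfRequest],
                          simulate: Callable[[bytes], bool],
                          sign_commitment: Callable[[bytes], str],
                          post_batch_to_l1: Callable[[list[bytes]], None]) -> list[str]:
    """Illustrative loop over one sequencing window: preconfirm first, settle at the end."""
    commitments, included_txs = [], []
    for request in requests:
        # 1. Simulate the transaction against the current (unsafe) L2 state.
        if not simulate(request.tx):
            continue  # reject requests that don't simulate cleanly
        # 1b. Respond with a signed commitment the user can later use to slash us.
        commitments.append(sign_commitment(request.tx))
        included_txs.append(request.tx)

    # 2. At the end of the window, settle a batch on the L1 that honors
    #    every commitment made, updating the L2 rollup contract.
    post_batch_to_l1(included_txs)
    return commitments
```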

The sequencer election rule within a based context is quite simple:

At any point in time, the current based sequencer is the next L1 proposer in the lookahead that has opted in through restaking.
[Figure: Based sequencing election rule]
Proposer Lookahead: For this design to work effectively, someone in the proposer lookahead must always be capable of issuing preconfirmations. Since the lookahead technically only covers the current epoch, the chance that such a proposer exists dwindles as we progress through the epoch. Therefore, one drawback of this design is that we require a large portion of L1 proposers to opt in for the best UX.
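
The election rule is straightforward to express in code. The sketch below assumes access to the beacon chain proposer lookahead and an opted-in set like the hypothetical registry sketched earlier.

```python
def elect_based_sequencer(lookahead: list[str], opted_in: set[str]) -> str | None:
    """Return the next opted-in L1 proposer in the lookahead, if any.

    `lookahead` is the ordered list of proposer pubkeys for the remaining slots of
    the current epoch; `opted_in` is the set of proposers that have restaked under
    the preconfirmation slashing conditions.
    """
    for proposer in lookahead:
        if proposer in opted_in:
            return proposer
    return None  # no eligible sequencer left this epoch: no preconfirmations available
```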

Based sequencing (in reality)

With an understanding of based sequencing and preconfirmations, let us backtrack a little bit and address how based sequencing actually works right now. To do that, we can examine Taiko.

In Taiko’s based sequencing model, the L1 proposer is largely unaware of the L2 and does not construct L2 batches. Taiko has a separate L2 mempool, where searchers compete in a just-in-time (JIT) auction to build the most valuable L2 batch. This process resembles how PBS works on the L1. These searchers submit their batches, with bids, to an L1 block builder, who then selects the most valuable L2 batch to include in their block. This approach maintains compatibility with existing infrastructure, since it leverages the sophistication of the MEV supply chain while keeping the involvement and sophistication of L1 proposers relatively low.
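
As a rough sketch of that JIT auction (names are hypothetical), the L1 block builder simply takes the highest-paying L2 batch it has seen by the time it seals its block:

```python
from dataclasses import dataclass


@dataclass
class L2BatchBid:
    searcher: str
    batch_calldata: bytes
    bid_wei: int  # payment to the L1 block builder for including this batch


def select_winning_batch(bids: list[L2BatchBid]) -> L2BatchBid | None:
    """JIT auction: the builder takes the most valuable L2 batch seen so far."""
    return max(bids, key=lambda b: b.bid_wei, default=None)
```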

[Figure: Based rollup supply chain in reality (inspired by Taiko’s system architecture)]

What we can learn from this is two-fold:

  • The L1 proposer does not actively participate in the sequencing of the based rollup
  • The L2 block builders participate in a JIT auction, which means that we never know in advance who will build the winning block
Note on MEV: This model has the negative side effect of L2 block builders competing to extract as much MEV as possible, just like PBS on L1. An added benefit of having a single sequencer with preconfirmations is that you can (a) have private mempools, and (b) limit the time window in which the sequencer can reorder or insert transactions, thanks to the low-latency requirement of preconfirmations.

Incompatibilities

Before continuing, we’d like to clarify why preconfirmations are incompatible with how based sequencing works in reality. A system that supports preconfirmations needs a single sequencer or preconfirmer, known ahead of time, that has a lock over the L2 state. This is fundamentally at odds with how based sequencing works in the Taiko model: the “sequencer” auction is just-in-time and anyone can participate, meaning no one has a lock on the L2 state until the auction winner is determined at block proposal time.

|  | Based sequencing | Based preconfirmations |
| --- | --- | --- |
| Sequencer election | Just-in-time (auction) | Ahead of time |
| L2 state access | Anyone can update (no lock) | Only the sequencer can update (lock) |

Proposer Sophistication

Introducing preconfirmations to a based rollup requires moving away from a JIT auction model and necessitates heavy involvement from the L1 proposer. They go from being a passive monopolist to being an active monopolist, albeit at a cost - sophistication:

  • The L1 proposer must be sufficiently technically sophisticated to handle the associated networking and computational load and run L2 nodes. Preconfirming in a sophisticated manner is akin to block building in a more continuous time context.
  • The L1 proposer needs to be sufficiently economically sophisticated to handle the pricing of preconfirmations.

To get more specific, generating priced preconfirmations from diverse search spaces is a non-trivial task that requires the proposer to have high computational resources and technical sophistication. In addition, preconfirmation commitments only carry credibility up to the restaked amount of the proposer issuing them; proposers need to deposit more collateral if they want to put more economic weight behind their commitments. The sequencing value (yield) has to compensate for both the capital cost of stake and the technical sophistication. In reality, only a small subset of L1 proposers are likely up for this task.
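
The collateral constraint, at least, can be stated precisely: the value at risk across all outstanding commitments should never exceed the proposer’s slashable stake. A minimal, hypothetical bookkeeping sketch:

```python
class ExposureTracker:
    """Tracks how much of a proposer's slashable stake already backs outstanding preconfs."""

    def __init__(self, restaked_collateral_wei: int):
        self.collateral_wei = restaked_collateral_wei
        self.outstanding_wei = 0

    def can_commit(self, preconf_value_wei: int) -> bool:
        # A new commitment is only credible if enough unreserved collateral remains.
        return self.outstanding_wei + preconf_value_wei <= self.collateral_wei

    def commit(self, preconf_value_wei: int) -> None:
        if not self.can_commit(preconf_value_wei):
            raise ValueError("commitment would exceed slashable collateral")
        self.outstanding_wei += preconf_value_wei

    def settle(self, preconf_value_wei: int) -> None:
        # Once the preconfirmed tx lands in a settled batch, the exposure is released.
        self.outstanding_wei -= preconf_value_wei
```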

The UX of a based preconfirmation system heavily relies on the number of opted-in proposers, making the aforementioned based preconfirmation design somewhat untenable. This becomes more evident in scenarios where preconfirmation transactions involve both L1 and L2 states, which are only possible if the sequencer has a lock over both states (i.e., only the proposer proposing the next L1 block) [6]. Here, you need an even greater percentage of proposer opt-in.

The high-level issue of proposer sophistication results in some subset of sophisticated L1 proposers earning outsized returns, which becomes a centralization vector. This is precisely why proposer-builder separation was initially introduced. We once again find friction between the L1 proposer having a full monopoly over advancing the state of the chain and needing to keep their requirements low enough to support a decentralized validator set.

Proposers are the only entities that can credibly hold the state lock, yet based preconfirmations are fundamentally a ‘big-node’ job. So where does that leave us, and how do we build a preconfirmation solution that is both effective and viable?

The Road Ahead

Fortunately, this problem has been recognized, and a line of research has surfaced to address it. In general, most directions focus on separating validation, a task with low requirements, from state progression, a task with high requirements (this is in line with Vitalik’s endgame vision).

Execution Tickets & APS

Execution tickets introduce an in-protocol market for purchasing block proposal rights in the form of a lottery. Essentially, the role of the L1 validator is split into two distinct roles: the execution proposer and the beacon proposer. Execution proposers are responsible for proposing the L1 block payload and win these rights through the lottery. They are overseen by beacon proposers and attesters, who vote on block validity and somewhat constrain the execution proposer through mechanisms like inclusion lists. This guarantees a reasonable amount of censorship resistance, even under likely centralized block production.


One important concept to note is that this lottery assigns proposal rights for future slots, which means that the execution proposers are known beforehand. This satisfies one of the requirements of a preconfirmation system: a leader that’s known in advance!
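
To illustrate why advance assignment matters, here is a toy lottery, not the actual execution tickets mechanism, that assigns proposer rights a fixed number of slots into the future, so the leader for each slot is known well ahead of time:

```python
import hashlib

LOOKAHEAD_SLOTS = 32  # assumed assignment distance, purely illustrative


def draw_execution_proposer(ticket_holders: list[str], current_slot: int,
                            randomness: bytes) -> tuple[int, str]:
    """Assign the execution proposer for a future slot from the set of ticket holders."""
    target_slot = current_slot + LOOKAHEAD_SLOTS
    seed = hashlib.sha256(randomness + target_slot.to_bytes(8, "big")).digest()
    winner = ticket_holders[int.from_bytes(seed, "big") % len(ticket_holders)]
    # Because the draw happens LOOKAHEAD_SLOTS in advance, `winner` can start
    # issuing preconfirmations for `target_slot` immediately.
    return target_slot, winner
```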

Another requirement was credibility. The execution proposer deposits some collateral for the block they intend to propose, which can be slashed when they violate consensus rules. Additionally, this collateral can be re-used for preconfirmation credibility through some form of restaking, or perhaps by then, they will be able to use some manifestation of PEPC.

Execution tickets are a form of attester-proposer separation (APS). There are other designs as well, like the ones outlined in this article by @barnabemonnot or this one by @ConorMcMenamin9.

Outsourced Sequencing

However, even though execution tickets fulfil all the requirements for based preconfirmations, we don’t know when (and if) they will be implemented in-protocol.

Can we achieve the same out-of-protocol for external sequencers?

In our opinion, the path forward here is to re-use the pattern that PBS introduced: allow based sequencers to outsource sequencing to a sophisticated third party by fully selling or delegating their sequencing rights. A basic but easy-to-overlook property is that the proposer would still be involved in the process: including a sequenced batch of transactions is a joint effort between the sequencer and the L1 proposer. So we’re merely “downgrading” their role to just being a proposer of the sequenced block, which we believe to be more aligned.
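
Concretely, this could take the shape of a signed delegation message from the opted-in proposer to an external sequencer for a given slot. The message fields below are hypothetical, a sketch rather than a spec.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class SequencingDelegation:
    """Hypothetical message by which a based proposer delegates sequencing for one slot."""
    proposer_pubkey: str     # the opted-in L1 proposer selling/delegating its rights
    sequencer_pubkey: str    # the sophisticated external sequencer taking over preconfs
    slot: int                # the L1 slot this delegation covers
    payment_wei: int         # what the sequencer pays (or shares) for the rights
    proposer_signature: str  # binds the proposer to the delegation


def is_authorized_sequencer(delegation: SequencingDelegation, sequencer: str, slot: int,
                            verify_sig: Callable[[str, str], bool]) -> bool:
    """The L2 accepts preconfirmations for `slot` only from the delegated sequencer."""
    return (delegation.sequencer_pubkey == sequencer
            and delegation.slot == slot
            and verify_sig(delegation.proposer_pubkey, delegation.proposer_signature))
```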

Advantages:

  • Higher participation rate — if the requirements for opting in as an L1 proposer are much lower and it generates additional revenue, many L1 proposers will choose to participate. This opens the door to more effective L1 <> L2 interoperability and better preconfirmation UX overall.
  • Higher economic credibility — requiring the sequencer to put up more collateral than what we can impose on an unsophisticated L1 proposer will increase the credibility of any preconfirmations made, without requiring additional consensus for preconfirmation safety.
  • Native token utility — rollups can tokenize block space scarcity by requiring sequencers to stake their token (Fuel Labs discussed this here). An increased demand for block space will increase fees and MEV, which will, in part, go to the sequencer. This leads to two indirect benefits: (1) an increased token demand for the right to collect this value, and (2) increased sequencer competition (the more transactions they can sequence, the more revenue they can earn).

Disadvantages:

  • Liveness defined by proposer and sequencer — we split liveness properties between the L1 proposer and the outsourced sequencer. We can still use escape hatches to deal with issues of liveness and censorship resistance, but it raises an interesting question: when does a system stop being based?

What is Based?

There are a couple of properties that are very important when discussing rollups and (based) sequencing. We will limit ourselves to these two: liveness and finality.

Liveness

Liveness refers to a system's ability to continue processing transactions and updating the L2 state even in the presence of faults (malicious or not).

Someone always ensures liveness: for rollups today, it’s mostly the centralized sequencer. However, fallback mechanisms like escape hatches are in place, in which case liveness falls back to the L1.

When we consider decentralizing the sequencer, we must consider its liveness properties.

Finality

Rollup finality is the point in time at which a transaction is considered irreversible and part of the canonical rollup state.

One would think that finality on Ethereum-secured rollups would be the same as Ethereum’s (plus whatever time the proving mechanism introduces), but this is rarely the case in practice. Rollup sequencers can offer faster finality if you, as a user, trust them not to equivocate on their preconfirmations. Even if the Ethereum L1 reorgs, the sequencer will guarantee the same settlement order, thereby acting as a trusted finality gadget for L2 transactions.

However, based rollups do share finality with the Ethereum L1. For example, Taiko references the previous Ethereum L1 block hash, meaning it must reorg with the L1. We can say that Taiko inherits finality from the L1. This is a pattern we will see with any form of L1 <> L2 composability: if an L2 transaction is conditional on some L1 state, it cannot be considered final until that L1 state is finalized.
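
That conditionality is simple to state: under the assumptions above (an L2 batch that references an L1 block hash), an L2 transaction is only final once both the L1 block it references and the L1 block carrying its batch are finalized. A toy check:

```python
def l2_tx_is_final(referenced_l1_block: int, batch_settlement_l1_block: int,
                   l1_finalized_height: int) -> bool:
    """An L2 tx conditional on L1 state is final only when that L1 state can no longer reorg."""
    return max(referenced_l1_block, batch_settlement_l1_block) <= l1_finalized_height
```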

As mentioned before, shared sequencing layers run their own consensus protocols, which may also include a finality gadget. This means they can offer faster finality than the L1 the rollup settles on, but the liveness is delegated to a committee.

Tradeoff space

An inherent benefit of vanilla based sequencing is that the L1 guarantees liveness.

In this article, we argued that L1 proposers are not sophisticated enough to offer preconfirmations today and proposed outsourcing this to a third party. This third party can either be a monopolistic sequencer for a time window (no consensus) or a BFT committee running a consensus protocol.

Let’s take a look at the tradeoff space of each:

| Sequencing Mode | Preconfirmations | Liveness | Finality |
| --- | --- | --- | --- |
| Vanilla Based Sequencing | Unlikely | L1 | L1 |
| External Single Sequencer | Yes | External Single Sequencer | L1 |
| External Committee | Yes | Committee | Finality Gadget |

With this in mind, we think examining the property of ‘based-ness’ on a spectrum is better. A system that outsources sequencing and validation (external consensus) is less “based” than one that outsources just sequencing. For example, a rollup that relies on an external consensus set for finality is less based than a rollup that re-orgs with the L1. With that said, we subscribe to the belief that it is still fair to view the former system as based, as the L1 proposers still have agency on whether or not to sell their block space.


In our vision of based, decentralized sequencing, we require active collaboration between the outsourced sequencer and the based proposer. The sequencer is responsible for preconfirming or sequencing L2 batches, which they hand off to the based proposer for inclusion.

Given that the L2 here relies on the L1 for canonical ordering and finality, the sequencer needs guarantees from the proposer that their batch will be included (especially at the end of a sequencing window). If this were not the case, it would be trivial for the subsequent sequencer to reorg the unsafe L2 state of the previous sequencer, because that state was not yet settled on the L1. This would result in the previous sequencer being slashed for a safety fault. In practice, this will look like a credible proposer commitment about inclusion.
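
One way to picture that handoff, again with hypothetical message types: the sequencer only hands off its batch, and treats its own preconfirmations as safe, once it holds a signed inclusion commitment from the proposer for that slot.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class InclusionCommitment:
    """Hypothetical proposer commitment: 'I will include this exact batch in my slot.'"""
    proposer_pubkey: str
    slot: int
    batch_hash: str
    proposer_signature: str


def safe_to_hand_off(batch_hash: str, slot: int, commitment: InclusionCommitment,
                     verify_sig: Callable[[str, str], bool]) -> bool:
    """The sequencer's unsafe head is protected only if the proposer is bound to include it."""
    return (commitment.batch_hash == batch_hash
            and commitment.slot == slot
            and verify_sig(commitment.proposer_pubkey, commitment.proposer_signature))
```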

We are actively working on this problem and are excited to share more on it soon. As always, if this is of interest to any builders or researchers, please reach out!

Footnotes

  1. A caveat about composability: any L2 transactions that are conditional on L1 state are possible today (given that the L2 reorgs with the L1 and thus shares finality with it)! However, for the other direction of composability, the L2 needs to prove to the L1 at sequencing time that its state is correct, which requires real time proving. h/t to Justin!
  2. Some readers have wondered if validity proofs could help with preconfirmation credibility. However, a validity proof can only prove whether a given computation was executed correctly. It cannot help in scenarios where we need to verify things like ordering, which is a subjective statement.
  3. Except through the escape hatch.
  4. This changes if you add in a consensus protocol, but that again brings up the issues of latency and liveness.
  5. This is simplified. We understand that there is implicit slashing in dual-staking models.
  6. Something like state lock auctions could soften this requirement somewhat.