Scroll Sequencer Fees fund Competitive Grants Program

Title: Scroll Sequencer Fees fund Competitive Grants Program

Name: Matt UI369 - not a delegate

Primary Categories: Grant Programs, Talent Attraction
Secondary Categories: Venture Studios, Treasury Management

Your idea (in no less than 4 sentences).

Have Scroll dedicate a percentage of sequencer fees to a competitive grants program with the goal of discovering and funding “New Meta” - breakout project ideas that draw bridging activity and sell blockspace. Have competing teams of allocators operate like miniature “venture studio + Scroll devrel shops”, each competing to attract and cultivate the best builder talent and the most promising up-and-coming projects (aka New Meta) to build “Scroll-First Apps” - apps that are exclusive to Scroll, at least for some period of time.

Then, set up a vote at the end of each funding cycle to review metrics and assess results - the best allocators are given a bigger chunk of the pool. Particularly poor performers can be removed and replaced by new, hungrier teams.

How will this idea help ecosystem growth (in no less than 4 sentences; at least one example of how this has gone in other ecosystems should be included).

All of these competing allocator teams are incentivized to find and fund projects that will have people bridge and buy blockspace. That increases sequencer fees which, in this model, grows the pot for everybody involved. So even though they’re competing with each other to find the best New Meta, everyone benefits from community successes because they drive sequencer revenue, which grows the pie for everyone. This uses competition to drive innovation & excellence and collaboration to drive overall program results.

The vote/review at the end of each cycle creates visibility, accountability and community involvement, which is great for ecosystem engagement and growth. Using TCR voting would create utility for SCR, and it offers a fun and accessible path to interact with Scroll governance.

Examples

This is similar to how “Onchain Summer” worked when it drew so much attention to Base. Base combined talented devrel personalities with funding, and that combination magnetized developer attention.

This competitive model is currently used by Gitcoin in GG23. They contracted Grant Ships to set it up for them - elected judges use an onchain voting rubric tool to choose the top 6 Community Rounds from a pool of applicants. These rounds receive matching funds and compete with each other to collect donations and allocate their matching funds. An AI-assisted community vote determines top performers, which will be rewarded with top slots in GG24, potentially securing additional matching funds.

It was used to fund the “Web3 Gaming on Arbitrum” grants program, again through Grant Ships, with good results as a milestone-based grants program - especially compared to other Arbitrum grant programs active at that time, which experienced long delays and high levels of user frustration. It was a lean program with a smooth user experience, high visibility (everything onchain), and community engagement (TCR votes with public review comments), which led to a high milestone completion rate and high marks from recipients (source).

Required budget for the idea in SCR.

130,000 SCR estimated for a custom dashboard or Grant Ships implementation

Or, negotiate a deal for a cheaper build, but use Vitalik’s “Fair Fees” model described here by Owocki to distribute a portion of sequencer fees to the platform provider and capital allocators:

Fair Fees Model (TL;DR: “if projects get $N, builders get $max(sqrt(1000 * N), N * 0.01)”)
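To make that formula concrete, here is a tiny sketch of the payout rule as quoted above (the dollar amounts below are made-up examples, not figures from this proposal):

```python
# Sketch of the "Fair Fees" rule quoted above: if projects receive $N,
# the builders/allocators receive max(sqrt(1000 * N), 0.01 * N) dollars.
# The sample amounts are illustrative only.
import math

def fair_fee(project_funding_usd: float) -> float:
    """Builder/allocator fee for a given amount of project funding."""
    return max(math.sqrt(1000 * project_funding_usd), 0.01 * project_funding_usd)

for n in (10_000, 270_000, 10_000_000):
    print(f"projects get ${n:,} -> builders get ~${fair_fee(n):,.0f}")
# projects get $10,000 -> builders get ~$3,162
# projects get $270,000 -> builders get ~$16,432
# projects get $10,000,000 -> builders get ~$100,000
```

Note how the square-root term dominates at small funding levels and the 1% term takes over once the pool is large, so allocator pay scales with, but never outruns, the capital they move.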

Sequencer fees:
Scroll collected ~$22,000 in sequencer fees in March. If that held steady and 100% of it were piped into a grants program, that would be roughly $130,000 in grants every 6 months.

As sequencer fees increase, more funding would become available (which, think about it, is a damn exciting flywheel for builders to get behind. Just promising to share sequencer fees as grants would grab attention).

If sequencer fees are too low, this program could be supplemented by treasury funds at first. A good goal would be $270,000 in grants every 6 months (900,000 SCR with SCR at $0.30). This would fund 3 competing programs with 300K SCR each - enough for each team to pay themselves and fund several promising projects per cycle.
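For concreteness, a rough back-of-envelope for those figures (the fee level and SCR price are the assumptions stated above, not commitments):

```python
# Back-of-envelope for the budget figures above. The monthly fee level and
# SCR price are assumptions from this post, not guarantees.
monthly_fees_usd = 22_000                       # ~March sequencer fees
fees_per_cycle = monthly_fees_usd * 6           # ~$132,000 per 6-month cycle

target_pool_usd = 270_000                       # target grant pool per cycle
scr_price_usd = 0.30
target_pool_scr = target_pool_usd / scr_price_usd    # 900,000 SCR

ships = 3
per_ship_scr = target_pool_scr / ships               # 300,000 SCR per Ship

treasury_top_up_usd = target_pool_usd - fees_per_cycle  # ~$138,000/cycle if fees stay flat
print(fees_per_cycle, target_pool_scr, per_ship_scr, treasury_top_up_usd)
```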

Who would need to be involved.

  • A representative or committee from Scroll that can make initial program decisions, like what % of sequencer fees to allocate, how to coordinate a community vote or other feedback loop, and how to choose the initial allocator teams.

  • 3 or more teams of allocators willing to compete - can be assigned or elected.

  • Voters/Judges/Reviewers to assess performance of teams at the end of each cycle.

  • 1-2 Facilitators to administer the program & handle operations.

  • Ideally the Grant Ships team since there is already a platform for this, but you could roll your own.

Additional reading on Scroll, “New Meta” & Grant Ships:

Grant Ships FAQ:

Note:
Full disclosure, I am part of the DAO Masons, the team that built Grant Ships. I tried to frame this as an idea that you can pick up and run with on your own if you want - and of course we are offering a service that can set all of this up for you. We believe this competitive/pluralistic model is the way to go when it comes to grants and we think Scroll could use it to gain a competitive advantage on the L2 scene.


This proposal seems to be based on a confusion between venture studios and scout/grant programs. These are very different things. I’d invite you to read up on venture studios, as they’re a VERY different model from what you’re proposing here.

Hi Daniel, this is more meant as “take inspiration from venture studios” than suggesting that any team should behave exactly like a venture studio. In fact, this mechanism is totally agnostic on the actual approach taken by the individual allocator teams so long as they find and fund effective New Meta and correspondingly perform well in the assessment rounds.

I personally think that stellar devrel programs, venture studios and scout/grant programs could have a lot in common - even though they are usually handled quite differently. What do you think?

so in essence, what you’re proposing is simply having multiple types of builder support programs?

The invitation is for Scroll to adopt the Grant Ships framework to run a competition among multiple allocation teams - each free to operate autonomously according to their own internally sourced strategy. In the Grant Ships framework we call these teams “Ships” and they operate within a governance ruleset, enforced by smart contracts, that includes assessment tied to allocation ratios in concurrent rounds.

There’s more info in the documents linked above if you’d like a better understanding of the idea.

Here’s an explanation of the Grant Ships mechanism on the Allo site: Grant Ships | Allocation Mechanism

I don’t see what Grant Ships adds in terms of value, tbh. It seems designed for grant programs like those in 2022-23, but less useful for investment programs and the like.

I do generally prefer competition among capital allocators. Generally speaking, an absence of competition increases walled garden behavior and doesn’t drive budget or output efficiency. I’m curious what the broader sentiment is from the community. That said, I don’t think sequencer fees are the right funding source for something like this. I’d rather see fees go toward directly supporting token holders while grants and funding seems more the domain of treasury funds.

That’s interesting, because we designed it specifically to address the shortcomings of grants programs popular around that time. I think if you dig into the benefits of competition in general you’ll start to see it.

edit: Also the “venture studios” part may still be throwing you off. It may be better to drop that concept from the proposal and just think of these as competing grant allocation programs, including some that may be inspired by best practices from venture industries but not necessarily. The idea is to allow space for innovation among those teams and hopefully see them come up with effective models we couldn’t have designed or predicted up front.

Yes, walled garden behavior is a great way to put it. That’s a pain point we address directly - and unfortunately one that we see DAOs recreate repeatedly, especially in the early stages. The thinking is fine on the surface, but goes something like “Let’s form a monolithic grants council made up of elected individuals and hope they assemble into a functional team that makes great allocation decisions” - unfortunately that’s not what happens and it’s well documented by this point.

Sequencer fees are an interesting one, because they’re simultaneously a funding source and a key success metric. If good grant allocation decisions are being made then those fees should be going up - and if that’s also how the grants program is funded then grant funding goes up too. It would of course be possible to link funding levels to the sequencer fee metric and fund from the DAO treasury, rather than piping it in directly as I suggested originally.

I’d like to hear your thoughts on using fees also to support token holders. We’ve considered models that use a portion of sequencer fees for token buybacks or even token burns - what do you have in mind?

The core idea with the sequencer fee flywheel is to use the flow of fees to benefit participants so they are collectively incentivized to drive the flywheel. With a competitive model you would see people really working to drive that flywheel and pump sequencer fees, and that benefits the DAO. If that could also benefit token holders I think you could see some really amazing energy build behind this initiative.

Thanks for the detailed RFI, Matt @UI369. Great to see so many references to examples from other ecosystems.

A couple of questions:

  1. While the model is competitive, are there mechanisms or incentives in place to encourage collaboration across Ships (e.g., co-funding, resource sharing) when aligned? Or would you recommend that aligned ideas operate as a single ship?

  2. How do you see the exclusivity period being enforced in practice? Would it be formalized through agreements, onchain mechanisms, or just encouraged culturally?

  3. It is mentioned that strategic voting was an issue. What are your thoughts on ways to reduce this in future rounds - maybe blind voting, separating voting from rewards, etc?

  4. It’s great that voters had the option to leave written feedback, but it sounds like engagement there was mixed. Are there any changes you’d suggest to encourage more detailed, high-signal responses - like requiring short comments or rewarding thoughtful feedback?

  5. Additional thing that stood out in the Pilot Retrospective document was the gap between those who had useful, firsthand context and those who were actually able to vote - since many didn’t hold ARB. Could you share your thoughts on how to better capture signal from high-context contributors who might not have governance tokens?

  6. For now, it would make more sense to request a budget from the DAO rather than relying on sequencer fees.

Overall, thank you so much for putting this idea forward - it’s clear a lot of thought went into it. It would also be great to get a sense of the expected cost for the service you’re proposing. Looking forward to hearing more.

which are? I’m not seeing the value proposition…

The issue is thinking in terms of grants, which is largely a poor ROI format (except for a few narrow use cases). That was fine for 2021-23, but we should be moving on from there.

Hi Jamilya, thanks for the great questions. Here are some hopefully great answers. Let me know if you have more.

While the model is competitive, are there mechanisms or incentives in place to encourage collaboration across Ships (e.g., co-funding, resource sharing) when aligned? Or would you recommend that aligned ideas operate as a single ship?

Regarding incentives for collaboration – Grant Ships encourages innovation and experimentation. Its main function is to reinforce effective patterns over time through quality assessment combined with iteration: Perform > Assess > Adjust (repeat). If it turns out that Ships that collaborate or develop co-funding models are more effective (and the assessment results reflect that), expect those kinds of patterns to amplify over time.

Grant Ships has been called an “evolutionary container for your DAO”. It is largely silent about the “how” of each Ship’s allocation strategy, though Scroll would provide guidelines and parameters on the “what” that all the Ships operate within (e.g. fund Scroll-first applications, look for projects likely to drive sequencer revenue etc.).

We try not to guess which strategies will work up front - and instead rely on empowering innovative teams that experiment and “evolve” their programs into the best shape for a particular environment. We call this a meta-governance system. We provide the framework the Ships operate within, and each Ship is free to govern itself how it likes.

Another key point here is our recommendation to tie overall program funding levels to ecosystem-wide metrics. While we suggested using sequencer fees in the RFI, we can also incorporate other metrics and closely monitor recently bridged user traffic to better understand intent of new users. Are these projects actually bringing new users and driving the flywheel? Can this program take credit for rising stats? If so, then we increase overall program funding. This approach aligns program funding with shared success measures and helps put all the Ships on the same team — “a rising tide lifts all Ships”.

How do you see the exclusivity period being enforced in practice? Would it be formalized through agreements, onchain mechanisms, or just encouraged culturally?

It likely isn’t feasible to enforce exclusivity directly, but we can require funding recipients to agree to it with a checkbox agreement or onchain attestation. Legal agreements could also be drafted as a deterrent if desired. We recommend starting with a simple agreement + honor system and adjusting from there if it becomes a problem.

Remember that the Ships have an incentive to find trustworthy builders who are likely to keep their word, and the Game Facilitator role performs KYC. If a Ship funds a project team that betrays trust, that will impact the Ship during the end-of-round assessment. Ships can also call out their own projects, declare that they will end funding, and warn other Ships off from funding them as well. This ends up working like an organic reputation system.

It is mentioned that strategic voting was an issue. What are your thoughts on ways to reduce this in future rounds - maybe blind voting, separating voting from rewards, etc?

Additional thing that stood out in the Pilot Retrospective document was the gap between those who had useful, firsthand context and those who were actually able to vote - since many didn’t hold ARB. Could you share your thoughts on how to better capture signal from high-context contributors who might not have governance tokens?

(Will answer these two together as they’re related)

These questions relate to the Assessment Module - which has its own design space. The needs of each community are different, so this part is high-touch and something we design for each client based on their unique concerns and desired outcomes.

The module is built with our versatile in-house voting protocol (Chews Protocol). The assessment module may include a combination of assessment tools we have used in the past including NFT-gated rubric-assessment votes, TCR-style votes for all or some token holders, elected assessment councils, metric analysis, and AI-assisted voter tools.

We have a few options to discourage strategic voting and encourage genuine preference expression.

We have seen that providing assessors with rubric-based voting tools, rather than more simplistic yes/no or slider-weight tools, leads to more sophisticated, nuanced output from the voting body. This type of interface blunts strategic voting by using a max-votes-per-choice model: voters provide a series of ratings on various Ship parameters, and each answer contributes a portion of their vote to that Ship. Votes are also public, so people tend not to answer these questions dishonestly to game the outcome.
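As a hypothetical illustration of what a max-votes-per-choice rubric can look like (the criteria, weights and ratings below are invented for the example; this is not the actual Chews Protocol interface):

```python
# Hypothetical max-votes-per-choice rubric vote. Criteria, weights and
# ratings are invented for illustration; this is not the Chews Protocol API.
RUBRIC_WEIGHTS = {           # each criterion caps its share of a voter's weight
    "milestone_completion": 0.4,
    "builder_satisfaction": 0.3,
    "ecosystem_impact": 0.3,
}

def ship_score(ratings: dict, voter_weight: float = 1.0) -> float:
    """Turn one voter's per-criterion ratings (0..1) into their vote for a Ship.

    Because each answer can only contribute its fixed share of the voter's
    weight, no single question lets a voter dump 100% of their weight on one
    Ship; they have to fill out the whole rubric.
    """
    return voter_weight * sum(
        RUBRIC_WEIGHTS[criterion] * min(max(rating, 0.0), 1.0)
        for criterion, rating in ratings.items()
    )

# One voter's public ratings for a hypothetical Ship:
print(ship_score({"milestone_completion": 0.9,
                  "builder_satisfaction": 0.6,
                  "ecosystem_impact": 0.8}))      # -> ~0.78 of this voter's weight
```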

Regarding Context

We are also implementing an AI-assisted voting module for the Gitcoin GG23 Grant Ships round - which requires voters to signal what they consider important - and then the AI makes recommendations and explains why a particular Ship is aligned with their values. This helps bring even a low-context voter to a mid-context position with the help of AI.

During the Arbitrum Gaming Grant Ships round, we ran the first round with a straight-up Arbitrum TCR vote. As you can see in the pilot report, that excluded many high-context voters and didn’t produce a meaningful signal about actual Ship performance. We resolved this by airdropping a special voting token to those who were most involved in the project, holding a second vote, and weighting the two votes 50/50. This led to a more refined assessment signal.
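A toy sketch of that 50/50 blend (ship names and tallies are invented, not data from the Arbitrum round):

```python
# Toy 50/50 blend of two vote tallies. Ship names and numbers are invented;
# this is not data from the Arbitrum Gaming round.
def blend_tallies(tcr_vote: dict, contributor_vote: dict,
                  w_tcr: float = 0.5, w_contrib: float = 0.5) -> dict:
    """Normalize each tally to shares, then mix them with the given weights.

    Normalizing first means neither vote dominates just because it has more
    raw voting power behind it.
    """
    def shares(tally):
        total = sum(tally.values())
        return {k: v / total for k, v in tally.items()}

    t, c = shares(tcr_vote), shares(contributor_vote)
    ships = set(t) | set(c)
    return {s: w_tcr * t.get(s, 0.0) + w_contrib * c.get(s, 0.0) for s in ships}

print(blend_tallies({"Ship A": 700_000, "Ship B": 300_000},   # token-holder tally
                    {"Ship A": 40, "Ship B": 60}))            # contributor-token tally
# -> Ship A ~0.55, Ship B ~0.45
```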

Participants enjoyed the experience of earning vote tokens, and we used it to incentivize various actions like Ship progress reports and project updates with pictures and video.

As I mentioned, each domain is different, and working with you to produce a well-crafted assessment module is a big part of what we offer. As the proposal matures, we will learn more from the delegates about the goals and values of the Scroll ecosystem and incorporate those as we progress.

For now, it will make more sense to make a request to the DAO for a budget versus relying on sequencer fees.

Sequencer fees are both a funding source and a key metric. We could decouple those two functions: use sequencer fees as a metric that all Ships are invested in driving, and let the DAO treasury provide the actual funding.

We’re excited about root-level treasury routing mechanisms, but we also understand the realities of proposal-based governance and can work around that.

It would also be great to get a sense of the expected cost for the service you’re proposing. Looking forward to hearing more.

It’s difficult to estimate cost without first determining the scope and scale of what you’d like to implement. The level of new design and customization varies based on the ecosystem. We want to have more conversations with delegates before we commit to a number.

There is also the administration and facilitation of the program once it’s live to consider. We offer training so your community can take over the program and run it in a decentralized way.

If the program is flowing enough capital, we can discuss using the Fair Fees model referenced above. Once we are closer to writing up an RFP we can give a proper quote.

Here are a few benefits of a competitive model over traditional approaches:

  • Meritocratic resource allocation - A system that rewards high-performing teams and replaces underperforming ones creates stronger accountability on more frequent cycles than a monolithic structure, which requires a full organizational vote or an internal “day of reckoning” to change. It also carries far less governance overhead when it comes to figuring out who’s doing a good job.

  • Enhanced transparency and learning - Community assessment creates regular feedback loops, allowing successful strategies to be identified and shared across teams. Teams can copy what works. Nobody stays ahead for too long and everybody gets better.

  • Diversified risk and reduced capture - With multiple teams allocating funds independently, the program is more resilient against single points of failure or capture by special interests. If one team underperforms or becomes compromised, the others can continue functioning effectively and the next assessment should reveal the anomaly.

The thing is, you need talent building on the chain to create new meta that attracts users to the chain to buy blockspace. That talent needs funding so you have to give it out somehow.

Grants have been around for centuries to stimulate innovation and research and fill market gaps. They didn’t suddenly stop working in the last two years. What’s happened is that we’ve finally noticed that most DAO grant programs aren’t formulated well. They’re prone to capture, show poor or inscrutable results, waste money, and - with some exceptions - are pretty bad at generating new meta. What we’re offering here is a new governance system for grants, and we believe we have a design that works.

For example, one tangible benefit we’ve noticed with our competitive model is that Ships provide better customer service to their builders when they know the builders have a choice and that they need to attract talent to win. They are trying to win talent over by doing a good job and are actually being held accountable for their results. Not to talk trash, but that’s not something I can say about many other DAO grant programs, especially not in 2021-23.

Also, Grant Ships is a meta-governance system wrapped around multiple independent capital-allocating teams. Our system takes a lot of the overhead off allocators’ shoulders. It handles all of the milestone tracking, progress reports, application processes, and accountability cycles, rescuing allocators from “spreadsheet hell”. This lets them focus on doing one thing well: finding and funding talent. Freed from the overhead, they tend to do their job better.

With this model, the capital allocation talent comes in the form of individuals and teams who are confident enough to put themselves out there and allocate in public. A typical DAO grants program starts by selecting or electing a grants council to a multisig and expecting that group of individuals to self-assemble into a functional team that gives good grants. This almost never happens. Usually you end up with a team of people who played the political game best, not those who are actually good at doing the job.

And lastly (though I could say more), each Ship is free to innovate. If you have a great idea on how to allocate capital, there is nothing stopping you from becoming a Ship operator and testing your mettle against competing strategies. The only rule is that you have to distribute funds in good faith and do it in public. If the DAO sees you get exceptional results with high ROI, the assessment phase will reflect that, your program will get a bigger share of the funding pool, and your strategy will amplify.

We’re always happy to talk with any delegate to get a feel for what you need and value. There’s a lot of room for customization here, and we’re trying to learn more so we can better shape the proposal for your goals. Thanks for all the questions and interest so far!
