Proposal: Research to inform a builder support strategy

Proposal Title: Research to inform a builder support strategy

Proposal Type: Growth

Summary

As Scroll seeks to grow, it’s critical to understand why builders and entrepreneurs choose to build in a specific ecosystem and what the success factors are.

Critical design choices for a grant program, incubation, and/or other builder support systems depend on these insights to avoid wasting money and maximise impact.

To deliver these insights and inform Scroll’s approach to ecosystem development, we’ll:

  1. Gather the lessons learned in previous research and by program managers on grant and builder support programs.
  2. Collect insights from Labs and the Foundation: engage with both to collect and synthesise their learnings and insights.
  3. Conduct user research with builders to understand support gaps (covering research gaps identified in phase 1).
  4. Present findings via a report focused on lessons for Scroll and organise a live event to disseminate the findings.
  5. Conduct a workshop (inviting Delegates, Foundation, and Labs) to define Scroll’s framework for builder support based on the findings.
  6. Summarise the output in document form, memes, and a retrospective: findings formatted as a document and framework (further details below) for selecting and informing builder support programs in Scroll. This includes memefying key insights for easy consumption, documenting learnings from this initiative, and running a final NPS survey and asynchronous retrospective on the process.

The ultimate output of this proposal is a set of data-driven, high-level RFPs for the types of builder support programs that would make sense for Scroll, plus a builder support program evaluation framework. The outcome is an informed, well-targeted strategy that provides Scroll maximum ROI on ecosystem development activities.

Duration: 4-5 months, with insights shared from month 2-3 (see estimated timeline section).

Budget: 60,264 Scroll tokens (including interviewee incentives), or 78,344 tokens including a 30% volatility buffer (to be held by the Foundation and returned to the DAO if not needed).

Key Stakeholders: builders, grant program managers, and Scroll’s Delegates, Foundation, and Labs.

Motivation

Scroll has limited resources to advance its vision. If these resources are used poorly, Scroll will fail. In contrast, identifying highly effective ways to support builders would provide a key advantage. There’s already significant knowledge on builder support, but it is often outdated, fragmented (e.g. reports focusing only on grant programs instead of a more holistic framing of builder support), and scattered across the individual experiences of builders and program managers.
We aim to consolidate insights and advance primary research to provide high-quality and highly relevant insights so Scroll can support builders effectively and cost-efficiently.

TL;DR: this proposal is about helping Scroll avoid wasting millions of dollars on the wrong activities and failing to attract and retain builders. A data-led design of programs and strategy will instead enable Scroll to focus on high-leverage activities and become a leading player.

Disclosure of interest: RnDAO is an active delegate in Arbitrum, Scroll, and zkSync. Additionally, we run builder support programs and research ecosystem development and collaboration challenges. As such, we’re particularly curious about the conclusions of this research and plan to leverage them to inform future proposals.

Execution

Beyond what pure desk research or a survey can accomplish, this initiative will provide Scroll DAO and affiliated entities with:

  • A summary of builder support options already tested in Web3 and lessons learnt (beyond only grant programs)
  • A deep understanding of builders’ decision-making models and current support gaps
  • A framework for selecting and informing the design of builder support programs

Operational overview:

1. Gather previous research and the lessons learned by program managers

Key research questions:

  • What has been tried to support builders in Web3?
  • What led to successful and unsuccessful cases for builders?
  • What led to successful and unsuccessful cases for ecosystems?
  • Which lessons should we keep to improve the efficiency and effectiveness of programs?

Activities:

  1. Map the types of builder support programs.
  2. Gather, review, and synthesise previous datasets and research on grant programs (e.g. the State of Web3 Grants Report, RnDAO’s research on 'Web3 aspiring entrepreneurs’, etc.) and other research on builder support programs.
  3. Conduct 10 interviews with builder support program managers (currently running or having previously run a program, focusing on gaps in previous research).

Methodology:

  • Desk research
  • User research: in-depth interviews

2. Collect insights from Labs and Foundation

Key research questions:

  • What are the key strategic considerations by Labs and Foundation?
  • What have Labs and Foundation already tried to support builders and which lessons were learnt?

3. Conduct user research with builders

Key research questions:

  • Why do builders pick a specific ecosystem?
  • What type of support do builders want?
  • What unaddressed needs do builders have?
  • What types of builders are there (builder personas)?
  • How do builders view Scroll?

Activities:

We will:

  1. Conduct an outreach program to connect with builders.
  2. Use a short registration form to segment applicants and ensure coverage of the desired dimensions.
  3. Conduct 40 interviews with protocol and dApp builders (founders and technical leads). See the FAQ for justification of the number of interviews.
  4. Reward builders who complete the interviews (see interviewee incentives in the budget breakdown).

RnDAO recommended ecosystems to cover:

  • Scroll
  • Solana
  • Base

Additional ecosystem options:

  • zkSync
  • Optimism
  • Celo
  • Arbitrum
  • Starknet
  • Polygon
  • Polkadot
  • Avalanche

Builders selection: include at least 5 interviewees in each of the following categories:

  • Received support
    • Grant recipients
    • Joined or currently in a builder support program (outside of grants)
    • Building in an ecosystem without having joined an official program
  • Funding
    • VC funded
    • Other sources of funding (angel, debt, revenue-based financing, etc.)
    • Web3 grant recipients (includes Gitcoin rounds)
    • Hasn’t raised funding
  • Special circumstances
    • Top hackathon participants/winners
    • Moved to another ecosystem

Methodology:

Voice of the Customer with the Jobs To Be Done framework, including live interviews with all participants that focus on the following topics:

  • Participant background and projects
  • Specific project demands
  • Participant decision-making process around ecosystem and technology

4. Present findings

The deliverables below are intermediate deliverables that will be transformed into an actionable framework to assess builder support programs in steps 5 & 6.

Deliverable 1:

We’ll compile a comprehensive report with the findings, including:

  • Executive summary
  • Methodology overview
  • Builders database
  • Detailed research findings, including:
    • builder personas
    • ecosystem decision-making factors
    • existing builder support programs mapping
    • builders’ needs gap analysis
    • learnings from previous research and primary research on programs
  • Actionable recommendations tailored to Scroll’s ecosystem development strategy.

Deliverable 2:

Following the report, we’ll host a live Q&A event to further disseminate the learnings, and make ourselves available for up to 3 hours of chat conversation.

5. Define Scroll’s framework/approach to builder support

Key Questions:

  • What type of builder support programs should Scroll consider?
  • What recommendations and key design principles should be taken into account for these programs?

Activities:

We’ll facilitate a workgroup with Delegates, Labs, and Foundation members to develop a framework and approach to builder support in Scroll.

Methodology:

Focused work sessions by the team and a variety of stakeholder interactions, including:

  • Live workshops (minimum 2, 1-2 hours each)
  • Group chat engagement for up to 3 weeks to gather input and feedback
  • Async workshops

This step will also include a review and alignment with outputs from related initiatives, including strategy from Scroll’s Labs and Foundation and the outcomes of the Strategic Deliberation Process.

Deliverable 3:

2 workshops to gather input and feedback

6. Summarise the output in document format

Deliverable 4:

We’ll document and share the outputs from this initiative in a comprehensive document, including:

  • High-level builder support framework: types of programs and considerations about said programs
  • Research insights (from phase 3).
  • Evaluation grid for Builder Support Program Proposals (example to be populated with research findings)
  • Suggested next steps and additional recommendations, including high-level RFPs of builder support programs (for DAO, Labs or Foundation to action)
  • Lessons learnt from this initiative

This step also includes memefying key insights for easy consumption and running a retrospective and NPS survey on the initiative.

Additionally, as per the Foundation’s request, we’ll leverage the learnings from this initiative to work with the Foundation (and interested delegates) on mapping what an ongoing user research program (one not dependent on a single service provider) could look like.

Personnel & Resources

This proposal is led by RnDAO.

Daniel Stringer, PhD (project lead and user researcher): User Researcher at RnDAO. Daniel has over 15 years of experience leading user research studies and teaching companies to use human-centred design in their operations at Facebook, The World Bank, Google and other organisations.

https://www.linkedin.com/in/danieltheory/

Mercedes Rodriguez (coordination, outreach, and research support): Operations Manager and community builder with over 7 years of experience in strategic operations, team leadership, and project management within humanitarian and the web3 space. Co-Founder of the Ethereum Venezuela Community. Previously Researcher and Operations Manager at El Dorado. YLAI 2025 fellow and Ethereum Foundation Next Billion Fellow.

https://www.linkedin.com/in/mecherodsim

Maya Caddle (analyst): Expert in market expansion and launches. Previously the General Manager and Product Lead of Onboard (now a global Coinbase partner), which she took from a high-level idea to tens of millions in liquidity. She also led a tech unicorn’s expansion into MENA.

https://www.linkedin.com/in/maya-caddle/

Andrea Gallagher (research planning oversight): Drea is the research lead at RnDAO. She has led research teams from Web1 all the way to Web3, including as research lead at Google’s G Suite, Asana, and Aragon, and as an innovation catalyst at Intuit (QuickBooks).

https://www.linkedin.com/in/andreagallagher/

Daniel Ospina (stakeholder management support): Instigator at RnDAO. Previously on the Supervisory Council at SingularityNET and Head of Governance at Aragon; consulted on system design and innovation methodology for Google for Startups, BCG, and Daimler. HBR author.

https://www.linkedin.com/in/conductal/

Additional team members will be involved as needed, including RnDAO’s comms and outreach team in the sourcing of participants.

Financial

The Foundation will receive the funds, review the quality of deliverables, and execute (or withhold) payments to RnDAO according to the payment schedule.

Estimated Timeline

  • From kick-off to completion of Step 4 (Deliverables 1 & 2 i.e. presentation of findings): week 0-11
  • Completion Step 5 (Deliverable 3 i.e. Framework development workshop): week 12-14
  • Completion Step 6 (Deliverable 4 i.e. Final report): week 14-16

If the initiative is not completed within 6 months from kick-off, the Foundation can decide to cancel or modify subsequent payments.

Evaluation

Ultimate goal

The research findings inform the selection and design of builder support programs (using the evaluation checklist, research findings, and programs framework), leading to:

  • Proposals created for Builder Support Programs
  • Concrete initiatives funded by the DAO/Foundation/Labs
  • Changes to initiatives based on research findings

NPS score amongst delegates, Foundation, and Labs for the research.

Leading indicators

  • % of participation in workshops (tokens and individuals) from the top 100 delegates
  • Read time and open rate of research report(s), calculated with DocSend

Conclusion

This research initiative helps Scroll understand how to better support builders. By conducting interviews with builders and program managers, the project will:

  • Discover why builders choose specific ecosystems
  • Identify what support builders need
  • Create builder personas
  • Learn lessons from existing support programs

The end goal is to develop a clear, effective framework for Scroll’s builder support strategy. This will help Scroll make smarter decisions about grants, incubation, and support programs, ultimately maximising the impact of its resources and attracting more talented builders to its ecosystem.

The project will result in a comprehensive report, a live event to share findings, and a strategic framework for future builder support initiatives.

FAQ

Value to Scroll

It’s not possible to gate such findings in a DAO. However, the research is tailored to the needs of Scroll (including specific research questions on builders’ perception of Scroll and a focus on Scroll’s level of maturity as an ecosystem). Also, the live events (Q&A and workshops), where we’ll dive deeper than the report allows, are only for the Scroll ecosystem. The ultimate value is derived from quickly translating research findings into initiatives, and here the workshops will provide a dedicated forum to make this happen in Scroll. Additionally, RnDAO plans to continue engaging in Scroll to advance builder support programs.

Ideally, other ecosystems would co-fund research efforts on understanding the needs of builders. However, the research needs of each DAO are different, and that would result in an unfocused scope that ends up superficial, reducing the ability to act on the findings. Also, the complexity of selling to DAOs and to multiple partners would delay this initiative and make it unviable (too high a cost of sales for the low fees a service provider can charge). RnDAO might propose distinct but complementary initiatives to other ecosystems; in the places where we’re delegates, we’ll strive to reduce duplication of work.

Research data

The interview recordings and transcriptions will be anonymised after the analysis is completed by the RnDAO research team. Only aggregated findings and conclusions will be shared publicly to ensure the anonymity of research participants.
Participants who opt in will be added to a database with segmentation data publicly available (e.g. stage of maturity of the project, region, participation in programs, ecosystem, funding, etc.)

Why a research proposal and not a research council or workstream

We envision the creation of a research workstream, with multiple providers, that can continually inform Scroll’s functioning. Building this capability will take time and also requires validating the value of such activities. This research initiative serves as an initial step, designed to quickly provide findings for Scroll, so we can serve the strategy development of the ecosystem. In parallel or sequentially, a more permanent system for research and strategy-making can be scoped and developed.

Will this delay Scroll from executing?

Labs is already operating two educational programs, a 6-week incubator and an 8-week builders residency (IRL). As such, Scroll is already offering multiple builder support activities. This proposal will enable us to refine our understanding of which strategies work and be prepared to scale the right programs and create the right synergies between them.

Why 40 builder interviews?

We need enough interviews to be able to cover a variety of options across dimensions.

The 4 dimensions selected are (pending refinement during the Research Planning phase):

  • Program participation: grants, other programs, no program
  • Funding: fundraised from VC, fundraised from angels, not fundraised
  • Migrated to another ecosystem: yes, no
  • Ecosystem: Scroll, Solana, Base

We’re roughly projecting this coverage to ensure we tick all boxes:

  • 30 interviews with roughly equal participation across the options of the program participation dimension.
  • 5 who migrated to another ecosystem.
  • 10 from Solana and 10 from Base.
  • Roughly equal representation across funding sources.

In practice, we’ll segment builders before interviews but we’re highly dependent on the individual journeys they have taken.
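The quota logic above (a minimum of 5 interviewees per bucket, with one builder potentially filling several buckets at once) can be expressed as a simple coverage check. The sketch below is illustrative only: the applicant records, dimension names, and values are hypothetical placeholders, not part of the proposal's actual segmentation form.

```python
from collections import Counter

MIN_PER_BUCKET = 5  # the proposal's minimum interviewees per category

# Hypothetical applicant records, as might come from the registration form.
applicants = [
    {"program": "grants", "funding": "vc", "migrated": False, "ecosystem": "Scroll"},
    {"program": "none", "funding": "angel", "migrated": True, "ecosystem": "Base"},
    # ... more applicants sourced via outreach
]

def coverage_gaps(people, min_count=MIN_PER_BUCKET):
    """Count interviewees per (dimension, value) bucket and report shortfalls.

    Each person counts toward every bucket they belong to, which is why
    40 interviews can cover more than 8 buckets of 5.
    """
    counts = Counter()
    for person in people:
        for dimension, value in person.items():
            counts[(dimension, value)] += 1
    # Buckets still short of the minimum, mapped to how many more are needed.
    return {bucket: min_count - n for bucket, n in counts.items() if n < min_count}

print(coverage_gaps(applicants))
```

Running this after each outreach wave would show which buckets still need targeted recruiting, matching the "Tetris puzzle" approach described in the replies below.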

Statistical significance of findings?

This type of foundational research is generally aimed at exploring a new domain to understand the landscape. We’re not yet at the stage to test the statistical significance of hypotheses.

We could use a survey to quantify a specific insight based on the proposed research. If this is needed, we can contract it directly with the Foundation to avoid delays.

Why $75 per interviewee?

We’re following the guideline set by the Scroll Foundation here.

12 Likes

Thanks for getting this proposal posted @danielo. Exciting to see the first proposal emerge from the co-design phase of the Co-Creation Cycle.

Overall, the Foundation gov team is excited to see a proposal emerge around user research. We want to make sure that there is some kind of systematic exploration of builder needs, as well as broader needs/problem mapping in the regions that Scroll is prioritizing. I will share this proposal with some others at Scroll in case they can mention other relevant activities.

The overall desired outcomes from our perspective are:

  • do some initial research on builders
  • set the tone for future explorations, whether done by RnDAO or others.

Some comments / questions

  • Is that saying that you’ll try to get 5 grant recipients, and 5 people who ‘Have joined/currently in a builder support program (outside of grants)’, etc.? I ask because I see 9 bullet points there, which would mean 45 interviews, while earlier it says 40 interviews, so just clarifying the gap.
  • How long will the interviews be?
  • Just to re-state on the forum, we recognize the arguments for starting off with both larger and smaller sample sizes. Personally, I would be open to seeing a smaller/cheaper first run, but given these things take time (4-5 months in this case), we get that this is a reasonable starting point.

Conclusion

Excited to hear others’ thoughts. I would be excited to see this go up for a vote in the first voting cycle, which is tentatively slated for January 9th, 2025. I’ll nudge other delegates to take a look as folks come back from holidays.

FYI, there will be some articles hitting the forum starting next week with some background research that may also be relevant for this work.

5 Likes

Thanks for the questions

The same person can fit multiple of these buckets at the same time (e.g. no program, VC funded, didn’t migrate, and are building in Scroll). It’s hard to know in advance (before we launch the form) what sort of mix of buckets people will have and it will be a bit of a Tetris puzzle to get the right mix. So we’re suggesting we’ll do this work and if needed go back out there to try to find extra people to have minimum 5 per bucket.

45-60 minutes is the plan. You’ll see the proposal includes 2h/interview which is enough to do some preparation and also the post-processing of the data.

2 Likes

Got it, thanks Daniel. I’ll let you know if I have other q’s come up for now and will continue to nudge delegates to chime in on the forum

2 Likes

This makes sense, and I will be voting FOR.

Rationale TLDR:

  • ~$75k for 40 interviews
  • Less than 5 months to complete
  • Sharing insights from a dozen ecosystems
  • Led by a proven contributor

A meta comment on this proposal and its eventual output: I hope things get a bit more succinct. The best proposals take the risk of excluding details that are not critical. I can appreciate the burden and incentives that weigh heavily against more concise proposals though.

3 Likes

Thank you for putting together this thorough proposal—it’s a great step toward addressing the challenges faced by builders in the ecosystem. While I appreciate the focus on research, I’d also suggest incorporating more region-specific strategies, particularly for emerging markets and Scroll’s immediate focus areas like Africa & Asia (Kenya, Malaysia, Brazil & South Korea)

Scroll’s strategy of fostering collaboration among founders could be better leveraged by creating peer-to-peer learning frameworks in these targeted regions. (For instance, how did builders from the Celo/Base ecosystems collaborate in Kenya? Or similar projects in similar ecosystems, regardless of region.)
Additionally, is this proposal solely executed by RnDAO, or is it open to external contributors?
Finally, I’d love to understand how the research findings will integrate with community input to ensure alignment with Scroll’s long-term goals. Looking forward to this!

3 Likes

I support this proposal as it addresses a need: understanding how to effectively attract and support builders in the Scroll ecosystem. The research approach and budget seem justified given the potential impact on growth and the overall level of effort for the work.

@danielo I appreciate the thoughtful methodology but wonder if we could create more immediate value by structuring the research in shorter sprints that feed directly into pilot program designs. For example, after interviewing a small cohort of builders, we could quickly test specific support mechanisms rather than waiting for the full 4-5 month research cycle to complete. This would let us validate assumptions rapidly and refine our approach based on real builder feedback, similar to how accelerator programs often iterate their offering during a cohort rather than waiting until the end.

We are in a very active time in the market and I want to make sure we seize that opportunity by attracting the best builders, quickly.

3 Likes

Thanks for your questions

This is a good point. At this early stage there’s only so much we can cover, and going too narrow too soon could leave us with blind spots, but we’ll definitely filter for location and make sure we include builders from Global Majority regions (LatAm, Africa, Southeast Asia) and ideally from Kenya, Malaysia, Brazil & South Korea.

As for researching Celo/Base, that depends on the selection of ecosystems, so it’s for the delegates to decide.

This proposal is managed by RnDAO so we can ensure quality and fast execution. And we’re definitely open to collaborators and contributors!
Also, the idea is to set a first precedent for research so that in the future multiple teams can carry out research initiatives.

The step on defining Scroll’s framework is all about community input. That’s where we’ll leverage the research to work with the community and develop a strategy. Said strategy will then be ratified via individual proposals (independent of this initiative).

4 Likes

We do hope to share insights sooner. We are just not promising this as an official deliverable, as it’s hard to ensure the quality of findings halfway through. :)

4 Likes

Sure understood!

If what I describe matches the spirit of intent then I am good from my end to support this moving forward.

3 Likes

Looks good, especially considering the relatively small budget.

2 Likes

Thanks @danielo for the proposal. I agree with the other delegates here that this would be meaningful research, the phases are well-structured, and the team is very experienced. I’m very much looking forward to the insights.

I appreciate you’ve highlighted the value specifically for Scroll and opened a survey to pick the two ecosystems to compare, as these were my main concerns.
Regarding previous comments, I agree with @gov.borderless.eth that the researchers should keep in mind the geographical diversity of the builders. I also support @Sov’s recommendation to split the interviews into groups and iterate in the process (just from a research perspective).

Besides, I would be interested in the plans of the Foundation, @eugene. Is there already a grants program planned (or some other form of builder support), or is this still to be defined? Just to make sure the research goal and time frame of 4-5 months are in line with the existing timeline.

4 Likes

There are some Foundation led ideas being discussed though nothing concrete yet. We will share as soon as anything has progressed. In addition to the Foundation led ones, there will hopefully be one getting proposed from the DAO in February. This is the draft of ideas that came together from the Co-Creation Cycle.

That having been said, we need these kinds of explorations to help refine and create better focus for the grant programs over time.

To the point of the @gov.borderless.eth comment above, there will also be a lot of opportunities to get really specific in terms of regions (and as @danielo has mentioned in the past, we can also do this by sector). I don’t see an issue with starting with this more broad one as a starting point and adding more specific ones in the future.

The goal of this proposal, in my eyes, is not to be the definitive user research for Scroll DAO, as much as it is to start a program of user research. We will want to see more focused proposals in the future on specific regions or domains. If folks have ideas for such programs, let’s coordinate on getting other proposals that can be more targeted and build off of the work that happens here (assuming y’all approve it).

4 Likes

I strongly believe in this proposal, because the data obtained from it will be important for making key decisions in the Scroll ecosystem for a long time to come.

The timeline and deliverables are quite clear too. Let’s see how it goes.

2 Likes

Thank You @danielo and thanks to everyone who contributed somehow to this proposal.

I would like to ask: for how long will the RnDAO team be available for questions and any support regarding the findings of this investigation once all milestones have been met?

Another area where it would be interesting to get some clarification: is there any way to adjust the deliverables at any point in the project? What I mean is that it is stated that insights will be shared between weeks 9-11. After that happens, if there is interest from the DAO, Foundation, and/or Labs to change direction on a specific objective, or to go deeper into one or more of those objectives, is there a way to do it, and if so, how would that process look?

Thanks for your response and congratulations on the proposal.

Regards

Luis Cedeño ( Ethereum TGU)

3 Likes

Thanks for your questions

The proposal doesn’t include any formal support at that point, but we’re not going anywhere :) We plan to continue engaging in Scroll and are happy to answer a few questions (a couple of hours here and there is fine, within a 1-5 day response time). If the requests become a significant time commitment, we’d make an extra proposal to the Foundation/DAO to be able to continue assisting.

We’re happy to refine the scope as we learn more, i.e. we’ll accommodate slight changes within the current structure. If the changes lead to significant extra work (say, more than 4-6 hours outside the proposed scope), we’d suggest making that a separate proposal. And as mentioned above, we plan to continue contributing to Scroll, so we’re happy to develop additional proposals and continue serving the ecosystem.

TL;DR: As a first project with this ecosystem, we’re incentivised to provide satisfaction, so we’ll aim to do that as long as we’re not running at a loss.

2 Likes

Thanks for the thorough proposal @danielo !
Are there hypotheses for each of the key questions?
These are all critical questions to ask, and preliminary research questions are meant to be open-ended, but the findings can remain too broad.
e.g., if we ask "What type of support do builders want?", interviewees may give diverging answers off the top of their heads, and it is hard to get actionable insight if we don’t have a specific angle or assumption to test.

2 Likes

+1 on the regional user research. Pagoda is interested in proposing a subsequent user research in Asia. Since we have @gov.borderless.eth and other delegates from LATAM, we can do a series of research for region-specific strategies as well!

2 Likes

Hello,

I fully support this proposal and its approach. As said in the form, I think we may also want to have at least one focus on an “outlier”: for example, how is it that Polkadot, with its massive treasury, has not managed to really engage builders? Or, even if there are many builders there, how is it that they did not manage to gain traction?

3 Likes

Thank you for the question

We have done some previous research in this area (with a slightly different scope, and now a bit outdated), so we have some hypotheses that are not yet documented comprehensively. These will be developed further during research planning (included in the proposal) and refined after the desk research.

2 Likes