Proposal Title: Research to inform a builder support strategy
Proposal Type: Growth
Summary
As Scroll seeks to grow, it’s critical to understand why builders and entrepreneurs choose to develop on a specific ecosystem and what the success factors are.
Critical design choices for a grant program, incubation, and/or other builder support systems depend on these insights to avoid wasting money and to maximise impact.
To deliver these insights and inform Scroll’s approach to ecosystem development, we’ll:
- Gather the lessons learned in previous research and by program managers on grant and builder support programs.
- Collect insights from Labs and Foundation: engage with Labs and the Foundation to collect and synthesise their learnings and insights.
- Conduct user research with builders to understand support gaps (covering research gaps identified in phase 1).
- Present findings via a report focused on lessons for Scroll and organise a live event to disseminate the findings.
- Conduct a workshop (inviting Delegates, Foundation, and Labs) to define Scroll’s framework to builder support based on the findings.
- Summarise the output in a document, memefy key insights, and run a retrospective: findings formatted as a document and framework (further details below) for selecting and informing builder support programs in Scroll, plus memefied key insights for easy consumption, documented learnings from this initiative, and a final NPS survey and asynchronous retrospective on the process.
The ultimate output of this proposal is a set of data-driven, high-level RFPs for the types of builder support programs that would make sense for Scroll, plus a builder support program evaluation framework. The outcome is an informed, well-targeted strategy that provides Scroll maximum ROI on ecosystem development activities.
Duration: 4-5 months, with insights shared from month 2-3 (see estimated timeline section).
Budget: 60,264 Scroll tokens (including interviewee incentives), or 78,344 including a 30% volatility buffer (to be held by the Foundation and returned to the DAO if not needed).
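The buffered figure follows from applying the 30% buffer to the base budget; a quick check, assuming the result is rounded up to the nearest whole token:

```python
import math

base = 60_264  # base budget in Scroll tokens
buffered = math.ceil(base * 1.30)  # 30% volatility buffer, rounded up
print(buffered)  # 78344
```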
Key Stakeholders: builders, grant program managers, and Scroll’s Delegates, Foundation, and Labs.
Motivation
Scroll has limited resources to advance its vision. If these resources are used poorly, Scroll will fail. In contrast, identifying highly effective ways to support builders would provide a key advantage. Significant knowledge on builder support already exists, but it is often outdated, fragmented (e.g. reports focusing only on grant programs rather than a more holistic framing of builder support), and scattered across the individual experiences of builders and program managers.
We aim to consolidate insights and advance primary research to provide high-quality and highly relevant insights so Scroll can support builders effectively and cost-efficiently.
TL;DR: this proposal is for Scroll to avoid wasting millions of dollars on the wrong activities and failing to attract and retain builders. A data-led design of programs and strategy will instead enable Scroll to focus on high-leverage activities and become a leading player.
Disclosure of interest: RnDAO is an active delegate in Arbitrum, Scroll, and zkSync. Additionally, we run builder support programs and research ecosystem development and collaboration challenges. As such, we’re particularly curious about the conclusions of this research and plan to leverage them to inform future proposals.
Execution
Beyond what pure desk research or a survey can accomplish, this initiative will provide Scroll DAO and affiliated entities with:
- A summary of builder support options already tested in web3 and the lessons learnt (beyond only grant programs)
- A deep understanding of builders’ decision-making models and current support gaps
- A framework for selecting and informing the design of builder support programs
Operational overview:
1. Gather previous research and the lessons learned by program managers
Key research questions:
- What has been tried to support builders in Web3?
- What made these programs succeed or fail for builders?
- What made them succeed or fail for ecosystems?
- Which lessons should we keep to improve the efficiency and effectiveness of programs?
Activities:
- Map the types of builder support programs.
- Gather, review, and synthesise previous datasets and research on grant programs (e.g. the State of Web3 Grants Report, RnDAO’s research on ‘Web3 aspiring entrepreneurs’, etc.) and other research on builder support programs.
- Conduct 10 interviews with builder support program managers (currently running or having previously run a program, focusing on the gaps in previous research).
Methodology:
- Desk research
- User research: in-depth interviews
2. Collect insights from Labs and Foundation
Key research questions:
- What are the key strategic considerations by Labs and Foundation?
- What have Labs and Foundation already tried to support builders and which lessons were learnt?
3. Conduct user research with builders
Key research questions:
- Why do builders pick a specific ecosystem?
- What type of support do builders want?
- What unaddressed needs do builders have?
- What types of builders are there (builder personas)?
- How do builders view Scroll?
Activities:
We will:
- Conduct an outreach program to connect with builders.
- Use a short registration form to segment applicants and ensure coverage of the desired dimensions.
- Conduct 40 interviews with protocol and dApp builders (founders and technical leads); see the FAQ for justification of the number of interviews.
- Reward builders who complete the interviews (see interviewee incentives in the budget breakdown).
RnDAO recommended ecosystems to cover:
- Scroll
- Solana
- Base
Additional options for the ecosystem:
- zkSync
- Optimism
- Celo
- Arbitrum
- Starknet
- Polygon
- Polkadot
- Avalanche
Builders selection: include at least 5 interviewees in each of the following labels
- Received support:
  - Grant recipients
  - Having joined/currently in a builder support program (outside of grants)
  - Building in an ecosystem without having joined an official program
- Funding:
  - VC funded
  - Other sources of funding (angel, debt, revenue-based financing, etc.)
  - Web3 grants recipients (includes Gitcoin rounds)
  - Hasn’t raised funding
- Special circumstances:
  - Top hackathon participants/winners
  - Moved to another ecosystem
Methodology:
Voice of the Customer with the Jobs To Be Done framework, including live interviews with all participants that focus on the following topics:
- Participant background and projects
- Specific project demands
- Participant decision-making process around ecosystem and technology
4. Present findings
The deliverables below are intermediate deliverables that will be transformed into an actionable framework for assessing builder support programs in the framework-definition and summary steps that follow.
Deliverable 1:
We’ll compile a comprehensive report with the findings, including
- Executive summary
- Methodology overview
- Builders database
- Detailed research findings, including:
  - builder personas
  - ecosystem decision-making factors
  - existing builder support programs mapping
  - builder needs gap analysis
  - learnings from previous research and primary research on programs
- Actionable recommendations tailored to Scroll’s ecosystem development strategy.
Deliverable 2:
Following the report, we’ll host a live Q&A event to further disseminate the learnings, and make ourselves available for up to 3 hours of chat conversation.
5. Define Scroll’s framework/approach to builder support:
Key Questions:
- What type of builder support programs should Scroll consider?
- What recommendations and key design principles should be taken into account for these programs?
Activities:
We’ll facilitate a workgroup with Delegates, Labs and Foundation members to develop a framework and approach to builder support in Scroll.
Methodology:
Focused work sessions by the team and a variety of stakeholder interactions, including:
- Live workshops (minimum 2, 1-2 hours each)
- Group chat engagement for up to 3 weeks to gather input and feedback
- Async workshops
This step will also include a review of, and alignment with, outputs from related initiatives, including strategy work from Scroll’s Labs and Foundation and the outcomes of the Strategic Deliberation Process.
Deliverable 3:
2 workshops to gather input and feedback
6. Summarise the output in document format
Deliverable 4:
We’ll document and share the outputs from this initiative in a comprehensive document, including:
- High-level builder support framework: types of programs and considerations about said programs
- Research insights (from phase 3).
- Evaluation grid for Builder Support Program Proposals (example to be populated with research findings)
- Suggested next steps and additional recommendations, including high-level RFPs of builder support programs (for DAO, Labs or Foundation to action)
- Lessons learnt from this initiative
This step also includes memefying key insights for easy consumption and running a retrospective and NPS survey on the initiative.
Additionally, as per the foundation’s request, we’ll leverage the learnings from this initiative to work with the Foundation (and interested delegates) in mapping what an ongoing user research program (that’s not dependent on a single service provider) can look like.
Personnel & Resources
This proposal is led by RnDAO.
Daniel Stringer, PhD (project lead and user researcher): User Researcher at RnDAO. Daniel has over 15 years of experience leading user research studies and teaching companies to use human-centred design in their operations at Facebook, The World Bank, Google and other organisations.
https://www.linkedin.com/in/danieltheory/
Mercedes Rodriguez (coordination, outreach, and research support): Operations Manager and community builder with over 7 years of experience in strategic operations, team leadership, and project management within the humanitarian and web3 spaces. Co-Founder of the Ethereum Venezuela Community. Previously Researcher and Operations Manager at El Dorado. YLAI 2025 fellow and Ethereum Foundation Next Billion Fellow.
https://www.linkedin.com/in/mecherodsim
Maya Caddle (analyst): Expert in market expansion and launches. Previously the General Manager and Product Lead of Onboard (now a global Coinbase partner), which she took from a high-level idea to tens of millions in liquidity. She also led a tech unicorn’s expansion into MENA.
https://www.linkedin.com/in/maya-caddle/
Andrea Gallagher (research planning oversight): Drea is the research lead at RnDAO. She has led research teams from Web1 through Web3, including as research lead at Google Suite, Asana, and Aragon, and as an innovation catalyst at Intuit (QuickBooks).
https://www.linkedin.com/in/andreagallagher/
Daniel Ospina (stakeholder management support): Instigator at RnDAO. Previously on the Supervisory Council of SingularityNET and Head of Governance at Aragon; consulted on system design and innovation methodology for Google for Startups, BCG, and Daimler. HBR author.
https://www.linkedin.com/in/conductal/
Additional team members will be involved as needed, including RnDAO’s comms and outreach team in the sourcing of participants.
Financial
The Foundation will receive the funds, review the quality of deliverables, and execute (or withhold) payments to RnDAO according to the payment schedule.
Estimated Timeline
- From kick-off to completion of Step 4 (Deliverables 1 & 2 i.e. presentation of findings): week 0-11
- Completion Step 5 (Deliverable 3 i.e. Framework development workshop): week 12-14
- Completion Step 6 (Deliverable 4 i.e. Final report): week 14-16
If the initiative is not completed within 6 months from kick-off, the Foundation can decide to cancel or modify subsequent payments.
Evaluation
Ultimate goal
The research findings inform the selection and design of builder support programs (using the evaluation checklist, research findings, and programs framework), leading to:
- Proposals created for Builder Support Programs
- Concrete initiatives funded by the DAO/Foundation/Labs
- Changes to initiatives based on research findings
We will also track the NPS score amongst delegates, Foundation, and Labs for the research.
Leading indicators
- % of participation in workshops (tokens and individuals) from the top 100 delegates
- Read time and open rate of research report(s), calculated with DocSend
Conclusion
This research initiative helps Scroll understand how to better support builders. By conducting interviews with builders and program managers, the project will:
- Discover why builders choose specific ecosystems
- Identify what support builders need
- Create builder personas
- Learn lessons from existing support programs
The end goal is to develop a clear, effective framework for Scroll’s builder support strategy. This will help Scroll make smarter decisions about grants, incubation, and support programs, ultimately maximising the impact of its resources and attracting more talented builders to its ecosystem.
The project will result in a comprehensive report, a live event to share findings, and a strategic framework for future builder support initiatives.
FAQ
Value to Scroll
It’s not possible to gate such findings in a DAO. However, the research is tailored to the needs of Scroll (including specific research questions on builders’ perception of Scroll and a focus on Scroll’s level of maturity as an ecosystem). Also, the live events (Q&A and workshops), where we’ll dive deeper than the report allows, are exclusive to the Scroll ecosystem. The ultimate value is derived from quickly translating research findings into initiatives, and here the workshops will provide a dedicated forum to make this happen in Scroll. Additionally, RnDAO plans to continue engaging in Scroll to advance builder support programs.
Ideally, other ecosystems would co-fund research efforts on understanding the needs of builders. However, the research needs of each DAO are different, and co-funding would produce an unfocused scope that ends up superficial, reducing the ability to act on the findings. Also, the complexity of selling to multiple DAOs and partners would delay this initiative and make it unviable (the cost of sales would be too high relative to the fees a service provider can charge). RnDAO might propose distinct but complementary initiatives to other ecosystems; in the places where we’re delegates, we’ll strive to reduce duplication of work.
Research data
The interview recordings and transcriptions will be anonymised after the analysis is completed by the RnDAO research team. Only aggregated findings and conclusions will be shared publicly to ensure the anonymity of research participants.
Participants who opt-in will be added to a database with segmentation data publicly available (e.g. stage of maturity of project, region, having participated in programs, ecosystem, funding, etc.)
Why a research proposal and not a research council or workstream
We envision the creation of a research workstream, with multiple providers, that can continually inform Scroll’s functioning. Building this capability will take time and also requires validating the value of such activities. This research initiative serves as an initial step, designed to quickly provide findings for Scroll, so we can serve the strategy development of the ecosystem. In parallel or sequentially, a more permanent system for research and strategy-making can be scoped and developed.
Will this delay Scroll from executing?
Labs is already operating two educational programs, a 6-week incubator and an 8-week builders residency (IRL). As such, Scroll is already offering multiple builder support activities. This proposal will enable us to refine our understanding of which strategies work and be prepared to scale the right programs and create the right synergies between them.
Why 40 builder interviews?
We need enough interviews to be able to cover a variety of options across dimensions.
The 4 dimensions selected are (pending refinement during the Research Planning phase):
- Program participation: grants, other programs, no program
- Funding: fundraised from VC, fundraised from angels, not fundraised
- Migrated to another ecosystem: yes, no
- Ecosystem: Scroll, Solana, Base
We’re roughly projecting this coverage to ensure we tick all boxes:
- 30 interviews split roughly equally across the program participation options.
- At least 5 builders who migrated to another ecosystem.
- 10 builders from Solana and 10 from Base (implying roughly 20 from Scroll).
- Roughly equal representation across funding sources.
In practice, we’ll segment builders before interviews but we’re highly dependent on the individual journeys they have taken.
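As an illustration only (the actual split will depend on recruitment, and the names below are hypothetical labels, not the final segmentation scheme), a candidate allocation of the 40 slots can be checked against the minimum quotas above:

```python
from collections import Counter

# Hypothetical seat allocation for the 40 interviews; the real mix will
# shift with recruitment, since coverage depends on builders' own journeys.
ecosystems = ["Scroll"] * 20 + ["Solana"] * 10 + ["Base"] * 10
programs = ["grants"] * 14 + ["other_program"] * 13 + ["no_program"] * 13
migrated = [True] * 5 + [False] * 35

plan = [
    {"ecosystem": e, "program": p, "migrated": m}
    for e, p, m in zip(ecosystems, programs, migrated)
]

def quotas_met(plan):
    """Check the minimum coverage targets described above."""
    eco = Counter(b["ecosystem"] for b in plan)
    prog = Counter(b["program"] for b in plan)
    moved = sum(b["migrated"] for b in plan)
    return (
        len(plan) == 40
        and eco["Solana"] == 10 and eco["Base"] == 10
        and all(n >= 5 for n in prog.values())  # at least 5 per label
        and moved >= 5
    )

print(quotas_met(plan))  # True
```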
Statistical significance of findings?
This type of foundational research is generally aimed at exploring a new domain to understand the landscape. We’re not yet at the stage to test the statistical significance of hypotheses.
We could use a survey to quantify a specific insight based on the proposed research. If this is needed, we can contract it directly with the Foundation to avoid delays.
Why $75 per interviewee?
We’re following the guideline set by the Scroll Foundation here.