We’re generally in favor of this proposal and strongly agree that properly managing resources is an important problem. It’s worth looking at Scroll from a meta point of view and making sure we’re attracting builders effectively and efficiently. One thing to consider reviewing is the budget: coordination & logistics will cost ~$18.4K, interviewee incentives ~$3.6K, and research ~$49K. Incentives should be sufficient across the board (both for interviewees and RnDAO) while also ensuring the proposal as a whole seeks a reasonable amount.
Great proposal, @danielo! The focus on understanding builder needs and crafting a support strategy for Scroll is crucial for our growth and sustainability.
On Builders’ Needs and Decision-Making:
The questions posed in this proposal are fundamental. Understanding why builders choose to develop within our ecosystem, the type of support they seek, and their unaddressed needs will allow us to tailor our strategies more effectively. I’m particularly interested in how we can create an environment where builders feel supported not just financially but through mentorship, community engagement, and technical assistance.
Region-Specific Strategies:
Echoing the comments about regional focus, it’s essential we don’t overlook the unique challenges and opportunities in different markets. This research can set the stage for more targeted initiatives that consider the cultural, economic, and technological landscapes of these regions.
Integration with Community and Future Research:
- I appreciate the plan to integrate findings with community input in Step 6. This ensures our strategies are not just top-down but reflective of our community’s collective wisdom.
- The idea of this proposal setting a precedent for ongoing research is exciting. It would be great to see how we can keep this momentum going, perhaps with collaborative efforts or additional focused studies post this initial phase.
Support and Adjustments:
- It’s commendable that RnDAO plans to remain available for post-research support. However, a more structured plan for how adjustments or deeper dives into certain areas would be handled might increase the proposal’s adaptability to emerging insights.
- The willingness to refine scope as we learn more is a strong point, ensuring our research remains relevant and actionable.
Looking forward to seeing how this proposal shapes Scroll’s future in supporting our builders.
From the Foundation standpoint, we would 100% love to see region-specific proposals form after (maybe even concurrent to) this one. We have faith that RnDAO will deliver quality work and we do want to work with them in the future. That said, if @Pagoda, @gov.borderless.eth, or anyone else has suggestions for groups with regional expertise and the ability to conduct on-the-ground user research in Malaysia, Kenya, South Korea, or Brazil, please let us know (either here, in the delegate chat, or via DM).
Glad to see the conversation progressing overall, and excited to have the proposal go to vote soon.
Hi all,
I recently published my intended vote and wrote up my voting rationale for this proposal which you can find here:
Thank you for your comment.
As recommended by the Foundation, we have set interviewee incentives at $75 per interview. We find this budget modest but likely sufficient.
Thank you for your comments.
For context, as suggested by the Foundation, we have added a few hours to discuss with the Foundation team and community how this can happen. The idea is that we can leverage the learnings from this pilot to develop a research framework; although that framework is not a full deliverable of this proposal (it would muddy the scope a bit), we’ve aligned this proposal and our efforts towards making it happen.
Thanks for the thorough rationale! Answering one of the comments below:
I tried using the negation game but couldn’t submit a negation (I ended up in a loop of statement suggestions). Anyhow, the comment is that 2-3 months is the timeline to share research conclusions, while preliminary insights can be shared sooner. Also, it’s not 2-3 months for 40 interviews. It’s 2-3 months for:
- review of previous research and 10 interviews with program managers
- review of work done by the foundation
- 40 interviews with builders
So basically it’s a more comprehensive research initiative, of which the builder interviews are only one component.
Adding for transparency a copy of the async Q&A we had with @Sinkas from L2Beat:
Q1: You mentioned that you would gather lessons learned in previous research and from program managers. Isn’t there enough research out there that could help inform the user research you are doing? Maybe not all aspects of it, or not all the aspects related exclusively to Scroll, but some chunks should already be covered by existing research/reports.
While I understand that you won’t undertake the tedious task of collecting and going through everything simply to pin down exactly what is already covered and what your research will focus on specifically, shouldn’t that affect the time spent as well as the cost? If the existing research covers a lot of the surface, shouldn’t you focus only on the parts that aren’t covered?
A1: Thanks. We did take this into account, as follows:
We’ll start by summarising the existing research (and hunting it down and compiling it, too). This is valuable for us and also for the delegates to get up to speed (scattered, unread research is useless for informing delegates’ decisions, so we’ll make it digestible and create TLDRs as part of the report). Then, after this desk-research phase, we’ll decide where to focus interviews. Our preliminary research shows there’s good research already on grant programs (free cash programs without support) but limited research on other types of builder support programs (accelerators, marketing support, etc.). So we expect to focus our interviews on these less explored areas, but this is TBC after the desk research. There’s a vast surface to explore, so I doubt we can run out of work.
Also, we have read a lot of the grant research and although there are learnings, the specific research question of what determines ecosystem choice (i.e. what attracts builders) has not been covered in anything we have found.
Q2: When it comes to the ecosystems to focus on, why have you proposed the ones you have (Scroll is obvious, but what about Solana and Base?)? You provide some additional options which is basically a rather wide net of ecosystems we could look into - why have you chosen the ones you have specifically? I just want to understand whether there’s some rationale driving your choice, if it was just random, or if it was discussed with others somewhere and I might have missed it.
A2: We propose Solana and Base because both have gotten good traction and done interesting things with builder support outside of grants (and we see that as the less explored territory that we can learn a lot from). Secondarily, both have done a bit in the regions Scroll Labs is interested in.
Q3: Wouldn’t it be helpful if we categorized the potential ecosystems by the type of user support most prominently available? E.g., Arbitrum = mostly DAO/grant-program funding; Base = more funding from Coinbase(?). Could be wrong or missing something here.
A3: That’s an interesting idea. Maybe we could include that in the report after we inventory builder support programs. The proposed scope doesn’t include classifying all ecosystems, but we could start with the 3 selected and make the database easy for ourselves and others to add to.
Q4: When it comes to builder interviews, perhaps it would be beneficial to try to account for survivor bias by locating and talking with projects that applied for a builder support program, were denied, but then kept on building a successful project. I imagine those would be rare cases, so there won’t be many to look into, but it’s worth considering.
A4: Great point! I don’t think these will be hard to locate when we’re paying for interviews. I’ll pass the requirement to the team to try to include a few of these (they fit into the no-program bucket, so we don’t need to change anything in the formal proposal to include this).
Q5: When you write about defining Scroll’s framework for builder support, what do you mean exactly? In my mind, a framework is something we can copy and reuse in the future without necessarily requiring the people who set it up. Is the deliverable you have in mind something like that? If not, perhaps it’s worth clarifying or changing the language there.
A5: I imagine an overview of the types of support programs that would be most effective. A super crude version, for example, would be:
- an accelerator: funding and support for builders with an MVP but no traction yet
- early-stage investor connections: programs for pre-seed projects to connect with investors
- community-building programs: for aspiring founders to meet each other
- etc.
Each bucket here would identify a builder persona (maturity stage + other characteristics TBD via research) and the needs it addresses (list of needs TBD via research). Also, the framework would map how the different programs feed into each other and add up to a cohesive builder support strategy. So in a way it’s like a list of RFPs for builder support programs.
Accompanying this is a checklist, based on the research, to assess proposals for builder support programs. I put a template of how this could look in the research proposal; sharing it here for ease of reference:
Q6: When it comes to the cost breakdown, you have ~19k earmarked for coordination and logistics. That’s a little less than $4,000/month (given a 5-month timeline). What sort of coordination/logistics will be needed that costs that much on top of the cost of the research itself?
A6: The coordination is mostly about scheduling workshops with delegates and the Foundation, facilitating workshops, reporting, sourcing participants, async discussions to get input/feedback on the framework, etc. There’s a full budget breakdown in the second sheet of the budget.
But here’s a screenshot of a filtered view.
Updated proposal
Changelog:
- selection of ecosystems updated to reflect the results of the survey
Note: recommendations about segmenting builders by geography will be taken into account.
Proposal Title: Research to inform a builder support strategy
Proposal Type: Growth
Summary
As Scroll seeks to grow, it’s critical to understand why builders and entrepreneurs choose to develop on a specific ecosystem and what the success factors are.
Critical choices for designing a grant program, incubation, and/or builder support systems depend on these insights to avoid wasting money and maximise impact.
To deliver these insights and inform Scroll’s approach to ecosystem development, we’ll:
- Gather the lessons learned in previous research and by program managers on grant and builder support programs.
- Collect insights from Labs and Foundation: engage with Labs and the Foundation to collect and synthesise their learnings and insights.
- Conduct user research with builders to understand support gaps (covering research gaps identified in phase 1).
- Present findings via a report focused on lessons for Scroll and organise a live event to disseminate the findings.
- Conduct a workshop (inviting Delegates, Foundation, and Labs) to define Scroll’s framework to builder support based on the findings.
- Summarise the output in document format, memefication, and retrospective: findings formatted as a document and framework (further details below) for selecting and informing builder support programs in Scroll. This includes memefying key insights for easy consumption, documenting learnings from this initiative, and a final NPS survey and asynchronous retrospective on the process.
The ultimate output of this proposal is a set of data-driven, high-level RFPs for the types of builder support programs that would make sense for Scroll, plus a builder support program evaluation framework. The outcome is an informed, well-targeted strategy that provides Scroll maximum ROI on ecosystem development activities.
Duration: 4-5 months, with insights shared from month 2-3 (see estimated timeline section).
Budget: 60,264 Scroll tokens (including interviewee incentives), or 78,344 including a 30% volatility buffer (to be held by the Foundation and returned to the DAO if not needed).
Key Stakeholders: builders, grant program managers, and Scroll’s Delegates, Foundation, and Labs.
Motivation
Scroll has limited resources to advance its vision. If these resources are used poorly, Scroll will fail. In contrast, identifying highly effective ways to support builders would provide a key advantage. There’s already significant knowledge on builder support, but this knowledge is often outdated, fragmented (e.g. reports focusing only on grant programs instead of a more holistic framing of builder support), and scattered across the individual experiences of builders and program managers.
We aim to consolidate insights and advance primary research to provide high-quality and highly relevant insights so Scroll can support builders effectively and cost-efficiently.
TL;DR: this proposal is for Scroll to avoid wasting millions of dollars on the wrong activities and failing to attract and retain builders. A data-led design of programs and strategy will instead enable Scroll to focus on high-leverage activities and become a leading player.
Disclosure of interest: RnDAO is an active delegate in Arbitrum, Scroll, and zkSync. Additionally, we run builder support programs and research ecosystem development and collaboration challenges. As such, we’re particularly curious about the conclusions of this research and plan to leverage them to inform future proposals.
Execution
Beyond what pure desk research or a survey can accomplish, this initiative will provide Scroll DAO and affiliated entities with:
- Summary of builder support options already tested in web3 and lessons learnt (beyond only grant programs)
- A deep understanding of builders’ decision-making models and current support gaps
- A framework for selecting and informing the design of builder support programs
Operational overview:
1. Gather previous research and the lessons learned by program managers
Key research questions:
- What has been tried to support builders in Web3?
- What led to the un/successful cases for builders?
- What led to un/successful cases for ecosystems?
- Which lessons should we keep to improve the efficiency and effectiveness of programs?
Activities:
- Map the types of builder support programs.
- Gather, review, and synthesise previous datasets, research on grant programs (e.g. State of Web3 Grants Report, RnDAO’s research on 'Web3 aspiring entrepreneurs’, etc.) and other forms of research on builder support programs.
- Conduct 10 interviews with builder support program managers (currently running or having previously run a program; focus on gaps in previous research).
Methodology:
- Desk research
- User research: in-depth interviews
2. Collect insights from Labs and Foundation
Key research questions:
- What are the key strategic considerations by Labs and Foundation?
- What have Labs and Foundation already tried to support builders and which lessons were learnt?
3. Conduct user research with builders
Key research questions:
- Why do builders pick a specific ecosystem?
- What type of support do builders want?
- What unaddressed needs do builders have?
- What types of builders are there (builder personas)?
- How do builders view Scroll?
Activities:
We will:
- conduct an outreach program to connect with builders.
- use a short registration form that lets builders sign up for the research and allows us to segment applicants and ensure coverage of the desired dimensions.
- conduct 40 interviews with protocol and dApp builders (founders and technical leads). See the FAQ for justification of the number of interviews.
- reward builders who complete the interviews (see interviewee incentives in the budget breakdown).
Ecosystems to cover:
- Scroll
- Solana
- Base
These ecosystems were selected based on a survey, adjusted for voting power. Additional options for the ecosystem included:
- zkSync
- Optimism
- Celo
- Arbitrum
- Starknet
- Polygon
- Polkadot
- Avalanche
Survey results:
Builder selection: include at least 5 interviewees in each of the following labels:
- Received support:
  - Grant recipients
  - Having joined/currently in a builder support program (outside of grants)
  - Building in an ecosystem without having joined an official program
- Funding:
  - VC funded
  - Other sources of funding (angel, debt, revenue-based financing, etc.)
  - Web3 grant recipients (includes Gitcoin rounds)
  - Hasn’t raised funding
- Special circumstances:
  - Top hackathon participants/winners
  - Moved to another ecosystem
Methodology:
Voice of Consumer with the Jobs To Be Done framework, including live interviews with all participants, focused on the following topics:
- Participant background and projects
- Specific project demands
- Participant decision-making process around ecosystem and technology
4. Present findings
The deliverables below are intermediate deliverables that will be transformed into an actionable framework to assess builder support programs in steps 5 & 6.
Deliverable 1:
We’ll compile a comprehensive report with the findings, including
- Executive summary
- Methodology overview
- Builders database
- Detailed research findings, including
- builder personas
- ecosystem decision-making factors
- existing builder support programs mapping
- builders needs gap analysis
- learnings from previous research and primary research on programs
- Actionable recommendations tailored to Scroll’s ecosystem development strategy.
Deliverable 2:
Following the report, we’ll host a live Q&A event to further disseminate the learnings, and make ourselves available for up to 3 hours of chat conversation.
5. Define Scroll’s framework/approach to builder support:
Key Questions:
- What type of builder support programs should Scroll consider?
- What recommendations and key design principles should be taken into account for these programs?
Activities:
We’ll facilitate a workgroup with Delegates, Labs and Foundation members to develop a framework and approach to builder support in Scroll.
Methodology:
Focused work sessions by the team and a variety of stakeholder interactions, including:
- Live workshops (minimum 2, 1-2 hours each)
- Group chat engagement for up to 3 weeks to gather input and feedback
- Async workshops
This step will include a review of, and alignment with, outputs from related initiatives, including strategy from Scroll Labs and the Foundation and the outcomes of the Strategic Deliberation Process.
Deliverable 3:
2 workshops to gather input and feedback
6. Summarise the output in document format
Deliverable 4:
We’ll document and share the outputs from this initiative in a comprehensive document, including:
- High-level builder support framework: types of programs and considerations about said programs
- Research insights (from phase 3).
- Evaluation grid for Builder Support Program Proposals (example to be populated with research findings)
- Suggested next steps and additional recommendations, including high-level RFPs of builder support programs (for DAO, Labs or Foundation to action)
- Lessons learnt from this initiative
This step also includes memefying key insights for easy consumption and running a retrospective and NPS survey on the initiative.
Additionally, as per the foundation’s request, we’ll leverage the learnings from this initiative to work with the Foundation (and interested delegates) in mapping what an ongoing user research program (that’s not dependent on a single service provider) can look like.
Personnel & Resources
This proposal is led by RnDAO.
Daniel Stringer, PhD (project lead and user researcher): User Researcher at RnDAO. Daniel has over 15 years of experience leading user research studies and teaching companies to use human-centred design in their operations at Facebook, The World Bank, Google and other organisations.
https://www.linkedin.com/in/danieltheory/
Mercedes Rodriguez (coordination, outreach, and research support): Operations Manager and community builder with over 7 years of experience in strategic operations, team leadership, and project management within the humanitarian and web3 spaces. Co-Founder of the Ethereum Venezuela Community. Previously Researcher and Operations Manager at El Dorado. YLAI 2025 fellow and Ethereum Foundation Next Billion Fellow.
https://www.linkedin.com/in/mecherodsim
Maya Caddle (analyst): Expert in market expansion and launches. Previously the General Manager and Product Lead of Onboard (now a global Coinbase partner), she took it from a high-level idea to tens of millions in liquidity. She also led a tech unicorn’s expansion into MENA.
https://www.linkedin.com/in/maya-caddle/
Andrea Gallagher (research planning oversight): Drea is the research lead at RnDAO. She has led research teams from Web1 all the way to Web3, including as research lead at Google Suite, Asana, and Aragon, and as an innovation catalyst at Intuit (QuickBooks).
https://www.linkedin.com/in/andreagallagher/
Daniel Ospina (stakeholder management support): Instigator at RnDAO. Previously on the Supervisory Council at SingularityNET and Head of Governance at Aragon; consulted on system design and innovation methodology for Google for Startups, BCG, and Daimler. HBR author.
https://www.linkedin.com/in/conductal/
Additional team members will be involved as needed, including RnDAO’s comms and outreach team in the sourcing of participants.
Financial
Budget for Scroll Research to inform a builder support strategy
The Foundation will receive the funds, review the quality of deliverables, and execute (or withhold) payments to RnDAO according to the payment schedule.
Estimated Timeline
- From kick-off to completion of Step 4 (Deliverables 1 & 2, i.e. presentation of findings): weeks 0-11
- Completion of Step 5 (Deliverable 3, i.e. framework development workshop): weeks 12-14
- Completion of Step 6 (Deliverable 4, i.e. final report): weeks 14-16
If the initiative is not completed within 6 months from kick-off, the foundation can decide to cancel or modify subsequent payments.
Evaluation
Ultimate goal
The research findings inform the selection and design of builder support programs (via the evaluation checklist, research findings, and programs framework), leading to:
- Proposals created for Builder Support Programs
- Concrete initiatives funded by the DAO/Foundation/Labs
- Changes to initiatives based on research findings
NPS score amongst delegates, Foundation, and Labs for the research.
Leading indicators
- % of participation in workshops (tokens and individuals) from the top 100 delegates
- Read time and open rate of research report(s), calculated with DocSend
Conclusion
This research initiative helps Scroll understand how to better support builders. By conducting interviews with builders and program managers, the project will:
- Discover why builders choose specific ecosystems
- Identify what support builders need
- Create builder personas
- Learn lessons from existing support programs
The end goal is to develop a clear, effective framework for Scroll’s builder support strategy. This will help Scroll make smarter decisions about grants, incubation, and support programs, ultimately maximizing the impact of its resources and attracting more talented builders to its ecosystem.
The project will result in a comprehensive report, a live event to share findings, and a strategic framework for future builder support initiatives.
FAQ
Value to Scroll
It’s not possible to gate such findings in a DAO. However, the research is tailored to the needs of Scroll (including specific research questions on builders’ perception of Scroll and a focus on Scroll’s level of maturity as an ecosystem). Also, the live events (Q&A and workshops), where we’ll dive deeper than the report allows, are only for the Scroll ecosystem. The ultimate value is derived from quickly translating research findings into initiatives, and here the workshops will provide a dedicated forum to make this happen in Scroll. Additionally, RnDAO plans to continue engaging in Scroll to advance builder support programs.
Ideally, other ecosystems would co-fund research efforts on understanding the needs of builders. However, the research needs of each DAO are different, and that would result in an unfocused scope that ends up being superficial, reducing the ability to take action based on the findings. Also, the complexity of selling to DAOs and to multiple partners would delay this initiative and make it unviable (too high a cost of sales for the low fees a service provider can charge). RnDAO might propose distinct but complementary initiatives to other ecosystems; in the places where we’re delegates, we’ll strive to reduce duplication of work.
Research data
The interview recordings and transcriptions will be anonymised after the analysis is completed by the RnDAO research team. Only aggregated findings and conclusions will be shared publicly to ensure the anonymity of research participants.
Participants who opt-in will be added to a database with segmentation data publicly available (e.g. stage of maturity of project, region, having participated in programs, ecosystem, funding, etc.)
Why a research proposal and not a research council or workstream
We envision the creation of a research workstream, with multiple providers, that can continually inform Scroll’s functioning. Building this capability will take time and also requires validating the value of such activities. This research initiative serves as an initial step, designed to quickly provide findings for Scroll, so we can serve the strategy development of the ecosystem. In parallel or sequentially, a more permanent system for research and strategy-making can be scoped and developed.
Will this delay Scroll from executing?
Labs is already operating two educational programs, a 6-week incubator and an 8-week builders residency (IRL). As such, Scroll is already offering multiple builder support activities. This proposal will enable us to refine our understanding of which strategies work and be prepared to scale the right programs and create the right synergies between them.
Why 40 builder interviews?
We need enough interviews to be able to cover a variety of options across dimensions.
The 4 dimensions selected are (pending refinement during the Research Planning phase):
- Program participation: grants, other programs, no program
- Funding: fundraised from VC, fundraised from angels, not fundraised
- Migrated to another ecosystem: yes, no
- Ecosystem: Scroll, Solana, Base
We’re roughly projecting this coverage to ensure we tick all boxes:
- 30 interviews split roughly equally across the options of the program-participation dimension.
- 5 with builders who migrated to another ecosystem.
- 10 from Solana and 10 from Base.
- Roughly equal representation across the different funding sources.
In practice, we’ll segment builders before interviews, but we’re highly dependent on the individual journeys they have taken.
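For illustration only, here’s a minimal Python sketch of how such a coverage check could work. The records, segment labels, and quota targets are hypothetical placeholders, not final recruiting criteria:

```python
from collections import Counter

# Hypothetical interviewee records tagged along the four dimensions
# (in practice, populated from the registration form).
interviewees = [
    {"program": "grant", "funding": "vc", "migrated": False, "ecosystem": "Scroll"},
    {"program": "none", "funding": "angel", "migrated": True, "ecosystem": "Base"},
    # ... further records as builders register
]

# Rough quota targets from the proposal (placeholders, pending research planning).
quotas = {
    ("ecosystem", "Solana"): 10,
    ("ecosystem", "Base"): 10,
    ("migrated", True): 5,
}

def coverage_gaps(records, targets):
    """Return the quota buckets that are still short, and by how many interviews."""
    counts = Counter()
    for record in records:
        for dimension, value in record.items():
            counts[(dimension, value)] += 1
    return {bucket: needed - counts[bucket]
            for bucket, needed in targets.items()
            if counts[bucket] < needed}

print(coverage_gaps(interviewees, quotas))
# e.g. {('ecosystem', 'Solana'): 10, ('ecosystem', 'Base'): 9, ('migrated', True): 4}
```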
Statistical significance of findings?
This type of foundational research is generally aimed at exploring a new domain to understand the landscape. We’re not yet at the stage to test the statistical significance of hypotheses.
We could use a survey to quantify a specific insight based on the proposed research. If this is needed, we can contract it directly with the Foundation to avoid delays.
Why $75 per interviewee?
We’re following the guideline set by the Scroll Foundation here.
Thanks for the revised proposal @danielo. I commend the initiative and will be voting FOR.
Starting with a research proposal is a good step forward. A thorough understanding of the builder audience and their motivators is key to designing impactful growth strategies.
That said, establishing recruiting criteria when selecting participants for the interviews will be key to curbing bias. Regionality was brought up in some of the previous comments. While this is qualitative research with 40 interviews, it would be good to consider how to incorporate Scroll’s strategic regions into respondent recruiting, to uncover insights into if/how builder support might need to be tailored.
Thank you, Daniel, for drafting this proposal. While DAO-funded user research initiatives are relatively uncommon, we believe the ethos of this proposal aligns well with the values and priorities expressed by the Foundation during the co-creation cycle.
We have a few comments and questions regarding the proposed research:
- Clarification of Goals:
Could you clarify the overarching goal of the user research? Is it to pilot a DAO-funded grant program, create a builder support program, or address a different objective?
- Deliverables:
The proposed deliverables are described at a high level. Could you share examples of prior user research conducted by RnDAO? This will help the DAO set its expectations accordingly.
- Budget Concerns:
The hourly rate budgeted appears high. According to Glassdoor, the highest annual compensation for a User Researcher in the United States is $169,000, which equates to $81/hour. However, your proposal budgets $150/hour. Could you provide justification for this rate?
The budget also includes a 20% overhead for routine business expenses like accounting, insurance, and legal. Since these costs are unrelated to the proposal, we recommend removing them.
- Focus Ecosystems:
Should this proposal move forward, we recommend focusing on ecosystems most relevant to Scroll and the goals of the program. Specifically:
Ethereum Mainnet: As a baseline for comparison.
Comparable EVM L2 Ecosystems: ZKsync, Arbitrum, or a leading Superchain ecosystem, as measured by TVL (e.g., Optimism, Base). All of these chains have established developer grant programs and builder support mechanisms that Scroll can learn from.
Thanks to @Danielo and team for putting together this proposal. We see the value in approaching ecosystem growth bottom-up, with research as the foundation.
Especially for us whose core focus is capital allocation in DAOs, we share a similar view that providing builder support is much more than grant programs. There are other effective and unique ways of allocating capital towards supporting ecosystem growth, and we would like to see this at play in Scroll DAO. We think this proposal is, in general, a step in the right direction and would like to highlight some points:
- We appreciate the inclusion of a retrospective as part of the deliverables. We just want to clarify - at what stage will this be conducted? To us, a retrospective seems to be most valuable over a longer period when recommendations from the research have been tested and implemented by the DAO. Does this align with your thinking? And if done this way, will it impact the proposed budget given that this may extend reasonably well into the future?
Once again, thanks for putting this together.
Thank you for your questions.
The goal is to define which builder support programs would make the most sense to fund. That could include a grant program of some shape, or not.
The format of the deliverables of this initiative will be different (different scope), but you can see some examples of previous research here (note that some of these are public briefs, so a bit simplified compared to the full reports, which are private):
- What is a web3 community and when is it healthy (full report)
- Decisions in DAOs - What Causes the Pain? (summary)
- Compensation in early stage teams (by a fellow we mentored but less expert than our core research team)
- Before the Proposal (by a fellow we mentored but less expert than our core research team)
- What is a DAO? Conceptual Foundations (full report)
- Top 9 Challenges of DAO Unit Success (summary)
- Why and How we SubDAO (summary)
That hourly rate is for full-time employment, which this is not. Here we’re talking about a short-term service-provision contract. I invite you to look into the margins recommended to make this form of contracting economically viable, which are usually 2-5x.
That’s because, on top of employment costs, there are many additional costs (contracting, insurance, accounting, payment processing, team management, legal registration fees, etc.) as well as downtime.
Basically, we’re just using the standard way of providing services in Web2 and Web3, used by regular agencies, consultants, and freelancers.
Billing at in-house employee rates is not possible when a DAO neither covers all the other costs that companies cover for employees nor provides a full-time employment contract.
And for reference, a quick rule of thumb in HR is to account for about an extra 50% of a salary in additional costs the company incurs, and that’s for contracts of a year or longer. Shorter-term contracts are more expensive, as mentioned above.
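As a rough sanity check of these numbers (our own illustration, using only the Glassdoor figure quoted above and the 2-5x contractor multiple mentioned):

```python
# Illustrative back-of-the-envelope check, not part of the proposal's budget.
top_salary = 169_000              # highest US User Researcher salary per Glassdoor
billable_hours = 40 * 52          # 2,080 hours in a full-time year
employee_rate = top_salary / billable_hours        # ~$81/hour, as quoted above
contractor_range = (2 * employee_rate, 5 * employee_rate)  # 2-5x viability multiple
print(f"employee: ${employee_rate:.0f}/h, "
      f"contractor: ${contractor_range[0]:.0f}-${contractor_range[1]:.0f}/h")
# employee: $81/h, contractor: $162-$406/h
```

On these assumptions, the $150/hour billed here actually sits slightly below the 2x floor of that range.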
The wrong conclusion here would be that no service providers should be used and organisations should only employ full-time staff. Legions of MBAs have gone over these dynamics time and again for millions of companies, and the conclusion remains that a mix of in-house employees and external suppliers is most frequently the best answer from a financial and strategic perspective. But of course this needs to be looked at on a case-by-case basis.
Here, Scroll has a time imperative (people are already concerned the 2-3 month timeline is too long), there’s no previous experience in the DAO of running and managing research initiatives, it would take additional time and risk to hire a well-rounded research team (taking time away from other initiatives), and there would be additional HR and general admin costs in growing headcount.
Thank you, although we’re not the ones deciding this. The ecosystems have been selected by a vote of the delegates via the form that was shared.
Thank you for the question.
A rough estimate is that the longer-term impact will be visible about 5-14 months after the proposal, based on this crude calculation:
- 2 months to develop and approve builder support program proposal
- 3-12 months to execute on those pilots
Extending this contract for an additional 5-14 months (the actual timeline is unknown, as it depends on future proposals) feels impractical to us. So the retrospective has been planned for after the research has been used to develop a builder support framework (Step 6), rather than waiting for future proposals and their execution. I.e. the retrospective will provide insight into the value of the outputs (research) and outcomes (use of the research to create a builder support framework, and the perceived usefulness of that framework), but assessing the longer-term impact is omitted.
That being said, we plan to stay in the ecosystem and are likely to look into an assessment of the impact when the time comes (that would give us a nice case study). Also, in Scroll, compared to say Arbitrum, the Foundation is set up to execute on initiatives that are important and not covered by the DAO (as the impact retrospective could be argued to be).
Cost-wise, thankfully we now have tools like Harmonica, which make running asynchronous retrospectives a lot easier and cheaper than hosting a live workshop! So the retrospective is an almost negligible cost (other than 30 min to design it and the time chasing people to answer it).
In this sense, if the foundation or any other party wanted to run an impact retrospective, the cost would be only 2-4 hours to chase people to answer + harmonica (currently free but say $100 max for a subscription if they decide to charge by then, note it’s also open source…).
Thank you for your suggestion.
We’re indeed planning to account for this. Participant selection will use a form where participants need to disclose certain segmentation data (we’ll use this to ensure we’re covering the different buckets). We plan to also include a geography question, and when possible we’ll privilege participants from the regions that Scroll Labs is focusing on.
blockful supports this proposal.
We believe it will be valuable for the DAO to have its first well-structured framework for initiating grants. I think what the DAO needs at the moment is exactly this type of framework to work towards and achieve its goals.
Many of the other delegates have addressed several questions I had, but I would still like to understand more, particularly about how RnDAO and the Foundation will collaborate.
I am curious to know: 1) How will the Foundation iterate on this research? 2) Do they already have plans to implement this internally, with the DAO serving as a support? 3) How will the rest of the DAO be involved in these discussions and exchanges between RnDAO and the Scroll Foundation?
Finally, I appreciated Sov’s comment. One thing that could be particularly beneficial, especially regarding grants, is drawing insights from the Cartographers Syndicate, an initiative by @Sov. There is a lot of material, research, and tools there that could be useful around grants.
Hello @danielo, we have a couple of last-minute questions that we’d like you to address before official voting kicks off tomorrow.
- Is there a plan for what type of projects/builders will be included in the interviews and desk research? We believe it would be beneficial to include builders (from the selected ecosystems) currently working on different lines of projects (DeFi, AI, gaming, NFTs, DEXs, etc.).
- What is the plan regarding the regions the builders are from? Is every region going to get the same priority when selecting the people interviewed and researched?
On another note, @eugene, we would like to know whether Scroll has at any point run any form of user research/benchmarking similar to the one included in this proposal.
Thanks in advance for your responses as they will help us determine our final decision on the direction of our vote.
Regards.
Luis Cedeño (Ethereum TGU)
We do plan to include a variety of builders. No specific segmentation has been proposed but we’ll prioritise variety whenever possible.
Region is not the primary criterion either, as we’re looking to learn from what has been done so far, and a lot of that wasn’t region-focused. That being said, whenever possible we’ll prioritise the regions Scroll Labs has selected.
Although this wasn’t directed at us, for clarity, we’ve included some work to align with the Foundation and Labs on their theories, research, hypotheses, etc. And I know Eugene has participated in research on grants before Scroll, so we’ll also be able to build on top of those foundations and not reinvent the wheel.
This research sets the foundation for both the Foundation and the DAO to propose programs based on it, thanks to the framework we’ll deliver (which works as a set of high-level RFPs). The approach so far has been that the DAO is given the opportunity to lead, and if it doesn’t, the Foundation can quickly take the lead.
I expect this to follow a similar pattern, where the DAO/Foundation can propose builder support programs with the research insights.
Additionally, this is the first project of a research program that both the Foundation and the DAO can develop. As such, we’ve ensured we include space for a retrospective and time to help inform the broader research program based on this pilot.
The DAO will participate in the presentation of the research findings and then in the deliberation to develop the research framework. So this is not going to be just between RnDAO and the Foundation, but co-created with the community.
Thanks for highlighting this. We’ll make sure to include them in the desk research.
I’m voting FOR this proposal on Agora. Although I usually prefer to keep proposal ideation separate from execution for unbiased oversight, I’m comfortable backing this initiative as our DAO is still maturing. I’ve seen Daniel’s work firsthand and trust his expertise to drive this research forward. As for the proposal itself, I don’t have much to add since I kinda joined the discussion late, but I firmly believe that building a robust research foundation is crucial for steering our journey ahead.