[FINAL] DCP 1 - State of Scroll Research

Summary

This document outlines the first pilot of the Delegates Contribution Program (DCP). On this occasion, the main output will be a comprehensive State of Scroll research report, combining regional chapters and vertical deep dives to provide a clear view of Scroll’s ecosystem position globally.

A total of 18,000 USDC will be split among participants based on the requirements of each role. The Scroll Foundation will also support the research by providing insights and data.

The pilot is expected to conclude by the end of May 2026.


Goals

The DCP pilot aims to:

  1. Produce a high-signal, decision-useful “State of Scroll” view across key regions and verticals
  2. Support the DAO with shared, credible ecosystem context to inform priorities and decision-making
  3. Reward contributions that combine rigor (data + sourcing) with clarity (storytelling + usable format)

Scope: “State of Scroll”

All outputs are designed to be reusable across formats (PDF, one-pagers, and social media), with the forum post serving primarily as the publication and indexing layer. The following list is indicative and may be adjusted based on data availability and strategic relevance.

Topics / Verticals

Infra

  • Transaction costs
  • Developer experience
  • Tooling
  • Security & privacy characteristics
  • Rails of access to Scroll (on/off-ramps, fiat access, wallet distribution channels, bridges, etc.)

Stablecoins & Onchain Payments

  • Existence of local stablecoins
  • Established infra/apps (wallets, payment apps, on/off-ramps, integrations)

Regulation

  • Digital assets / crypto regulatory landscape (high-level, practical implications for adoption)

Offchain Payment Mechanisms

  • Country/region-specific payment rails
  • Access to USD (or dollar exposure)
  • Financial opportunities (remittances, savings, informal FX, merchant payments, etc.)

Other Use Cases (exploratory)

  • Identity (DID)
  • Voting
  • Privacy

Note: “Other Use Cases” is exploratory and should focus on practical signals and ecosystem readiness, not theoretical overviews.

Regions (with priority emphasis)

  • LATAM (priority emphasis: Honduras)
  • Europe (priority emphasis: Poland)
  • Africa (priority emphasis: Kenya, Nigeria, South Africa)
  • Southeast Asia (priority emphasis: Malaysia, Indonesia)
  • APAC (priority emphasis: Korea, Hong Kong)

Priority emphasis indicates where deeper coverage is expected if data and time allow; it does not restrict research exclusively to those countries.

Scope guardrails: Depth will vary by vertical based on relevance and data availability. Regulation and offchain payment mechanisms should be treated as enabling context and kept concise and decision-useful. “Other Use Cases” is exploratory and may be covered as a short readiness section rather than a full deep dive.

Content Requirements (Must Include)

A) Regional Chapters (one per region)

Each regional chapter must include:

1. Socio-economic context and crypto adoption

  • Practical adoption drivers and constraints
  • A sourced snapshot of the local market context

2. Scroll presence and current footprint

  • What exists today: communities, builders, integrations, awareness
  • Frictions or gaps preventing adoption

3. Stakeholders and ecosystem map (regional)

  • Key actors relevant to Scroll adoption in the region
  • Categorized list (wallets, exchanges, onramps, infra, communities, builders, institutions, etc.)

4. Opportunity queue (actionable)

  • 3-5 prioritized opportunities (integrations, partnerships, campaigns, community strategy, events, etc.)
  • 1 higher-upside “big bet” (optional but strongly encouraged)

B) Vertical Deep Dives (one consolidated post per vertical, cross-regional)

Vertical deep dives will be produced as cross-regional syntheses: regional contributors will include structured vertical inputs within their chapters, and the Global Data and Analytics role will consolidate those inputs into one deep dive per vertical to ensure consistency and comparability.

Vertical deep dives will be produced for the primary verticals (Infra, Stablecoins & Onchain Payments, Regulation, Offchain Payment Mechanisms). “Other Use Cases” will be covered as an exploratory section unless explicitly prioritized.

Each vertical post must include:

  1. Stakeholders and ecosystem map (per vertical)
  • Categories + projects + relationship to Scroll (current or potential)
  2. Competitors
  • Established competitors
  • Potential competitors (emerging or likely entrants)
  3. Key insights and recommendations
  • 3 insights (what we learned)
  • 3 recommendations (what Scroll should do)

C) $SCR Listings (public info only)

A dedicated section must cover $SCR listings using public information only, including:

  • Current known listing status (public)
  • Observations based on public signals
  • A neutral tone (informational, not promotional)

Final Output Structure

1) One consolidated document (single source of truth)

A single compiled artifact (PDF or equivalent) that includes:

  • Global one-pager (executive snapshot): key takeaways, global priorities, top insights, top recommendations
  • Regional chapters (full text) + regional one-pagers (executive snapshots)
  • Ecosystem Map (visual one-pager): a clean, visual overview (by vertical and/or region) that can be read in <2 minutes
  • Consolidated vertical deep dives (one per vertical), synthesized across regions
  • Links to sources (public), plus a short methodology / definitions appendix as needed

2) Social media breakdowns

A set of ready-to-post social formats derived from the report, for distribution and amplification, such as:

  • 1 general “State of Scroll” thread/post (global)
  • 1 additional thread/post focused on key recommendations (global)
  • Optional: 1 thread/post per vertical (if time allows)

These should be consistent in tone and structure, and point back to the forum post(s).

3) Forum publication (mandatory)

All deliverables must be published on the Scroll DAO forum.

  • The forum is the canonical publication venue
  • The consolidated document should be attached or linked from the forum post
  • The forum master post should index and link to the consolidated PDF, regional chapters, and vertical deep dives (one per vertical), as separate posts or clearly marked sections.

Confidential / Foundation-only addendum (when needed):
For competitive advantage reasons, some sections or details may be produced as a Foundation-only addendum (not published publicly) to support strategy development. Any such restriction will be decided jointly by the Operations Committee and the Scroll Foundation, and will be kept as narrow as possible, while preserving a public version that remains decision-useful for the DAO.


Roles and Compensation

The program will select the following roles. Global roles provide horizontal support across all regions and verticals, while Regional roles focus on regional chapters and structured inputs.

Global roles

Data and Analytics (Global) – $5,000

Responsibilities:

  • Support Regional Chapter contributors with the collection, cleaning, and interpretation of data across all verticals, ensuring inputs are consistent and usable.
  • Provide strategic guidance on which data best represents each vertical, including recommended proxies where direct measurement is not feasible.
  • Work closely with the Format and Visualization role to ensure data is presented clearly, accurately, and consistently across the final deliverables (one-pagers, ecosystem maps, and the consolidated report).
  • Coordinate and synthesize the cross-regional inputs into one consolidated deep dive per vertical, ensuring comparability across regions (consistent definitions, metrics, and framing), and producing a draft that can be packaged into the final report and forum posts.

Format and Visualization (Global) – $2,000

Responsibilities:

  • Convert findings into clear, reusable formats: charts, ecosystem maps, tables, infographics, and one-pagers.
  • Maintain a consistent “State of Scroll” visual system (templates, layout, and formatting standards) across all outputs.
  • Package the final deliverables into a single consolidated document, including regional chapters (full text), one-pagers, and ecosystem map(s).
  • Provide lightweight social breakdown support (e.g., draft outlines, key message framing, and recommended visuals to reuse), without requiring bespoke asset production.
  • Coordinate with Data/Analytics to ensure data is accurately represented and consistently labeled across visuals and published materials.

Regional roles

Researchers and Storytellers – $2,200 per region

Responsibilities:

  • Write the regional chapter with sources and a clear narrative
  • Include vertical-specific observations and stakeholder inputs (per the selected verticals) as part of the regional chapter, using the shared definitions and data spec.
  • Provide structured inputs (projects, competitors, insights, recommendations) to support the consolidated vertical deep dives coordinated by the Global Data and Analytics role.
  • Identify stakeholders and produce an ecosystem map for that region
  • Provide a prioritized opportunity queue for Scroll in that region

Total budget assumes up to 5 regional contributors (one per region listed). If fewer regions are selected, total compensation may scale accordingly.
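As a quick sanity check, the per-role figures above reproduce the 18,000 USDC total stated in the Summary. This is only an illustrative sketch; the dictionary keys are labels taken from this post, not official identifiers:

```python
# Hypothetical check that the per-role amounts sum to the stated 18,000 USDC total.
ROLE_COMPENSATION_USDC = {
    "Data and Analytics (Global)": 5_000,
    "Format and Visualization (Global)": 2_000,
}
REGIONAL_RATE_USDC = 2_200   # per Researcher and Storyteller
NUM_REGIONS = 5              # LATAM, Europe, Africa, Southeast Asia, APAC

total = sum(ROLE_COMPENSATION_USDC.values()) + REGIONAL_RATE_USDC * NUM_REGIONS
print(total)  # 18000 — matches the program budget in the Summary
```

If fewer regions are selected, lowering `NUM_REGIONS` shows how the total scales down accordingly.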


Foundation Collaboration

The Scroll Foundation will participate by providing existing references, materials and disclosable data that may be useful inputs for the State of Scroll package. This includes information that is already available to the Foundation and can accelerate the work. However, the goal of the pilot is to deepen, validate, and expand that baseline through broader research and structured synthesis across regions and verticals.

All factual claims included in published deliverables should remain backed by public sources where applicable (links provided), with Foundation input serving as directional context and starting points.


Selection Process

A dedicated application thread will be opened on the Scroll DAO forum on March 5, 2026. Applicants must apply to one role only and include relevant background and region/vertical familiarity.

Application window: March 5–March 11, 2026 (applications close at 19:00 UTC on March 11).

Shortlisting (Program Manager): The Operations Committee will act as Program Manager to verify eligibility, ensure applications are complete, and publish a shortlist per role (including a brief summary of each candidate and links to their applications) by March 12, 2026.

Selection (Verified Delegates vote): Final selection will be made by Verified Delegates through an offchain, forum-based vote.

  • For each role, the Operations Committee will publish a separate voting post containing the shortlist and clear voting instructions.
  • Voting will be public and attributable (delegates vote by replying from their verified delegate forum account).
  • Voting method: Approval voting (each Verified Delegate may vote for one or more candidates they approve for the role).
  • Voting window: March 12–March 18, 2026 (closes at 19:00 UTC on March 18).
  • The candidate with the most approvals wins. In case of a tie, the Program Manager will extend voting for 24 hours for the tied candidates or apply a published tie-break criterion.
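For illustration only, the approval rule above (each delegate may approve several candidates; most approvals wins; a tied top count triggers the 24-hour extension or a published tie-break) can be sketched in a few lines. Delegate and candidate names here are invented, and actual votes are forum replies tallied by the Program Manager, not computed by code:

```python
from collections import Counter

def tally_approval(ballots):
    """Count approvals per candidate and detect ties for first place.

    `ballots` maps each verified delegate to the set of candidates
    they approve; a delegate may approve any number of candidates.
    Returns (winners, counts): `winners` holds one name, or several
    if the top approval count is tied.
    """
    counts = Counter()
    for approved in ballots.values():
        for candidate in set(approved):  # each delegate counts once per candidate
            counts[candidate] += 1
    top = max(counts.values())
    winners = sorted(c for c, n in counts.items() if n == top)
    return winners, counts

# Illustrative ballots (all names are made up)
ballots = {
    "delegate_a": {"cand_1", "cand_2"},
    "delegate_b": {"cand_2"},
    "delegate_c": {"cand_1", "cand_3"},
}
winners, counts = tally_approval(ballots)
print(winners)  # ['cand_1', 'cand_2'] — a tie for first place
```

Here two candidates tie at two approvals each, which is exactly the case where the Program Manager would extend voting for 24 hours or apply the published tie-break criterion.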

Results: Selected contributors will be announced on March 18, 2026.

If no suitable candidate is selected for a region, the pilot may proceed with the remaining regions.


Timeline

Proposed timeline (adjustable, designed to complete within ~8–10 weeks):

  • Week 1 (starting in March): Application process and pre-kickoff setup
  • Week 2: Cohort finalized, role assignments, templates + data spec published; research plan confirmed
  • Weeks 3–5: Research and drafting (regional chapters + vertical snapshots + stakeholder maps); data collection and synthesis with Data/Analytics support
  • Week 6: First full drafts due; internal review begins (peer review + consistency pass)
  • Week 7: Revisions, fact-checking, consistency pass; one-pagers and ecosystem map visuals finalized
  • Week 8: Consolidated PDF compiled and delivered; forum post published to index/link the final package; lightweight social breakdown outlines prepared

The research is expected to be delivered by May 31, 2026.


Quality Bar and Acceptance Criteria

Deliverables are considered accepted when they meet the program’s quality bar. Each deliverable must:

  • Be submitted by the agreed deadlines
  • Follow the required scope and deliverable structure
  • Use public sources for factual claims, with links where applicable
  • Use consistent metrics and a comparable structure across regions and verticals
  • Include clear insights and actionable recommendations (not just summaries)
  • Incorporate required review feedback and resolve requested revisions

Payments and Acceptance

Compensation is paid in two installments:

  • 50% at the start of the pilot, after cohort confirmation and role assignment
  • 50% at the end of the pilot, after the full scope has been delivered, published, and reviewed for acceptance

Payment conditions

  • Final payment is issued only after the work is published and marked accepted by the reviewers.
  • If any deliverable does not meet the acceptance criteria, it must be revised before it is considered complete.
  • If parts of the assigned scope remain incomplete by the end of the pilot window, acceptance may be partial and compensation may be adjusted proportionally to the accepted work.
8 Likes

Thanks for putting this together @SEEDGov

My main callout is whether the compensation levels are realistic, given the depth of research and the deliverables being requested. It might be worth narrowing the required outputs and defining what a successful outcome looks like for each role, given the budget, so that expectations and quality are aligned.

6 Likes

Hi there;
Thanks to @SEEDGov for shaping this and opening the conversation. Here are some initial reactions:

  • On the selection process:

  • I think it would be beneficial to define an application template for the dedicated thread. This would ideally be accompanied by an evaluation rubric that clearly expresses the type of skills expected (so that the selection process tends to be more impartial).

  • It is not clear to me whether one person is expected per role or whether it could be shared by several. If so, should there be a cap? Or how do we avoid the compensation being diluted too much and ending up underpaying?

  • On the goals

I believe that a fourth goal should be to evaluate the pilot in terms of role definition/selection/execution.

Seek to answer questions such as:

  • What is the appropriate time cycle and payment for such a role?
  • How do we access and select the right talent?
  • What is the ideal workflow between the Program Manager and the operational roles?

If this is the DCP pilot, let’s not forget that meta-analysis by focusing all our attention on this deliverable.

2 Likes

Currently, a lot of DAO discussions rely on partial context or personal views. A structured “State of Scroll” can give everyone a shared baseline.

What I think will make or break this is focus. The report should not try to impress with volume; rather, it should highlight what actually matters for growth and SCR strength. The opportunity queues per region are where the real value sits. If those are sharp and realistic, the DAO can act on them immediately.

I would also suggest thinking early about how this report connects to budget decisions. If this becomes a reference document but does not influence grants, partnerships, or incentives, then the effort loses power.

If executed well, I believe this could become something we update yearly and use as a strategic compass. If rushed or too broad, it risks becoming a nice document that no one will ever revisit.

1 Like

Thanks for putting this RFC forward, @seedgov. I like the idea of using a small budget to temp-check the DCP with a “State of Scroll” report, and I think the format (one-pagers, maps, clear next steps) is useful.

What I like

  • Low cost for a first experiment.

  • Clear goal: give delegates and teams a better picture of Scroll by region and topic.

Main worries

  • The work looks big (4 regions + many topics) for 5,000 USDC and 8 weeks.

  • It’s not clear who will write the vertical deep dives or how they are paid.

  • I’m not sure who will check the quality and “accept” the final report.

4 Likes

Thank you to @SEEDGov and the entire Operations Committee for drafting this RFC. Initiating this first pilot is a valuable step toward structured community contribution, and a path forward for delegates to contribute with real value. To ensure this pilot produces the high-quality, actionable insights the DAO needs, I would like to offer the following observations and suggestions:

  • The proposal outlines several key Vertical Deep Dives (Infra, Payments, Stablecoins, etc.), but these do not currently have a designated owner in the “Roles and Compensation” section.

    Question: Could you clarify whether these are intended to be global syntheses or whether they fall under the responsibilities of the Regional Researchers? If it is the latter, we should consider whether the current scope remains realistic for those roles.

I noticed a significant variance between the Data & Analytics compensation ($2,000) and the Regional Researcher compensation ($600). Currently, the Researchers are responsible for the narrative, stakeholder mapping, and the Opportunity Queue, all tasks requiring deep local context over an 8-week period. To ensure the unit economics of the program are balanced, I suggest one of the following adjustments:

  • Option A (Refine Data Scope): If the Data & Analytics role remains at $2,000, we should increase their responsibilities to include the “Vertical Deep Dives” and the creation of the global data benchmarks for the researchers. This would justify the 3x pay gap by making them the primary technical architects of the report.

  • Option B (Budget Rebalancing): Alternatively, we should consider adjusting the Data role to approximately $1,200 and reallocating the compensation differences to the Regional Researchers. This would better reflect the heavy lifting involved in sourcing local insights and ensure we attract the level of expertise required for “high-signal” reporting.

  • Building on @alexsotodigital ’s point regarding an evaluation rubric, I believe standardization is key to a professional final product.
    Question: Will the Operations Committee provide a standardized template or “style guide” for the Ecosystem Maps and Regional Chapters? Ensuring a consistent look and feel from the start will significantly reduce the consistency-pass workload in Week 7 and ensure the final PDF is a cohesive document.

  • The proposal states that final payments are issued once work is accepted by the reviewers.
    Suggestion: To provide certainty for contributors, it would be helpful to define exactly who the Acceptance Committee is (I believe it is the OC, but providing clarity in this proposal will help both participants and the rest of the community). A clear understanding of accountability for the final output will help streamline the Week 6 review phase.

  • As this is a pilot program, the data we gather about the process is just as important as the research itself.

    Recommendation: I suggest we formalize a brief Post-Mortem or feedback loop in Week 8. Capturing insights on data accessibility and workload will be essential for refining the DCP framework before we scale it to larger budgets or more complex tasks.

3 Likes

Thank you all for the time, feedback, and thoughtful suggestions. We really appreciate delegates taking the time to review the RFC and help us improve it.


A few clarifications and follow-ups based on the comments:

1. Scope, compensation, and pilot expectations

We appreciate the comments on scope-to-compensation alignment. Based on this feedback, we will adjust (increase) the proposed compensation amounts and also narrow the scope of the pilot to ensure stronger execution quality and clearer ownership.

All of these adjustments will be reflected in the final version of the program.

2. Application template and evaluation rubric

We agree this is important, and we will implement it in the dedicated application thread to make the selection process as transparent and consistent as possible.

Each selected contributor will have clear ownership over their assigned outputs, and the Operations Committee (as Program Manager) will be responsible for final quality review and acceptance against the published criteria.

3. Selection structure

For clarity, this first iteration of DCP will select only one contributor per proposed role.

This means:

  • 1 Data and Analytics (Global)

  • 1 Format and Visualization (Global)

  • 4 Researchers and Storytellers (1 per region)

There will not be multiple selected contributors for the same role/region in this pilot.

4. Pilot evaluation

We also agree that evaluating the pilot itself is a good idea. In addition to the research outputs, running the DCP should help us learn what works and what should be improved in future iterations (selection process, roles, timeline, scope, and compensation design).

5. Actionability and DAO usefulness

We also strongly agree that this should be a practical and useful output for the DAO, not just a static research document. The intention is for the final package to help inform future discussions around ecosystem priorities, opportunities, and coordination.


The final version of the pilot program, including the adjustments described above, will be published on Wednesday, March 4.

9 Likes