Scroll Financial Freedom BD Pilot (Retroactive DCP Experiment)

Summary

This proposal introduces a second iteration of the Delegate Contribution Program (DCP), exploring a retroactive, results-based incentive model for ecosystem expansion.

All Verified Delegates can act as Business Development (BD) agents, identifying and approaching tools aligned with Scroll’s vision of Financial Freedom for the 21st Century.

Important:
This proposal is intentionally framed as a conversation starter. The goal is not immediate execution, but to align Verified Delegates with the Scroll Foundation and Labs on whether this direction makes sense.

Funding request:

  • Total: 40,000 USDT
  • Duration: 8 weeks (if approved)

Expected outcomes:

  • Initial partnership pipeline
  • Validation of retroactive funding as a coordination mechanism
  • Shared understanding of Scroll’s role in the financial tooling ecosystem

Motivation

Scroll appears to be evolving toward a more product-oriented vision (Scroll App), centered on financial freedom as a user outcome. This raises a key question for the DAO:

Should Scroll build everything internally, or should it orchestrate an ecosystem of aligned tools?

This proposal leans toward the second path.

However, BD only makes sense if Foundation and Labs are aligned with building an ecosystem of partners. If that is not the direction, this experiment should not proceed.

At the same time, I think there is a risk in the opposite direction:

Trying to build everything in isolation may slow down progress and reduce leverage.

This proposal aims to open that discussion.

Additionally, @Juansito suggested that it would be valuable to create new proposals that help the DAO collectively make sense of Raza’s vision and how it translates into concrete activities. This is an attempt in that direction.


Execution

Operational

Timeline (8 weeks):

  • Week 1: Setup (BD script, tools, alignment)
  • Weeks 2–6: BD execution
  • Week 7: Lead validation
  • Week 8: Evaluation + retroactive distribution

Core flow:

  1. Map tools across financial Levels
  2. Delegates source and approach leads
  3. Use a shared BD script + success metrics (defined by Foundation)
  4. Track leads in a shared funnel
  5. Foundation handles follow-up and closing

On BD Quality:
Quality consistency is part of the experiment:

  • Foundation defines the script
  • Verified Delegates provide a baseline of trust
  • Results determine whether this model works

The Levels System Framework + Example Leads

| Level | User State | Example Tools / Leads | Viability (1–5) |
| --- | --- | --- | --- |
| L0 – Unaware | No tracking | Neobanks, wallets, fiat onramps | 3 |
| L1 – Awareness | Tracking | Budgeting apps, expense trackers | 4 |
| L2 – Control | Budget + buffer | Stablecoins, savings protocols, payroll infra | 5 |
| L3 – Stability | Debt + safety | Credit, insurance, lending protocols | 4 |
| L4 – Compounding | Investing | DeFi (Aave, LSTs), robo-advisors | 5 |
| L5 – Leverage | Income streams | DAOs, creator tools, yield strategies | 4 |
| L6 – Freedom | Systemized wealth | Automation layers, AI finance tools | 3 |

Personnel & Roles

@BD (All Verified Delegates)

  • Open participation
  • Lead sourcing and qualification
  • Compensation: retroactive, performance-based

@Coordinator / Lead

  • Manages funnel and process

@Learning

  • Evaluates experiment and produces recommendations for next Delegate Contribution Programs.

@Success (to be filled by someone from the Foundation)

  • Defines BD script + success metrics
  • Handles closing

Financial (If Executed)

Total Budget: 40,000 USDT

  • BD Pool: 28,000 (70%)
  • Coordinator: 4,000 (10%)
  • Learning: 4,000 (10%)
  • Buffer: 4,000 (10%)

Distribution timing:

  • Fixed roles: streamed across 8 weeks
  • BD rewards: fully retroactive at the end
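As a rough sketch, the split and timing above work out as follows (the totals and shares come from this proposal; the even weekly streaming schedule for fixed roles is an assumption, since the exact streaming mechanism is not specified):

```python
# Budget split and distribution timing from the proposal.
TOTAL = 40_000  # USDT
SPLIT = {"bd_pool": 0.70, "coordinator": 0.10, "learning": 0.10, "buffer": 0.10}
WEEKS = 8

allocations = {role: TOTAL * share for role, share in SPLIT.items()}

# Assumption: fixed roles stream evenly across the 8 weeks;
# the BD pool is paid out fully retroactively after the week-8 evaluation.
weekly_coordinator = allocations["coordinator"] / WEEKS  # ~500 USDT/week
weekly_learning = allocations["learning"] / WEEKS        # ~500 USDT/week
retroactive_bd = allocations["bd_pool"]                  # ~28,000 USDT at the end
```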

Strategic outputs:

  • Signal of partner willingness
  • Identification of ecosystem gaps

Conclusion

This proposal is a lightweight experiment and coordination prompt:

  • It tests a new incentive model
  • It explores Scroll’s role in a broader ecosystem
  • It helps translate vision into actionable pathways

Open Questions for Delegates

  • Does this direction (ecosystem BD) make sense for Scroll?
  • Does retroactive funding feel like the right mechanism?
  • What tools / projects should be included in the mapping?
  • What are we missing?

1. Should Scroll prioritize building an ecosystem of external financial tools (vs building everything internally)?

  • Yes, ecosystem-first approach
  • No, mostly build internally
  • Hybrid (depends on the Level/use case)
  • Not sure yet

2. Does a retroactive funding model make sense for this type of BD activity?

  • Yes, it strongly aligns incentives
  • Yes, but with some constraints
  • No, prefer fixed roles and upfront allocation
  • Not sure

3. Would you personally participate in this BD pilot under a retroactive model?

  • Yes
  • Maybe (depends on time / clarity)
  • No

4. Do you think Verified Delegates can maintain sufficient BD quality with a shared script?

  • Yes, quality should emerge
  • Yes, but requires strong coordination
  • No, quality will likely be inconsistent
  • Not sure

5. Which Levels do you think are most strategically relevant for Scroll to prioritize? (Select up to 2)

  • L0–L1 (Awareness)
  • L2–L3 (Control & Stability)
  • L4 (Compounding)
  • L5–L6 (Leverage & Freedom)

6. Should Scroll invest (grants or internal) in gaps where no strong tools exist?

  • Yes
  • No
  • Depends on the gap
  • Not sure

Building on this, I think the key is not “what tools to add”, but where Scroll is currently missing coverage across the Levels.

If we consider that USX already anchors the stablecoin layer, then the main gaps are not in core primitives, but in user flows on top of them:

Main gaps (and potential leads)

  • Onboarding (L0–L1): No clear path from “first interaction” → “financial system setup” (onramps + wallet + awareness are fragmented)
    → Leads: Transak, Ramp (on/offramps), Privy, Dynamic (wallet UX)

  • Automation (L2–L3): No “set and forget” layer on top of USX (saving, allocating, recurring actions)
    → Leads: Superfluid (streams), Gelato (automation), Safe (account abstraction flows)

  • Opinionated investing (L4): No default yield path (users must navigate DeFi manually)
    → Leads: Yearn (vaults), Aave (lending), Lido (LSTs)

  • Income / coordination (L5): Missing payment flows, payroll, revenue splitting (biggest structural gap)
    → Leads: Superfluid (again), Coordinape, Dework

  • Automation at scale (L6): No system that “runs itself” (AI / intent-based finance still undefined)
    → Leads: Gelato (again), emerging AI agent frameworks


I would suggest validating:

  • Who is willing to integrate
  • Where strong partners already exist
  • Where Scroll may need to fund or build

Thanks for mapping out these L0–L6 gaps, @alexsotodigital. We completely agree that leveraging the delegate network’s social capital for BD is a massive opportunity to bring value to the DAO. However, regarding the compensation model, we’d suggest considering a hybrid approach, perhaps a 25% fixed / 75% retroactive split. Market dynamics dictate that even a perfect pitch might not close simply due to timing on the partner’s end. Even if a deal doesn’t finalize, the delegate is still utilizing their network, spending time, and bringing back valuable market feedback the DAO can learn from. A clear formula that covers baseline effort and data-gathering, while heavily incentivizing the actual close, feels like a more sustainable path. Curious to hear other thoughts on this.

To address those validation points: Willingness to integrate can be actively tested by tapping into our delegate networks to source warm leads seeking L2 expansion. Since strong partners already exist at the foundational DeFi layer (like stablecoins and DEXs), we shouldn’t waste resources reinventing those primitives. Instead, Scroll must strategically direct its funding and building efforts toward solving the critical L0 onboarding gaps and L4/L5 automation/payroll layers where user friction remains highest.


@Eren_DAOplomats Love the hybrid direction! Building on it, maybe we could reward signal and progression, not just final outcomes.

Proposed model:

  • 20% fixed → baseline effort (Predefined)
  • 30% pipeline value → progression through stages (Milestone-based)
  • 50% outcomes → actual partnerships / integrations (Retroactive)

For the pipeline piece, we could assign simple stages:

  • Qualified lead (meets criteria + real interest)
  • First call / validation
  • Active opportunity
  • Agreement / integration

On top of that, we could introduce a multiplier (e.g. 1x–3x) defined by Labs/Foundation, so they can prioritize certain types of partnerships (by Level, strategic relevance, etc.).
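As a toy illustration of how these pieces could combine: only the 20/30/50 split, the four pipeline stages, and the 1x–3x multiplier come from the comments above; the stage weights, the equal split of the fixed component, and all example numbers are assumptions.

```python
# Illustrative-only pipeline credit per stage (weights are assumptions).
STAGE_WEIGHTS = {
    "qualified_lead": 0.25,
    "first_call": 0.50,
    "active_opportunity": 0.75,
    "agreement": 1.00,
}

def lead_points(stage: str, multiplier: float = 1.0) -> float:
    """Pipeline credit for one lead; the 1x-3x multiplier would be set
    by Labs/Foundation to prioritize strategic partnerships."""
    return STAGE_WEIGHTS[stage] * multiplier

def delegate_payout(pool: float, n_delegates: int,
                    my_points: float, all_points: float,
                    my_closed: int, all_closed: int) -> float:
    """Combine the 20% fixed, 30% pipeline, and 50% outcome components."""
    fixed = 0.20 * pool / n_delegates  # assumption: equal baseline split
    pipeline = 0.30 * pool * (my_points / all_points) if all_points else 0.0
    outcomes = 0.50 * pool * (my_closed / all_closed) if all_closed else 0.0
    return fixed + pipeline + outcomes
```

For example, with a 28,000 USDT BD pool, 10 participating delegates, one lead at "first call" with a 2x multiplier out of 10 total pipeline points, and 1 of 4 closed deals, a delegate would receive 560 + 840 + 3,500 = 4,900 USDT under these assumed weights.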

This is an interesting direction, and I like that it is being framed as a test rather than something we are rushing into.

From a legal and governance lens, what stands out first is the shift to retroactive rewards. Paying based on results sounds fair in theory, but in practice it only works if “results” are clearly defined upfront. Otherwise, we might be opening our doors to a lot of disputes, especially around what counts as a valid lead, who contributed what, and why one outcome is rewarded over another. That clarity needs to be locked in early, not interpreted at the end.

I also think the role split between delegates and the Foundation is important here. Delegates sourcing and the Foundation closing creates a natural gap in responsibility. If a deal falls through, it should be clear where the breakdown happened. Without that, accountability becomes blurry.

On the broader question, I agree with the direction of building through partnerships rather than trying to do everything in-house. It is a more scalable path. But it also means the DAO is stepping into a space that touches real-world relationships, expectations, and in some cases, regulatory exposure depending on the tools being onboarded. That part should not be taken lightly.

Overall, this feels like a solid experiment, but it will only work if the rules of engagement are straightforward and tight. Clear definitions, clean tracking, and transparent evaluation will matter just as much as the BD effort itself.


Really strong direction for the DCP pilot, especially the focus on producing decision-useful, region-specific insights.

One thing I’d add from a Malaysia perspective:

In markets like Malaysia, adoption is heavily driven by:

  • community touchpoints
  • offline-to-online onboarding
  • founder support ecosystems
  • localized partnerships (fiat rails, payment infra, etc.)

I’ve been working closely with builders and communities in SEA, and one pattern is clear: the gap isn’t awareness, it’s structured entry points into the ecosystem, and we are currently building that.

Would be interesting to explore how outputs from this “State of Scroll” can plug directly into:

  • regional hubs / innovation centers
  • community-led onboarding funnels
  • builder activation programs

This could turn DCP from a research initiative into a full-stack ecosystem expansion engine.

Happy to contribute perspectives from the ground in Malaysia if helpful.