General Learnings & Insights
Unexpected Outcomes
Native Language = Better Insights Spanish-speaking delegates’ ability to speak to Harmonica in their native language revealed previously hidden perspectives. This wasn’t just translation but genuine accessibility, suggesting significant latent insight from delegates whose native language isn’t English.
Mid-Session Pivot Capability The ability to adjust prompts based on early responses (Session 1’s shift from “stop” to “start” focus) demonstrated AI’s advantage over traditional surveys.
Expertise vs. Priority Session 2’s design allowed expertise-based domain selection, but this created tension with prioritization goals. Participants might deeply explore less important domains while skipping critical but unfamiliar ones.
Timing and Scheduling
Day of Week Monday workshops created unnecessary stress for both facilitators and participants. The rush to prepare after weekend breaks reduced the quality of engagement. Future cycles should schedule workshops mid-week, allowing Monday for final preparation and participant reminders.
Conferences and Holidays DevCon and holiday periods created momentum losses requiring extensive recapping. Future cycles should either:
- Avoid these periods entirely, or
- Design them as natural break points in the process structure
Limited Time for Planning
In this instance, Harmonica was engaged somewhat late relative to the start date due to moving pieces on the governance team’s side. This limited our ability to fine-tune the sessions and smooth their transition into workshops. While we see the overall process as a success, ideally there would be at least 3-4 weeks of preparation time ahead of the first sessions to leave ample time for planning and prompt engineering.
Consistency and Synthesis
Cross-session (org level) context Throughout the process, our domain taxonomy evolved, which resulted in slight inconsistencies in categorizing similar responses across sessions, with different terminology used for equivalent concepts. This created additional synthesis work and potential confusion. As a potential solution, we will experiment with storing org-relevant knowledge, including taxonomies, e.g. in the form of a Markdown file (similar to project instructions in Claude).
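To illustrate, such a shared knowledge file might look like the sketch below. The format and the synonym lists are assumptions on our part; the canonical domain names would come from the org’s own taxonomy (Ecosystem Growth and Education are among the domains that surfaced in this cycle).

```markdown
# Scroll DAO – Shared Context for AI Facilitation (illustrative sketch)

## Domain Taxonomy (canonical names)
- Ecosystem Growth      (map synonyms like "growth", "BD", "partnerships")
- Education             (map synonyms like "onboarding", "learning resources")
- Governance Operations (map synonyms like "gov ops", "accountability")

## Conventions
- Categorize every response under exactly one canonical domain above.
- Flag responses that fit no domain rather than inventing a new one mid-cycle.
```

Loading the same file into each session’s system prompt would keep terminology consistent across sessions without manual reconciliation.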
Lost in Translation Not everything surfaced in Harmonica sessions was addressed in workshops. Notably, Education emerged as the highest priority domain in Session 2 but was deliberately skipped in Workshop 2 given existing activities planned by other ecosystem stakeholders. This disconnect between async discovery and sync discussion needs explicit management and communication, both in the context of the specific exercise and more broadly amongst the major stakeholders.
Better Scope Management
Learning Curve Across Cycles
- CCC1: Started with ambitions for a dozen finished proposals, ended with work priorities identified for the first months of 2025
- CCC2: Was a rushed attempt at gathering ecosystem growth ideas. In hindsight, it would have been better to form the Ecosystem Growth Council earlier instead of pursuing CCC2
- CCC3: Better calibrated expectations: the goal was organizational design, which led to a few councils being proposed and voted on in subsequent voting cycles
- Key Learning: Each CCC revealed a tendency to “bite off more than can be chewed”
This progression shows organizational learning, but the persistent challenge of over-ambition suggests a need for even more conservative scoping or longer timelines.
Delegate Engagement
The governance team personally DM’ed every delegate to ensure participation. This high-touch approach is intentional, not a scaling failure:
- Reflects deliberate culture-building in early DAO stages
- Acknowledges governance participation has natural ceilings
- Questions the assumption that governance should scale infinitely
We are planning to explore chatbots for Discord/Telegram to automate outreach while maintaining a personal touch.
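As a sketch of what that automation could look like on the Discord side, the snippet below uses the discord.py library to DM a personalized reminder to every member holding a delegate role. The guild ID, role name, and message text are hypothetical placeholders, and a real deployment would need opt-outs and rate limiting so the automation preserves the high-touch feel rather than undermining it.

```python
# Sketch: automated delegate outreach via Discord DMs (assumes discord.py).
# GUILD_ID, ROLE_NAME, and the message text are hypothetical placeholders.
import discord

GUILD_ID = 123456789    # placeholder server ID
ROLE_NAME = "Delegate"  # placeholder role marking delegates

intents = discord.Intents.default()
intents.members = True  # required to enumerate role members
client = discord.Client(intents=intents)

@client.event
async def on_ready():
    guild = client.get_guild(GUILD_ID)
    role = discord.utils.get(guild.roles, name=ROLE_NAME)
    for member in role.members:
        # Address each delegate by name to keep the personal touch
        await member.send(
            f"Hi {member.display_name}! This cycle's Harmonica session "
            f"is open until Friday. Here is your link: <...>"
        )
    await client.close()

client.run("BOT_TOKEN")  # placeholder token
```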
Improving Completion Rates
Several hypotheses for increasing session completion emerged:
UX Improvements
- Clearer time expectations with progress indicators (“question 2 of 5”)
- Separate pages (rather than one long chat) with a progress bar
- Natural exit points after each domain exploration
- Session resumption capability for interrupted participants
- Shorter overall session windows
- Mobile optimization (many delegates participate via phone)
Engagement Design
- Calendar integration with automated reminders
- Flexible stopping points that feel complete
- More varied interaction modes (not just text)
Response Quality Considerations
The team identified the need for quality metrics beyond completion rates (see the code sketch after this list):
Proposed Quality Framework
- Total responses vs. completed responses
- Quality distribution (brief/medium/detailed)
- Drop-off point analysis
- Clear criteria for “high-quality responses” (depth, specificity, actionability)
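A minimal sketch of how these metrics could be computed, assuming session transcripts are available as lists of per-question answers; the word-count thresholds for brief/medium/detailed are illustrative placeholders rather than agreed criteria.

```python
# Sketch: quality metrics beyond raw completion rate.
# Assumes each session is a list of answer strings, one per question reached.
from collections import Counter

TOTAL_QUESTIONS = 5  # hypothetical session length

def quality_bucket(answer: str) -> str:
    # Placeholder thresholds; "high-quality" criteria would need calibration
    words = len(answer.split())
    if words < 15:
        return "brief"
    if words < 60:
        return "medium"
    return "detailed"

def session_metrics(sessions):
    completed = sum(1 for s in sessions if len(s) == TOTAL_QUESTIONS)
    # Drop-off analysis: which question incomplete sessions stopped at
    drop_offs = Counter(len(s) for s in sessions if len(s) < TOTAL_QUESTIONS)
    # Quality distribution across all individual answers
    quality = Counter(quality_bucket(a) for s in sessions for a in s)
    return {
        "total_sessions": len(sessions),
        "completed_sessions": completed,
        "drop_off_by_question": dict(drop_offs),
        "quality_distribution": dict(quality),
    }

# Example: one completed session and two abandoned part-way through
print(session_metrics([
    ["short", "a longer, more developed answer with reasoning", "ok", "fine", "done"],
    ["brief answer", "another one"],
    ["only one answer before dropping off"],
]))
```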
Quality Enhancement Strategies
- Expertise-based matching (only invite people to domains they know well)
- Integration with Hats Protocol for role-based invitation
- More specific prompting with examples of desired depth
- Multimodal input including sliders, buttons, and voice
- Cross-pollination features (“what do you think of this popular opinion?”)
Incentives Design
Current Approach: Retroactive Rewards
- Participants compensated after participation, not promised upfront
- Avoids explicit “do X get Y” to prevent gaming
- Maintains intrinsic motivation focus
Cultural Signal Setting
- First round: Rewarded Negation Game participation
- Second round: Rewards for both Negation Game and Harmonica usage
- Message: “Use new tools, get rewarded” without explicit promises
- Creates “pleasant surprise” rather than transactional relationship
Upcoming Evolution A new “Governance Contribution Recognition” proposal is launching imminently, representing the first time delegates will see both retroactive rewards and a forward-looking incentive structure simultaneously.
Experimental Opportunities The governance team expressed openness to A/B testing explicit vs. surprise rewards, though noted the challenge of creating proper control groups in small governance communities.
Post-Process Evolution
Continuous Iteration Even after CCC3 concluded, decisions continued evolving:
- Alex Soto’s sociocratic org design proposal emerged independently
- The governance team reconsidered the accountability/ops council separation during proposal writing
- What seemed like “final” decisions in workshops proved to be starting points
This highlights that CCCs are idea generation and validation exercises, not conclusion points. The real work happens in the months following, with proposals serving as living documents rather than fixed outcomes.
Reversing the Diverge-Converge Flow
CCC1 followed a traditional workshop-to-digital flow: sync workshops for idea generation → Pol.is for sentiment analysis and convergence. This approach generated rich initial discussions but created challenges in translating workshop energy into digital engagement, with technical issues in Pol.is limiting follow-through on promising clustering patterns.
Our CCC3 design deliberately reversed this sequence: Harmonica sessions for divergent thinking → sync workshops for convergence and prioritization. This “async-first, sync-second” approach offered several advantages:
- Pre-populated workshops: Harmonica sessions generated categorized insights that could be directly imported into Miro boards, giving workshops a substantive starting point rather than blank canvases
- Individual reflection before group dynamics: Participants could develop and articulate their thinking privately before navigating group consensus processes
- Synthesis-ready outputs: AI-facilitated sessions produced structured, categorizable content that made workshop time more efficient for decision-making rather than idea capture
Both approaches have distinct strengths. The CCC1 model excels at building energy and shared understanding through real-time interaction, while CCC3’s model maximizes individual input and systematic organization. A future three-stage process could combine the best of both:
- Harmonica sessions: Individual divergent thinking and gap analysis
- Sync workshops: Collective prioritization and refinement with pre-populated content
- Pol.is integration: Broader community sentiment testing on refined proposals
This would address the challenge of “concept overlap” by using AI to pre-organize themes, while preserving the democratic validation that Pol.is enables at scale. The key insight is that process sequence matters—starting with individual sensemaking before group dynamics, then validating with broader community sentiment, may optimize both participation depth and democratic legitimacy.
Takeaways for Future Cycles
For Scroll
CCC4 Design Considerations:
- Timing: Avoid Monday workshops; mid-week provides better engagement
- Duration: Consider 4-week cycles to maintain momentum
- Preparation: Contract finalization 1 month prior, trial runs 2 weeks before
- Integration: Tighter coupling between Harmonica outputs and proposal templates
Recommendations for Future Workshops
- Pre-workshop Domain Scoping: Use async sessions to identify top 5 domains maximum
- Expertise-Based Assignment: Match delegates to 2-3 domains based on their background
- Progressive Prioritization: Build ranking exercises throughout, not just at the conclusion
- Reality Injection: Include budget/capacity constraints from the start
- Structured Deep Dives: Provide consistent frameworks for domain exploration
- Time Boxing with Flexibility: Firm limits with facilitator discretion for rich discussions
For Other DAOs
Adoption Framework:
- Start Small: Begin with a focused question on a single domain
- Test Internally: Run a pilot with the core team before delegate engagement
- Invest in Prompts: Quality system prompts are crucial; budget time for iteration
- Plan Synthesis: Allocate significant time for AI output review and workshop preparation
- Maintain Flexibility: Be ready to adjust the approach based on participant responses
Resource Requirements:
- Technical: 1 person for Harmonica management
- Facilitation: 2 people for synthesis and workshop design
- Time: 2-week minimum for meaningful deliberation
- Budget: Account for both platform costs and human facilitation
Conclusion
CCC3 represents a significant advance in participatory governance methodology. By combining Harmonica’s AI-facilitated deliberation with traditional workshop methods, Scroll DAO successfully navigated complex organizational design with unprecedented participation depth.
The cycle demonstrates that the attention economy challenges identified by governance researchers are not insurmountable. Through thoughtful application of AI augmentation, DAOs can achieve both inclusive participation and decision quality.
As noted by Eugene: “The goal isn’t to automate governance but to augment human deliberation. Harmonica helped us hear every voice while still converging on actionable decisions.”
This report contributes to the growing body of knowledge on DAO governance, offering both theoretical insights and practical tools. We invite other communities to build upon these learnings, adapting the approach to their unique contexts while contributing back to our collective understanding.
The future of DAO governance lies not in choosing between human or artificial intelligence, but in designing systems that amplify the strengths of both. CCC3 provides a foundation for this future, demonstrating that meaningful participatory governance at scale is not just possible but practical.
As Scroll DAO moves forward with implementation and other DAOs experiment with these methods, we anticipate continued evolution of AI-facilitated governance. The question is no longer whether AI should play a role in governance, but how to design that role to enhance rather than replace human agency and collective intelligence.
This report is published as part of Scroll DAO’s commitment to transparent governance and contribution to the broader DAO ecosystem. All session data, synthesis reports, and workshop materials are available on request.
For questions about implementing similar processes, feel free to contact the Harmonica team at hello@harmonica.chat