RFC: Regenesis Grants - Request for Comments
Managed by Regenesis Labs • Draft v1.0
Hello everyone,
DCL Regenesis Labs is preparing to kickstart a new season-based Grants Program for the Decentraland builder community. Before we launch, we want to share the proposed structure publicly and gather feedback to make sure the program is clear, fair, and impact-driven.
This post is an RFC (Request for Comments): nothing here is final yet. Please provide your comments before 2026-02-24T23:59:00Z.
Why kickstart a Grants Program now?
Decentraland needs a steady supply of high-quality experiences, interactive content, and ecosystem improvements that can be shipped and adopted quickly.
Since the previous grants program was paused in February 2024 and later formally deprecated in October 2024, builders have had no consistent way to request structured financial support from the DAO treasury. In the meantime, Decentraland Foundation has been funding and coordinating some of this work, but relying on Foundation resources for initiatives that the DAO treasury is explicitly meant to support is not an ideal long-term model.
We’re proposing to relaunch a grants program now because:
- Retention needs to improve. We want to increase engagement by enabling more interactive content for users, and useful ecosystem tooling for developers and creators to build that content.
- Mobile changes the bar. New content and experiences should be designed with mobile use cases in mind. Most of the existing Decentraland ecosystem wasn’t built for mobile, so we need to accelerate the creation of mobile-friendly experiences (that still work well on desktop).
- We need a tighter delivery model. This program is designed to fix three issues from the previous program:
  - Unclear strategic direction
  - Inconsistent budget comparability across proposals
  - Over-focus on outputs instead of outcomes
- We want to test a smaller “micro-grant” approach. Fund smaller projects that can evolve, and if something performs well, Regenesis Labs can choose to incubate it further.
What is Regenesis Grants?
A season-based program run by Regenesis Labs to fund small, open-source projects that ship quickly and improve the Decentraland ecosystem. Regenesis Labs requested 3,365,387 MANA in its 2025/2026 operational budget to fund this initiative.
Seasons
- Target: 3 seasons per year
- 2026 will run 2 seasons (first-year operational reality)
Categories
1) Content Development
We want to fund interactive content that increases engagement and retention, especially experiences that are:
- Social-first (better with others), but still enjoyable solo
- Built around short-session fun (easy to jump in, replayable loops)
- Designed with mobile use cases in mind for this first iteration (controls, UX, performance, readability)
Examples of what fits
- Mini-games and game loops (skill-based, co-op, party games, casual competitive)
- Interactive worlds with clear goals, progression, or repeatable activities
- Social mechanics layered onto gameplay (matchmaking, drop-in multiplayer, party-friendly design)
- Content that is optimized for mobile while still delivering a great desktop experience
Migration / porting of existing games and experiences
In addition to net-new work, we also want to fund migration/porting of strong experiences that already exist elsewhere.
We know switching platforms is a real investment of time and money. If you have a proven, high-quality game or interactive experience built on another UGC platform, we’re interested in funding the work required to bring it to Decentraland.
2) Tech Ecosystem
We want to fund tools and protocol-adjacent work that improve life for creators, developers, and the broader Decentraland ecosystem. This track can be broader than mobile, as long as the work clearly benefits the ecosystem.
Examples of what fits
- Creator tooling (pipelines, debugging, performance tooling, content optimization)
- New applications of the protocol (smart contracts, NFT applications, MANA-related use cases)
- Reusable modules/libraries (developer building blocks)
- AI-related use cases that are practical (e.g., moderation helpers, creation workflows, agent frameworks that integrate into the experience)
Events are not excluded, but they are not a priority for this program. Both the Foundation and Regenesis Labs already have ongoing event strategies, so the preference here is for interactive content and tooling.
What gets funded?
Projects must be:
- Shippable within 90 days
- Functional and publicly available
- Open source
- High quality (production-ready or clearly able to reach that bar within the season)
Max grant size: the MANA equivalent of up to $15,000 USD per project (payments will be made in MANA)
No guaranteed number of grantees: if quality is insufficient, we may fund zero projects in a season.
Non-Binding Community Signaling
All submissions will be public after the initial triage.
Community members can express non-binding interest via a positive signal only.
- This is not a DAO vote
- It does not decide outcomes
- It helps reviewers understand community curiosity and potential demand
How selection works (proposed)
Submissions are reviewed by domain experts (“Domain Allocators”):
- 2 allocators for Content
- 2 allocators for Tech/Protocol
Reviewers can ask questions and request refinements during the review window.
Final selections are made by the Lead Allocator / Program Manager (role to be hired by Regenesis Labs).
We’re aiming for allocator anonymity (via stable pseudonyms) to reduce lobbying, while maintaining accountability internally.
Delivery, payments, and accountability
- Milestone-based payments (milestones defined by allocators with each grantee)
- Public check-ins:
  - Day 0 kickoff
  - ~Day 45 midpoint update
  - ≤ Day 90 final delivery + metrics
- Kill switch: if a grantee fails two consecutive milestones, funding is terminated.
Proposed flow (end-to-end)
- Submission via category-specific form (problem, deliverables, timeline, budget, team info, open-source plan, and success metrics)
- Triage (basic quality and relevance check before publishing) → not the real evaluation but a lightweight filter to keep the public list readable and credible
- Public listing (Governance dApp dashboard + Forum thread + community signal)
- Review & iteration with allocator feedback → improve quality and feasibility within a fixed calendar
- Decision (fund or not)
- Outcomes published + status tracking in Governance dApp
- Delivery phase (≤ 90 days)
- Final reporting + allocator assessment → feeds into Season Report
Tentative timeline for Season 1
- Submissions open: mid-March 2026
- Decisions announced: ~1 month after submissions open/close
- Build period: up to 90 days from kickoff
(Exact dates will be defined once this RFC closes.)
Success criteria (Season 1)
We want to be explicit about what “good” looks like:
- ≥ 75% of funded projects ship
- At least 1 project is later incubated by Regenesis Labs (RGL-only decision)
- A public Season Report is published with outcomes, metrics, spend, and learnings
Anticipated question
“Isn’t this centralized / why no DAO vote?”
Yes, there is a centralized decision point in this proposal, and it’s intentional for now. The goal of Season 1 is to ship high-quality outcomes quickly with clear accountability.
To balance this, we’re designing the program to be high-transparency and outcome-driven:
- All submissions are public after a lightweight initial triage
- Community has an interest signal method
- Payments are milestone-based, with a clear kill switch
- We publish a Season Report every season
On allocator anonymity: it’s an idea we would like to test as a trade-off to reduce lobbying and social pressure. Allocators would use stable pseudonyms publicly, while Regenesis Labs enforces strict conflict-of-interest rules internally.
Feedback requested
Please provide your feedback before 2026-02-24T23:59:00Z
We’d love community feedback on the following:
A) Program structure
- Is the program easy to understand? Anything confusing or underspecified?
- Any concerns about selection, conflicts of interest, transparency, or reviewer structure?
- Is the mobile-first requirement realistic and well-scoped? What would you change?
- Is $15k / 90 days a good constraint for high-quality results?
- Does the community signaling mechanic make sense as a non-binding signal? What would improve it?
- Using a standard submission form + tracking via Governance dApp dashboard + discussion on the Forum: does that feel sufficient for Season 1?
B) Domain Allocators (we want your help shaping this)
We want specific community feedback on the Domain Allocator model, especially selection and methodology.
- What would you consider a fair and credible way to select Domain Allocators? (e.g., open call + screening, nominations, rotating pool)
- What experience should we prioritize for each track? (e.g., shipped content, community credibility)
- Do you prefer allocators to be publicly named, or pseudonymous to reduce lobbying? Why?
- What criteria should allocators use to evaluate proposals? (e.g., impact, feasibility, quality bar)
How allocators should interact with applicants:
- What’s the right balance between “filtering” vs “helping applicants iterate”?
- What would you want to see in terms of feedback quality and transparency?
Conflict-of-interest rules
- What COI rules do you expect?
- Are there additional constraints you’d want in place to ensure the process is fair?
C) What are we missing?
- What do you think could go wrong, and how should we design against it?
How to comment
Please reply in this thread with your feedback before 2026-02-24T23:59:00Z.
Thanks a lot! Your input will shape the first iteration of Regenesis Grants.