RFC: Regenesis Grants Program — Request for Comments

Managed by Regenesis Labs • Draft v1.0

Hello everyone,

DCL Regenesis Labs is preparing to kickstart a new season-based Grants Program for the Decentraland builder community. Before we launch, we want to share the proposed structure publicly and gather feedback to make sure the program is clear, fair, and impact-driven.

This post is an RFC (Request for Comments): nothing here is final yet. Please provide your comments before 2026-02-24T23:59:00Z.


Why kickstart a Grants Program now?

Decentraland needs a steady supply of high-quality experiences, interactive content, and ecosystem improvements that can be shipped and adopted quickly.

Since the previous grants program was paused in February 2024 and later formally deprecated in October 2024, builders have had no consistent way to request structured financial support from the DAO treasury. In the meantime, the Decentraland Foundation has been funding and coordinating some of this work, but relying on Foundation resources for initiatives the DAO treasury is explicitly meant to support is not an ideal long-term model.

We’re proposing to relaunch a grants program now because:

  • Retention needs to improve. We want to increase engagement by enabling more interactive content for users, and useful ecosystem tooling for developers and creators to build that content.

  • Mobile changes the bar. New content and experiences should be designed with mobile use cases in mind. Most of the existing Decentraland ecosystem wasn’t built for mobile, so we need to accelerate the creation of mobile-friendly experiences (that still work well on desktop).

  • We need a tighter delivery model. This program is designed to fix three issues from the previous program:

    1. Unclear strategic direction
    2. Inconsistent budget comparability across proposals
    3. Over-focus on outputs instead of outcomes
  • We want to test a smaller “micro-grant” approach. Fund smaller projects that can evolve, and if something performs well, Regenesis Labs can choose to incubate it further.


What is Regenesis Grants?

A season-based program run by Regenesis Labs to fund small, open-source projects that ship quickly and improve the Decentraland ecosystem. Regenesis Labs requested 3,365,387 MANA in its 2025/2026 operational budget to fund this initiative.

Seasons

  • Target: 3 seasons per year
  • 2026 will run 2 seasons (first-year operational reality)

Categories

1) Content Development

We want to fund interactive content that increases engagement and retention, especially experiences that are:

  • Social-first (better with others), but still enjoyable solo
  • Built around short-session fun (easy to jump in, replayable loops)
  • Designed with mobile use cases in mind for this first iteration (controls, UX, performance, readability)

Examples of what fits

  • Mini-games and game loops (skill-based, co-op, party games, casual competitive)
  • Interactive worlds with clear goals, progression, or repeatable activities
  • Social mechanics layered onto gameplay (matchmaking, drop-in multiplayer, party-friendly design)
  • Content that is optimized for mobile while still delivering a great desktop experience

Migration / porting of existing games and experiences

In addition to net-new work, we also want to fund migration/porting of strong experiences that already exist elsewhere.

We know switching platforms is a real investment of time and money. If you have a proven, high-quality game or interactive experience built on another UGC platform, we’re interested in funding the work required to bring it to Decentraland.

2) Tech Ecosystem

We want to fund tools and protocol-adjacent work that improve the ecosystem for creators, developers, and the broader Decentraland community. This track can be broader than mobile, as long as it clearly benefits the ecosystem.

Examples of what fits

  • Creator tooling (pipelines, debugging, performance tooling, content optimization)
  • New applications of the protocol (smart contracts, NFTs applications, MANA-related use cases)
  • Reusable modules/libraries (developer building blocks)
  • AI-related use cases that are practical (e.g., moderation helpers, creation workflows, agent frameworks that integrate into the experience)

Events are not excluded, but they are not a priority for this program. Both the Foundation and Regenesis Labs already have ongoing event strategies, so the preference here is interactive content and tooling.


What gets funded?

Projects must be:

  • Shippable within 90 days
  • Functional and publicly available
  • Open source
  • High quality (production-ready or clearly able to reach that bar within the season)

Max grant size: up to the MANA equivalent of $15,000 USD per project (payments will be made in MANA)
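Since grants are capped in USD but paid in MANA, a conversion step is implied. A minimal sketch of that arithmetic, assuming a spot rate from whatever price source the program ends up using (the function and parameter names are illustrative, not part of the RFC):

```python
def mana_payout(usd_amount: float, mana_usd_rate: float, cap_usd: float = 15_000.0) -> float:
    """Convert a grant's USD budget to a MANA amount at a given MANA/USD rate.

    Hypothetical helper: the RFC does not specify a rate source or a
    conversion time; here we simply enforce the per-project cap, then divide.
    """
    if usd_amount > cap_usd:
        raise ValueError(f"grant exceeds the ${cap_usd:,.0f} per-project cap")
    return usd_amount / mana_usd_rate
```

For example, a $15,000 grant at a rate of $0.50 per MANA would pay out 30,000 MANA.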

No guaranteed number of grantees: if quality is insufficient, we may fund zero projects in a season.


Non-Binding Community Signaling

All submissions will be public after the initial triage.

Community members can express non-binding interest via positive signal only.

  • This is not a DAO vote
  • It does not decide outcomes
  • It helps reviewers understand community curiosity and potential demand

How selection works (proposed)

Submissions are reviewed by domain experts (“Domain Allocators”):

  • 2 allocators for Content
  • 2 allocators for Tech/Protocol

Reviewers can ask questions and request refinements during the review window.

Final selections are made by the Lead Allocator / Program Manager (role to be hired by Regenesis Labs).

We’re aiming for allocator anonymity (via stable pseudonyms) to reduce lobbying, while maintaining accountability internally.


Delivery, payments, and accountability

  • Milestone-based payments (milestones defined by allocators with each grantee)
  • Public check-ins:
    • Day 0 kickoff
    • ~Day 45 midpoint update
    • ≤ Day 90 final delivery + metrics
  • Kill switch: if a grantee fails two consecutive milestones, funding is terminated.
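The milestone rules above (milestone-based payments plus a kill switch after two consecutive failed milestones) can be sketched as a tiny state tracker. All names here are illustrative, not actual program tooling:

```python
from dataclasses import dataclass, field

@dataclass
class GrantTracker:
    """Illustrative milestone tracker for one grantee (not real program code)."""
    results: list = field(default_factory=list)  # True = milestone passed

    def record(self, passed: bool) -> str:
        """Record a milestone outcome and return the resulting action."""
        self.results.append(passed)
        # Kill switch: two consecutive failed milestones terminate funding.
        if len(self.results) >= 2 and not self.results[-1] and not self.results[-2]:
            return "terminated"
        return "pay_milestone" if passed else "withhold"
```

Note that a grantee who fails two milestones in a row is terminated even if an earlier milestone succeeded; a single failure only withholds that milestone's payment.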

Proposed flow (end-to-end)

  1. Submission via category-specific form (problem, deliverables, timeline, budget, team info, open-source plan, and success metrics)
  2. Triage (basic quality and relevance check before publishing) → not the real evaluation but a lightweight filter to keep the public list readable and credible
  3. Public listing (Governance dApp dashboard + Forum thread + community signal)
  4. Review & iteration with allocator feedback → improve quality and feasibility within a fixed calendar
  5. Decision (fund or not)
  6. Outcomes published + status tracking in Governance dApp
  7. Delivery phase (≤ 90 days)
  8. Final reporting + allocator assessment → feeds into Season Report

Tentative timeline for Season 1

  • Submissions open: mid-March 2026

  • Decisions announced: ~1 month after submissions open/close

  • Build period: up to 90 days from kickoff

    (Exact dates will be defined once this RFC closes.)


Success criteria (Season 1)

We want to be explicit about what “good” looks like:

  • ≥ 75% of funded projects ship
  • At least 1 project is later incubated by Regenesis Labs (RGL-only decision)
  • A public Season Report is published with outcomes, metrics, spend, and learnings

Anticipated question

“Isn’t this centralized / why no DAO vote?”

Yes, there is a centralized decision point in this proposal, and it’s intentional for now. The goal of Season 1 is to ship high-quality outcomes quickly with clear accountability.

To balance this, we’re designing the program to be high-transparency and outcome-driven:

  • All submissions are public from day 1
  • Community has an interest signal method
  • Payments are milestone-based, with a clear kill switch
  • We publish a Season Report every season

On allocator anonymity: it’s an idea we would like to test as a trade-off to reduce lobbying and social pressure. Allocators would use stable pseudonyms publicly, while Regenesis Labs enforces strict conflict-of-interest rules internally.


Feedback requested

Please provide your feedback before 2026-02-24T23:59:00Z

We’d love community feedback on the following:

A) Program structure

  • Is the program easy to understand? Anything confusing or underspecified?
  • Any concerns about selection, conflicts of interest, transparency, or reviewer structure?
  • Is the Mobile-first requirement realistic and well-scoped? What would you change?
  • Is $15k / 90 days a good constraint for high-quality results?
  • Does the community signaling mechanic make sense as a non-binding signal? What would improve it?
  • Using a standard submission form + tracking via Governance dApp dashboard + discussion on the Forum: does that feel sufficient for Season 1?

B) Domain Allocators (we want your help shaping this)

We want specific community feedback on the Domain Allocator model, especially selection and methodology.

  • What would you consider a fair and credible way to select Domain Allocators? (e.g., open call + screening, nominations, rotating pool)
  • What experience should we prioritize for each track? (e.g., shipped content, community credibility)
  • Do you prefer allocators to be publicly named, or pseudonymous to reduce lobbying? Why?
  • What criteria should allocators use to evaluate proposals? (e.g., impact, feasibility, quality bar)

How allocators should interact with applicants:

  • What’s the right balance between “filtering” vs “helping applicants iterate”?
  • What would you want to see in terms of feedback quality and transparency?

Conflict-of-interest rules

  • What COI rules do you expect?
  • Are there additional constraints you’d want in order to ensure the process is fair?

C) What are we missing?

  • What do you think could go wrong, and how should we design against it?

How to comment

Please reply in this thread with your feedback before 2026-02-24T23:59:00Z.

Thanks a lot! Your input will shape the first iteration of Regenesis Grants.

4 Likes

I honestly have no idea what is good or bad regarding the new grant strategy, but it seems good. Worth a shot.

7 Likes

Hi everyone,

I worked on DCL projects like Golfcraft and recently Rage Parkour and The Photography Gallery, among others. I also have a decade of experience making mobile games for Play365 and some years developing Blender add-ons like Playblast.

I understand the “Mobile-First” optimization challenges, and I have some proposals:

1. Education & Training:
To achieve “Mobile-First” quality, creators need to learn optimization (draw calls, atlases, strict polycounts, profiler tools).

Proposal: Can the “Tech Ecosystem” category fund a team dedicated to technical training and documentation? This is vital to help creators make good content for mobile. Currently I’m creating tutorials (e.g. 1, 2) for the community about emotes, wearables, etc.


2. Templates & Assets:
I collaborated on the “Streamers Template” for Creators Hub and I believe we shouldn’t reinvent the wheel on every scene.

Proposal: Can we fund optimized, first-class asset packs (Game Kits) or code libraries? A dedicated team could maintain these public resources. This would help the community build faster and better for mobile.


3. Workflow Automation (Blender/Python):
I have proposals to automate the creation of Emotes, Wearables, and Buildings using Python scripting in Blender, lowering the barrier to entry for new creators. I will elaborate on this proposal in another document later.

Proposal: Are automation tools eligible? This would speed up the workflow for all creators, benefiting the entire community.

Thanks for this initiative! Looking forward to the next steps.

6 Likes

I would like to start by saying that I am very impressed by this grant proposal as it does appear to address some immediate concerns fueled by our prior grants program.

While I support fostering the growth and empowerment of users, I think it also important to stress the significance of:

Properly valuing future incoming project funding requests given the emerging technologies in this space.

Proper vetting of team capabilities on incoming project funding requests.

What constitutes a proven, high-quality game or interactive experience built on another UGC platform?

This is exciting! A new grants program, set up with enough transparency and documentation, would be a wonderful thing for DCL’s growth. <3

3 Likes

Quick heads up:

We’ve extended the response deadline to 2026-02-28T23:59:00Z
Thanks to everyone who has already shared their feedback!

2 Likes

Sounds good to me; I have some good content I could apply for.

2 Likes

Hello, I am very excited by this idea and by the fact that a consultation like this exists.

The basic proposal is already very well established.

My personal thoughts:

A)

Yes the program is easy to understand.

Mobile focus is relevant.

The amount limit is also correct.

Submission form + Gov dApp dashboard + Forum seems sufficient.

B)

On fairness: open call, with rotating random nomination (a draw) from the pool.

On what experience to prioritize: a balance between shipped content and community credibility (community credibility should not sound like lobbying, but it’s a core aspect of DCL, so it’s very important to consider).

Allocators should be pseudonymous, combined with random nomination from the pool of “allocators”; this minimizes the possibility and temptation of corruption. The random system should also never select an allocator who is involved in a proposal, directly or not.
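The random-draw idea above can be sketched in a few lines, assuming a pre-vetted pool and a set of allocators who declared a conflict of interest for a given proposal (all names and data shapes are hypothetical):

```python
import random

def draw_allocator(pool, conflicted, seed=None):
    """Draw one allocator at random from the pool, skipping anyone with a
    declared conflict of interest for this proposal. Illustrative only:
    a real process would also handle pool rotation and auditability."""
    eligible = [a for a in pool if a not in conflicted]
    if not eligible:
        raise ValueError("no conflict-free allocator available")
    return random.Random(seed).choice(eligible)
```

A fixed `seed` makes the draw reproducible, which could matter if the program wants the selection to be auditable after the fact.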

The criteria mentioned are good to me.

Allocators should act as guides in the process: helping with resources, links, and docs, and clarifying things, but they should not do the job; the proposal owner should. I don’t know, maybe I don’t understand the question.

C)

Sometimes delivered products are far from what is expected, so my thought is that maybe we need collateral. I’m familiar with another DAO, and its grants program is very interesting but hard to keep healthy. Even for “small” amounts, we need to be cautious about what we’re really paying for.

1 Like

Hi Seb, thanks for your comment.
I like the idea of having a pool of allocators and drawing one from the pool to assess projects. My only concern with that approach is how many capable people we will get. Maybe we can start small in the first season, and if everything goes fine, we can iterate on this mechanism. I really like this, thanks!

Could you tell me more about the collateralization aspect you’re thinking of? Another mechanism would be retroactive funding, but I feel that greatly restricts the type of applicants we can get (basically people who can work for free and accept the risk of not getting any compensation).

2 Likes

Thanks for the feedback @emm_DCL
About your question

I would say: A game or experience that is fun, high quality, and has proven, healthy engagement metrics (% of returning users, retention D1/D7, MAU, basic interactive content metrics)
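For concreteness, the D1/D7 retention mentioned above can be computed roughly like this; a real analytics pipeline would bucket users by cohort date, and the data shapes here are assumptions:

```python
from datetime import date, timedelta

def retention(first_seen, sessions, day):
    """Fraction of users active again exactly `day` days after their first
    session. first_seen: user -> first-session date; sessions: user -> set
    of active dates. Simplified sketch, not a production metric."""
    cohort = list(first_seen)
    if not cohort:
        return 0.0
    returned = sum(
        1 for u in cohort
        if first_seen[u] + timedelta(days=day) in sessions[u]
    )
    return returned / len(cohort)
```

Here D1 retention is the `day=1` call and D7 the `day=7` call; MAU would instead count distinct users with at least one session date in a calendar month.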

1 Like

Hi @carlosmu thanks a ton for this! I have some thoughts:

We are thinking of some enablement activities for when the v1 of the mobile app launches, with workshops, documentation updates, and Claude skills ready for people to use and develop mobile-optimized scenes.

I believe the Foundation is funding a bunch of those templates, so I would try to avoid double-spending here.

ABSOLUTELY! Anything helping creators build scenes or content could end up in the Tech Ecosystem category.

2 Likes

Ciao @ginoct
If I understand correctly:

  • Mid-March will see the possibility of submitting an application with relevant project information.
  • Mid-April will see me notified whether or not I’m admitted to the program.
  • The project must be completed within 90 days of kickoff.

FEEDBACK REQUEST
A) PROGRAM STRUCTURE:

  1. Does the kickoff day coincide with the announcement of project admission, scheduled for around April 15th?
    Do the 90 days to complete the project start on April 15th, or is there a window of time between project admission and kickoff day?

  2. KYC identification of project applicants is obviously a given.
    Is KYC identification of internal and/or external professional partners involved and associated with the project also expected?

  3. Will there be reporting with proof of payments to ensure traceability of the funds used? If so, is it enough to simply attach a TX payment link, or is there a more in-depth procedure where I can attach invoices, bills, contracts, or any other document with proven validity?

Neither the KYC identification of the partners involved nor the thorough traceability of the funds is a problem, but knowing this would help me identify the right professional partners I can rely on who comply with the guidelines and outline the product concept as far in advance as possible.

B) DOMAIN ALLOCATORS:
4) I consider the “open call + screening” method a fair and credible way to select domain allocators because it is inclusive and transparent with anti-capture safeguards.

  5. The experience we should prioritize for each track is shipped content (concrete results produced), because the results produced reflect an effective and tangible commitment over time.

  6. I would prefer allocators be publicly named, because this way, potential lobbies would be immediately identified.

  7. The criteria that allocators should use to evaluate proposals should be adapted and calibrated based on the specifics of the project itself.
    You can’t judge a fish by its ability to climb a tree.

  8. The right balance between “filtering” vs. “helping applicants iterate” should lean toward “helping applicants iterate,” because sometimes the inexperience of applicants and the rigidity of the procedure preclude the consideration of valid but poorly presented projects.

Conflict-of-interest rules
9) I clearly understand that the finished project will be Open Source, but it only becomes so once it has been accepted, developed, and finished.
Will measures be implemented to ensure ownership and intellectual property of the project submitted and rejected?

C) What are we missing? Is a category of centralized collateral funds also expected, for projects that are not entirely tech/protocol-adjacent but would bring tangible benefit to the entire Decentraland ecosystem (I don’t mean events)?

Thank you for asking our opinions.

1 Like

I’m a little late, but here’s my 2c:

  • The $15k/90-day structure feels much more reasonable than other funding models

  • The timeline seems reasonable - fast enough that projects shouldn’t stall/lose momentum, but not rushed.

  • Milestone-based payouts are ideal: it’s much more sustainable for indie devs to get partial payments during development than waiting for a lump sum at the end.

  • I do not feel that the Domain Allocators should be anonymous - in practice that risks quiet favoritism; friends will still know who they are, and transparency feels much healthier for the ecosystem.

  • I’d also prefer a rotating pool of allocators. Rotation would help ensure fairness over time, prevent any single individual from continually shaping the direction of funding, and reduce the risk of similar ideas being repeatedly rejected based on one person’s preferences.

  • I’m happy to see external tooling as eligible as I, like others, have built Blender plugins and pipeline tools which could be improved and shared.

  • With the upcoming mobile client and new backend features announced at the all-hands, this feels like a great moment to restart the grants system. I’m looking forward to seeing what gets built!

2 Likes

Hi everyone,

Everything here sounds good.
I’m excited about the Regenesis Grants Program.

The program is easy to understand.

As a non-mobile player, I personally prefer the term and concept of “mobile-friendly” rather than “mobile-first.”

I believe it is really important that the allocators prioritize feasibility above all.

Community builders have encountered performance issues with the multiplayer system. Beyond a small number of avatars present simultaneously in a scene, the current serverless multiplayer system no longer functions as expected.

It might therefore be worthwhile to conduct a technical audit of what is actually really feasible in Decentraland before deciding to move forward with a project.

During the last Community All Hands event, we saw that major features are currently under development, such as server-side logic.

How will the projects produced through the grants be maintained over time to address deprecated elements, dependency updates, and integration with new features?
Will there be some kind of guarantee that granted builders won’t abandon their projects for a defined period after delivery?

Thanks for keeping things moving forward.

2 Likes

Appreciate the transparency and structure here — this is a strong step toward consistent builder support.

I’d like to share perspective from running live, recurring experiences inside Decentraland such as Crypt0M1notaur Transmissions and AFTER-HOURS.

These are not one-off events — they function as behavioral loops:
users return weekly, bring others, and build familiarity with the environment over time. That repeat engagement is where retention actually compounds.

A few observations that may strengthen the program:

1. Events as Retention Infrastructure (not just “events”)
While events are listed as non-priority, recurring live experiences can act as:

  • onboarding funnels

  • social anchors

  • testing grounds for new mechanics

In practice, they are often the entry point into interactive content.

2. Hybrid Category Opportunity (Content × Events × Systems)
There’s a gap between:

  • pure “events”

  • and fully built “games/tools”

Live formats like Transmissions and AFTER-HOURS sit in between:

  • interactive

  • social-first

  • repeatable

  • measurable (attendance, retention, participation loops)

Consider a sub-category or flag for:
→ “Recurring Interactive Experiences”

3. Measurable Outcomes Already Exist
Live events naturally produce:

  • DAU spikes during sessions

  • repeat attendance patterns

  • social coordination behaviors (groups, invites, shared presence)

These align directly with the program’s shift from outputs → outcomes.

4. Mobile-Ready Social Design
Short-session, drop-in experiences already map well to mobile behavior:

  • quick join

  • immediate feedback (music, visuals, people)

  • low learning curve

This could be a fast win category for Season 1.

5. Micro-Grant Fit
Smaller grants could:

  • upgrade existing live experiences with new mechanics

  • add lightweight game loops

  • integrate tooling (AI agents, rewards, matchmaking, etc.)

This allows proven formats to evolve instead of starting from zero.


Suggestion:
Include a pathway where existing, recurring experiences can apply — especially those that:

  • demonstrate repeat engagement

  • can layer in interactivity

  • can ship upgrades within the 90-day window


2 Likes

Sorry for the late reply, ginoct:

For the pool of allocators, you’re right, the first step is to find a few people with the ability to evaluate proposals. Also, yes, it’s probably better to start small. I think allocators should be anonymous from the beginning of the process, and they should not tell anyone that they are allocators, even if they are not serving as allocators for a given season.

About collateralization, I was thinking about assets as collateral. People cannot work for free or wait months for payment, but they need to deliver, and if they don’t, the money is just lost; I’ve seen too many “fake” delivered projects. As we’re in web3, I guess all people own assets. It could be anything with value.