[DAO:7eff936] Vtubing app with Decentraland Avatars

Hi there, I love the concept; I currently VTube as my DCL avatar by exporting it to .vrm format and using VSeeFace and other open-source clients.

While I think this would be useful, I would be much more interested in something that takes the current avatar endpoint (.glb files) and converts it to a .vrm file that can be used in the many existing open-source facial-tracking and VR clients. Your team seems to have the skills and capabilities to achieve this, including a full mapping of mouth/eye/eyelid shape keys for the default avatar faces. It would also be great to add a module so that wearable creators or the community can provide shape keys alongside their .glb files, since there is a large number of both default and custom eyes/mouths/faces available (some face replacements are masks or helmets, so community artists need a way to upload and map the shape keys they create for their .glbs).
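To make the shape-key idea concrete, here is a rough sketch (in TypeScript, since that is what the DCL SDK uses) of the kind of manifest a wearable creator might supply alongside a .glb. The Decentraland shape-key names, the manifest format, and the helper function are all my own assumptions for illustration, not anything that exists today; the preset names roughly follow the VRM 0.x blendshape presets.

```typescript
// Hypothetical manifest a wearable creator could ship next to their .glb so a
// converter knows which morph targets (shape keys) drive the standard VRM presets.

// A subset of VRM 0.x blendshape preset names (visemes and blinks).
type VrmPreset = "a" | "i" | "u" | "e" | "o" | "blink" | "blink_l" | "blink_r";

// Maps a VRM preset to the morph-target name found on the custom face mesh.
type ShapeKeyManifest = Partial<Record<VrmPreset, string>>;

// Example manifest for a custom face wearable (shape-key names are illustrative).
const exampleManifest: ShapeKeyManifest = {
  a: "mouth_open",          // jaw-open shape key on the custom mesh
  blink_l: "eyelid_close_L",
  blink_r: "eyelid_close_R",
};

// A converter could walk the glTF morph targets and emit VRM blendshape groups
// from this manifest when building the exported .vrm.
function toVrmBlendShapeGroups(manifest: ShapeKeyManifest) {
  return Object.entries(manifest).map(([preset, shapeKey]) => ({
    presetName: preset,
    binds: [{ morphTargetName: shapeKey, weight: 100 }],
  }));
}
```

Something this simple would already let masks, helmets, and other face replacements declare their own shape keys instead of relying only on the default avatar faces.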

The idea of NFT backgrounds sounds limiting and frustrating. VSeeFace already supports a transparent background, so users can composite any image they like; if someone wants to purchase a background NFT and use that image they can, but requiring NFTs would limit creative freedom and interoperability.

If the proposal were changed to provide an open-source library or website that accomplishes the above and exports a usable .vrm file from the current avatar, I would gladly vote yes, as I think this would be incredibly valuable to the community. Given the technical complexity, a proof of concept would be required, but with that requirement in place I think a higher grant amount would be understandable.

You can read some of the community's current thoughts on this topic in a proposal that passed overwhelmingly: [DAO:qaudhgm] Enable .vrm support for Decentraland models
