[DAO:7eff936] Vtubing app with Decentraland Avatars

by 0x7e22d9514a99793e80d4fb5e3a9dfd4931f47c4c (TonyDarko#7c4c)

Should the following Tier 4: up to $60,000 USD, 6 months vesting (1 month cliff) grant in the Platform Contributor category be approved?

Abstract

A vtubing application with multiple uses, expanding digital identity even outside of Decentraland. This web3 vtubing application will let users maintain a digital identity based on their Decentraland avatar, expand customization options, and give the community the opportunity to design, mint, and sell these assets, generating new interactions in the marketplace.

Grant size

57,000 USD

Beneficiary address

0x7e22d9514a99793E80D4FB5E3A9Dfd4931F47c4c

Email address

tonydarko.venta@gmail.com

Description

Vtubing has gained a lot of popularity in recent years. Our goal is to provide an easy-to-use application, capable of tracking facial expressions as well as head and arm movement, that lets the community use a Decentraland avatar in meetings, recorded content, streams, etc. The app will also allow aesthetic changes to the avatar. In addition, by integrating web3, we propose the possibility of minting backgrounds in the application, thereby generating a new category of NFTs called “Decentraland Backgrounds.” As with wearables, these NFTs will have rarity categories; anyone interested in creating backgrounds would pay a verification fee, and the Decentraland committee would verify that the dimensions, content, format, and file size of the images are appropriate. The verification fee would be defined by the DAO.
These backgrounds give the community the option to create and sell these assets, just like emotes or wearables, adding more transactions and uses to the marketplace.

Specification

  • The user interface will allow you to change aspects such as skin tone, clothing, and hair, just as you can in the Decentraland interface. These customizations will not be NFTs; they will be built into the application itself and will not be the same assets as Decentraland's, although they will be inspired by them.

  • Full screen & green screen: this option is very versatile and has many uses.

  • The backgrounds, being NFTs, will have the same rarity categories as the other NFTs in the community. This opens the possibility of expressing your digital identity with a Decentraland avatar not only within Decentraland but in other communities as well, just as wearables do.

  • As with wearables, community members interested in creating backgrounds would pay a verification fee, and the Decentraland committee would verify that the dimensions, content, format, and file size of the images are appropriate. The verification fee would be defined by the DAO, and the sale price for each background by its creator.

  • This same principle can be applied to future features within the application. For now, we would lay the foundations of a highly useful tool, not only for the Decentraland community but for the web3 community in general, attracting more users to Decentraland.

  • In our vision for the app, we would like to apply the same NFT concept to VRM avatars created by the community, plus a series of extras to be added in the future; however, it is important to develop the foundations correctly, with the time that requires.

Full document: ADDON.pdf - Google Drive

Personnel

Meet Tony Darko (Project Manager, 3D Designer): graphic designer with over 4 years of experience in Blender; creator of Darko Studio; designer for “Calaverse,” a festival in DCL; verified partner for the DAO proposal; selected for MVFW 2022 and DCL Film Club 2022; part of the Imagine to Create project (DCL gallery/launcher). My 3D work has been exhibited at theaterLab in New York (2021) and at the Luminaria Contemporary Arts Festival in Texas.

Meet Jhoico (game developer/programmer): 15 years of experience in software, website, app, and video game development. I have spent around 12 years creating web content and browser-based video games, with knowledge of Flash, HTML, JavaScript (TypeScript), PHP, CSS, Flex, and Xcode. Currently, I am creating video games with the Unity game engine and working as a developer at Darko Studio.

Roadmap and milestones

To make the app usable with any camera, the best approach is to use MediaPipe and OpenCV, that is, Python. The 3D models compatible with these libraries must work over the OSC protocol, which is why we decided to use the VRM format in a Unity build. This way, we control the movement of the avatar's head and arms in real time. For the facial expressions, the models must have shapekeys for each expression; these shapekeys range from 0 to 1 and will be driven by the tracking data received over OSC directly from any camera. The breakdown of operations is as follows:

  • March 1st: We will present the user interface, including how icons and menus will look within the app, as well as the base avatars that will be available for use within the application (including accessories, hair, and clothing).
  • May 1st: We will present the first beta, along with the necessary documentation for the community to use it. It is necessary to ensure that the application functions correctly and meets the specified requirements. This includes testing for functionality and performance to ensure that the application runs smoothly and efficiently. It is also important to test the application on different devices and platforms to ensure that it works correctly on all of them.
  • July 1st: The version presented on this day will be the optimized version based on the problems detected and solved.
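To illustrate the tracking pipeline described in the proposal, here is a minimal sketch of how a raw camera measurement (say, a lip distance from a face tracker such as MediaPipe) could be normalized into a 0-1 shapekey weight. The function name and the calibration values are assumptions for illustration, not part of the actual implementation.

```python
# Minimal sketch: turning a tracked face measurement into a shapekey weight.
# Assumption: a face tracker (e.g. MediaPipe) supplies distances in pixels;
# the calibration values below are illustrative.

def shapekey_weight(value, closed, open_, lo=0.0, hi=1.0):
    """Map a raw measurement (e.g. lip distance) to a 0..1 shapekey weight."""
    if open_ == closed:
        return lo
    t = (value - closed) / (open_ - closed)
    return max(lo, min(hi, t))  # clamp into the shapekey's 0..1 range

# Example: a mouth-open distance of 18 px, calibrated between 4 px (closed)
# and 30 px (fully open), drives a "mouth_open" shapekey.
weight = shapekey_weight(18.0, closed=4.0, open_=30.0)
print(round(weight, 3))  # 0.538
```

The clamping matters because raw tracking data routinely overshoots the calibrated range; the value sent over OSC must stay within the 0-1 range the shapekeys expect.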

Vote on this proposal on the Decentraland DAO

View this proposal on Snapshot

@SWISS Would you mind doing a tutorial on how you made your DCL Vtube avatar in your videos? That might save the DAO 57k USD.


Hi @HPrivakos, the proposal is not about how to make a VRM avatar; there are already many tutorials on that. It is about having a dedicated application for Decentraland with the option to use NFTs, which would make it at least one of the first web3 vtubing apps. Let me explain why this was the best option we came up with:
my twitter example first: https://twitter.com/TonyDarko_eth/status/1541371812554199041?s=20&t=0LdItZ8aPVWabG4J82KANg
Initially my avatar used a virtual camera in Eevee in real time, which means the lighting, the camera focus (depth of field), and many other things could be changed or interacted with in real time. Using Eevee, the result is also aesthetically better. The problem is that it requires decent hardware and some base knowledge of Blender to use the model correctly.
We did a test trying to reduce the steps, using OpenCV inside Blender with my webcam. The result is not bad, but it has a delay and consumes too many resources, because Blender is based on Python, and the ideal for this kind of thing is software based on C++ (like Unity and Unreal). This is the test we did in Blender with a webcam: ethereans.mp4 - Google Drive

In our app we will incorporate not just one shapekey system: we will use the ARKit standard (52 shapekeys), which gives us better fidelity in expression tracking, plus our own shapekey system (based on shapekeys for the AEIOU visemes plus extra emotions), compatible with any camera, in an executable for Mac and Windows. The result is an application with tracking very similar to what we achieved in Blender, but without the technical difficulties involved in its use. In fact, the Blender file is public if you want to review it.
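As a rough illustration of combining the two shapekey systems mentioned above, the sketch below blends ARKit-style blendshape values into a custom AEIOU viseme set. The blendshape names follow Apple's ARKit naming, but the mixing table and its weights are invented for illustration; they are not the proposal's actual mapping.

```python
# Illustrative sketch: driving a custom AEIOU viseme set from ARKit-style
# blendshape values (names like "jawOpen" follow Apple's ARKit blendshape
# list; the mixing weights below are invented for illustration).

ARKIT_TO_VISEME = {
    # viseme: [(arkit_blendshape, contribution_weight), ...]
    "A": [("jawOpen", 1.0)],
    "E": [("mouthStretchLeft", 0.5), ("mouthStretchRight", 0.5)],
    "I": [("mouthSmileLeft", 0.5), ("mouthSmileRight", 0.5)],
    "O": [("jawOpen", 0.6), ("mouthFunnel", 0.4)],
    "U": [("mouthPucker", 1.0)],
}

def viseme_weights(arkit_values):
    """Blend ARKit blendshape values (0..1) into AEIOU viseme weights (0..1)."""
    out = {}
    for viseme, parts in ARKIT_TO_VISEME.items():
        total = sum(arkit_values.get(name, 0.0) * w for name, w in parts)
        out[viseme] = max(0.0, min(1.0, total))  # keep within shapekey range
    return out

tracked = {"jawOpen": 0.8, "mouthFunnel": 0.5, "mouthPucker": 0.1}
print(viseme_weights(tracked))
```

The point of a table like this is that the custom viseme set stays usable with any tracker that emits the 52 standard ARKit values, rather than being tied to one camera backend.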
But the important thing here besides that are all the possibilities to have in the app during its initial launch and future applications:

  • Being able to tokenize the backgrounds as a new category is one of the most favorable points for the DAO. In my opinion it would increase marketplace transactions with a new category, just as emotes and skins did this year.
  • In the attached document, we mentioned that the roadmap includes the possibility of adding another category of DCL avatars created by the community. In that case, if @SWISS makes a tutorial, it will help the community upload their avatars (regardless of whether there is documentation). At that point he could have his avatar as an NFT within the Decentraland application rather than a third-party one.

Another point on the roadmap is adding a third-party API to generate video calls directly from the app. For now it will only be possible to generate a virtual camera, just like in OBS, but we believe the project has a lot of potential. We did not include these last two points in this grant because it is unrealistic to deliver them in 6 months; however, they are already contemplated in the general roadmap (among other things). We are aware of Swiss's work and content, and for that reason I can tell you this is not the same. Happy holidays.

Hey all. The most complex thing to overcome is actually the creation of a .vrm avatar counterpart.

Everything else is widely available on the web and a few DCL community members have used avatars for that purpose. Depending on how you do it and what tools you use, some had better outcomes than others.

I believe toxsam has tweeted about an app they have created to work with the cryptoavatars ecosystem a few days back.

More importantly.

I had like 5-10 people message me over the past 2 years asking what I was using for YouTube. I am aware of 1-2 who have actually done it after spending 30 minutes researching. Nobody else did. I do not think it is worth going through the trouble of creating an app for so little demand, especially when you can do it with no app.

I would much rather have a .vrm avatar export tool that delivers a .vrm file from any wearables combination on demand. That would be really useful for those who want some portability of their identity into other worlds.

Many people think they want to do something on YouTube and never make it beyond a first video. Because it is hard and because there is not enough of an audience to really justify all the work that goes into it.

Peace.


Your point seems interesting to me and I partially agree with it. With basic 3D knowledge you can get a VRM avatar relatively easily, but for someone with zero knowledge it is a problem: they may know the steps to follow, but how could they do it without understanding what a shapekey is or the difference between a .glb and a .vrm? That limits many people, and I think once we reach a certain level we take for granted terms and concepts that other people are unaware of.

The purpose of this proposal is to deliver the best result as simply as possible: just turn on your camera, open the app, and that's it. The more accessible a technology is, the more widely it will be used. I'm not saying everyone should learn 3D; precisely the fact that a person who doesn't even know what a shapekey is can still use an avatar is a great advance. I don't understand how microwaves affect water molecules; I just press a button and get hot water. Does that make sense?

On the other hand, as an addition to the app, there is the NFT integration I already explained. As you say, Polygonal Mind released an app for its avatars; if it were that easy to have an avatar in any other app, why get your own? The idea is to lay the foundations for further development within this app.

I really don’t see the utility in making an “NFT/Web3” version of something that already exists.

Or making it worse than something that already exists by trying to tokenize everything? Just let people use any JPG they want as a background, for free.

That can probably be done for a lower tier grant.

Thanks for your reply Swiss


Hello @HPrivakos,

I disagree. I think it does make sense to make web3 applications of things that already exist, because that is the point of tokenizing things… people could get creative with backgrounds the same way they did with NFT art…
Think about it: money already existed but Bitcoin makes sense, JPGs existed but NFT art makes sense, virtual worlds existed but here we have Decentraland…

The proposal could open a new way to create and interact in the marketplace, which I think is very positive.

Thanks for the open discussion


I think Tony is really talented, I’m in love with his work. However, after reading the whole discussion, I don’t find this project very appealing since the market is very small and there are already free tools to do something similar.


Voting no, as you are asking for 3 separate grants (60k USD total) and imo you should show a proof of concept or get interest in your work from the community first.


Hi there, I love the concept as I currently VTube as my DCL avatar by exporting my avatar into .vrm format and using VSeeFace and other open source clients.

While I think this would be useful, I would be much more interested in something that takes our current avatar endpoint (.glb files) and converts them to a .vrm file to be used in the many current open source facial tracking and VR clients. It seems your team has the skills and capabilities to achieve this and to fulfill the full mapping of mouth/eye/eyelid shape keys to the default avatar faces. It would be great if a module were added so that wearable creators or the community can provide the shape keys alongside their .glb files, as there is a large number of both default and custom eyes/mouths/faces available (some face replacements are masks/helmets, so community artists need a way to upload and map the shapekeys they create for their .glbs).

The idea of NFT backgrounds sounds limiting and frustrating. VSeeFace enables a transparent background so you can use any image; if users wish to purchase a background NFT and use that image they can do so, but otherwise this limits creative freedom and interoperability.

If the proposal was changed to provide an open sourced library or website that accomplishes the above and exports a useable .vrm file from current avatar, I would gladly vote yes as I think this is incredibly valuable to the community. A proof of concept would be required due to the technical complexity, but given this requirement I believe a higher grant amount would be understandable.

You can read some of the communities current thoughts on this topic here, a proposal that passed overwhelmingly: [DAO:qaudhgm] Enable .vrm support for Decentraland models


We ideally need open source tools and frameworks to be prioritized over applications. I don't think we can realistically expect Decentraland to rebuild every tool it needs when open source and interoperable versions are available.

We will accumulate a large amount of tech debt if we focus on building applications instead of generic, open source frameworks. We need to empower creators in Decentraland to build and explore the metaverse, not a locked-down ecosystem where experiences are token-gated; at least I think that's the vision of the ‘free and open metaverse’ as opposed to something like Meta.

“The avatar pipeline” is a known unsolved problem in the metaverse: no application lets you build a VTuber avatar from a marketplace of open source, interoperable wearables/assets that the community or anyone can create and contribute to (and that has its own economy). The closest most will get is shelling out 2-5k USD for a custom-built one on Fiverr.

This is, however, a problem Decentraland is uniquely positioned to solve, as we already have 90% of the pipeline created and verified by the blockchain. Our avatar wearables are even based on the same skeleton as the open source default VR avatar format (.vrm); we are only missing a way to map the facial expressions and export the file extension.

I implore you to consider re-proposing this as an npm package for the DCL repo that developers can use to build VR-compatible exports of custom wearables by providing shape keys/mapping. It will open up interop for the entire community and let people experience the metaverse as it should be.

This library could then be added to the DCL-core wearables pipeline, so creators can upload facial expressions alongside their .glb file for 100% end-to-end, UI-automated interoperability with applications like VRChat. I truly believe this is the future of DCL wearables and avatar creation, but with my limited skillset it is taking me a long time to get there. I hope you consider this request.

In my research so far I believe a proof of concept would follow these steps:

  • Download current avatar .glbs from current DCL endpoint (verified by metamask signature - or just use a locally hardcoded return value for demo).
  • Provide a JSON interface for mapping shape key files to wearable slots (eyes/mouth wearable slots doing basic movement would be enough)
  • Export the configuration as a .vrm file
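The proof-of-concept steps above can be sketched around a small mapping config. The schema, slot names, and file paths below are invented for illustration (the post only says "a JSON interface for mapping shape key files to wearable slots"); this shows one plausible shape for it plus a validation pass.

```python
# Hypothetical sketch of the JSON mapping interface described above:
# wearable slots -> shape-key definitions to bundle into a .vrm export.
# The schema, slot names, and file names are invented for illustration.
import json

config_text = """
{
  "avatar_glb": "avatar.glb",
  "slots": {
    "mouth": {"shapekeys": ["mouth_open", "mouth_smile"]},
    "eyes":  {"shapekeys": ["blink_left", "blink_right"]}
  },
  "output": "avatar.vrm"
}
"""

def validate_mapping(text):
    """Parse the mapping config and return the flat list of shapekeys to export."""
    cfg = json.loads(text)
    keys = []
    for slot, spec in cfg["slots"].items():
        if not spec["shapekeys"]:
            raise ValueError(f"slot {slot!r} has no shapekeys")
        keys.extend(spec["shapekeys"])
    return keys

print(validate_mapping(config_text))
```

A validated config like this would then drive the actual .glb-to-.vrm conversion step, which is where the real technical complexity lives.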

The full pipeline would add more in-depth shape keys and a strategy to merge the shape key implementation and .vrm export with the current DCL wearables pipeline, a feature that I believe would increase the metaverse value prop by enabling DCL VR avatars for both external use and for when we have our own VR client.

Apologies for the word dump, I hope this was helpful.


First of all, happy holidays to all. Sorry for not answering sooner; I was with family these last two days.

@HPrivakos I don’t understand your first argument: you can't tokenize anything that doesn't already exist; in fact, that's the point. On the other hand, in the documentation we wrote other points in favor of what can be used inside the app, which are:

  • Being able to tokenize VRM avatars: this could be a very important point for the creator community and also for the DAO, increasing transactions.

  • Virtual camera/calls inside the app: most of the apps in the current market do not generate a virtual camera in a simple way, or they need software like OBS to do it. We are developing the simplest way to do it with the best possible result.

I agree with the usefulness of converting mesh from wearables to vrm, we can include that development within the proposal.

@Crypt_Sannin Hi, thanks for commenting. That's right, we decided to separate the proposal into 3 at Esteban's request (we originally had one), but the vtubing app is the mother project, and the total budget includes all aspects of development. Despite not having received the grant, we have already started working on the app; this is the current status. We need to polish and work on many aspects, and in a few hours we will share more about the status of the other 2 proposals: https://twitter.com/TonyDarko_eth/status/1607550656079958016?s=20&t=KzEYJcVPyodUV93KEEb6dA

@Morph Hello. Although they are few, you are one of the people I have seen in the community with an interest in vtubing, and this app is for that. Of course VSeeFace gives you face tracking, but only that; we developed hand tracking as well. You can see our development status above.

We have decided to add to the development a tool that exports VRM files directly from Blender. Now, about your second comment: the base rigs of DCL, VRM, and even Mixamo are practically identical, but the bones have different names. The FBX in the DCL documentation has the eyes and mouth as PNGs, not meshes, and it is necessary to add a ROOT bone to the DCL rig to avoid compatibility problems, which requires further development. For that reason your first idea is better: a native converter used when developing wearables. As you said, we are going to work on it.
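Since the skeletons are near-identical and mostly differ in bone names, the core of such a converter is a rename pass. The sketch below illustrates that step; the DCL-side names are placeholders (assumptions), while the right-hand side uses VRM humanoid bone names, and a real converter would cover the full skeleton plus the extra ROOT bone mentioned above.

```python
# Illustrative sketch of the bone-renaming step: DCL, VRM, and Mixamo rigs are
# near-identical, so conversion is largely a rename. The DCL-side names below
# are placeholders (assumptions); values are VRM humanoid bone names.

DCL_TO_VRM_BONES = {
    "Avatar_Hips": "hips",           # placeholder DCL-side name
    "Avatar_Spine": "spine",
    "Avatar_Neck": "neck",
    "Avatar_Head": "head",
    "Avatar_LeftArm": "leftUpperArm",
    "Avatar_RightArm": "rightUpperArm",
}

def rename_bones(bone_names, mapping=DCL_TO_VRM_BONES):
    """Rename bones to VRM humanoid names, keeping unmapped bones unchanged."""
    return [mapping.get(name, name) for name in bone_names]

print(rename_bones(["Avatar_Hips", "Avatar_Head", "Custom_Tail"]))
```

Keeping unmapped bones unchanged matters because custom wearables can add extra bones that the VRM humanoid spec does not cover.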


It does. And with a webcam, no need for DSLR.


Will everything be fully open source or do you plan to keep some parts proprietary?
And will it support solely DCL avatars or all VRMs?

You are wrong, let me explain how Webcam Motion Capture works:

  • Webcam Motion Capture sends signals under the VMC protocol to VSeeFace, which means you need two applications. In addition, you need OBS to have a virtual camera, so you need three programs to perform the same function. Also, Webcam Motion Capture is not free; you can try it for free, but you have to pay to use the app's full toolset. And what are you going to say, that it is cheaper to pay for the app than the grant? What sense does it make to pay for something that still requires more steps and is more complicated to use than our app, which will be completely FREE and will have all of these same features in a single executable? If you think about it, this will bring more users to Decentraland.

We also think it is better to use the OSC protocol over the VMC protocol, because OSC is more widely supported and has a longer history of use. OSC is a protocol for communication between computers and other devices that was developed in the late 1990s and has since become an industry standard. It is supported by a wide range of software and hardware platforms, making it easy to integrate into a variety of systems (for example Decentraland; check this repository: GitHub - decentraland-scenes/osc-relay: Route OSC messages to Decentraland scenes, via Colyseus). In contrast, VMC is a relatively newer and less widely used protocol; it may be harder to find software or hardware that supports it, which could limit its usefulness and flexibility. OSC also has a more robust feature set and is more flexible, allowing for the transmission of a wider range of data types and the use of custom data formats.
OSC may be the more reliable and flexible choice for communication between devices and systems, making it a better option than the VMC protocol.
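To make the OSC choice concrete, here is a minimal encoder for an OSC 1.0 message carrying one float (for example a shapekey weight). The address `/shapekey/jawOpen` is an invented example; in practice an app would more likely use a library such as python-osc and send the bytes over UDP.

```python
# Minimal OSC 1.0 message encoder for a single float argument.
# OSC 1.0 layout: null-padded address string, null-padded type tag string
# (",f" = one float), then the argument as a big-endian 32-bit float.
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-pad bytes to a multiple of 4, as OSC 1.0 strings require."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_float_message(address: str, value: float) -> bytes:
    """Build an OSC message: padded address, type tag ",f", big-endian float."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",f")
            + struct.pack(">f", value))

# Example address is an assumption, not a standard:
msg = osc_float_message("/shapekey/jawOpen", 0.75)
print(len(msg), msg[:4])
```

Sending one such message per shapekey per frame is all the wire protocol amounts to, which is part of why OSC is so easy to integrate across tools.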


We said it from the beginning: everything will be open source. I even released the .blend file of my first real-time Eevee tests for the community, and as with the other proposals we will be transparent about the whole process.
On the other hand, it will be compatible with any VRM that can be added within the marketplace NFTs; again, we put that in the documentation as an extra point.

I couldn’t find any mention of open source in this proposal, sorry.
About the blend file, it can be generated in a few minutes with Blender and an open source plugin, not a huge achievement or proof of anything.

I find it extremely counterproductive to deliberately limit the functionality of an application just to make it “web3”.
There is absolutely no good reason to voluntarily limit which VRMs can be loaded and to require an NFT for that.
But if it's open source, I don't really care, as someone will be able to fork it and remove that limitation.
