by 0x8b257b97c0e07e527b073b6513ba8ea659279b61 (Morph)
Should the problem/opportunity outlined be refined and taken to the next level?
It is going to be increasingly difficult to retain feature parity and user parity across three different verticals (Browser/Desktop/VR), especially with dozens of VR devices and multiple interfaces.
We need a solution that brings these clients closer together instead of siloing them further apart. What’s more, we need a solution that leverages the client and technology we already have, instead of creating more clients for the foundation/DAO to maintain.
The browser is one of the greatest interoperable technologies ever developed: most browsers run on the same open-source technology (Chromium), and the web is compatible with nearly every device in the world.
WebXR is an open standard that lets VR devices connect to 3D content in the browser. Virtually all major VR headsets, including Quest and the latest Apple headset, support WebXR. By simply opening a URL in VR, users can immediately connect to VR platforms, with all the modern security and functionality of the browser.
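To illustrate how lightweight that entry point is, here is a minimal detection sketch. The function name and the injected navigator-like parameter are my own assumptions (chosen so the check can run outside a browser); `navigator.xr.isSessionSupported` is the standard WebXR Device API call:

```javascript
// Sketch: detect immersive-VR support via the WebXR Device API.
// The navigator-like object is passed in explicitly so the check
// can run (and be tested) outside a real browser.
async function checkImmersiveVR(nav) {
  // navigator.xr only exists in WebXR-capable browsers
  // (Chrome, Edge, Meta Quest Browser, Wolvic, etc.).
  if (!nav.xr) return false;
  // Resolves true when an "immersive-vr" session can be started.
  return nav.xr.isSessionSupported("immersive-vr");
}
```

In a real page you would call `checkImmersiveVR(navigator)` on load and only show an "Enter VR" button when it resolves to true, so browser/desktop users see no change at all.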
You can see this technology for yourself, already in use at Hyperfy, another open metaverse project with ideals similar to Decentraland’s.
You can read more about the WebXR spec here:
The original premise of Decentraland was a VR platform and open source SDK, to enable the future open metaverse.
I believe this is a large driving force behind MANA’s continued valuation, and much of the user/developer interest in the space.
Unfortunately, it seems this has been removed from the current foundation roadmap.
You may be quick to dismiss the browser as older, outdated technology compared to the higher graphics fidelity and frame rates of the desktop client. However, WebGPU is a new browser API that gives web pages direct access to graphics card resources for in-browser rendering. Combined with our metaverse 3D engine already available in-browser, this would be incredibly powerful, secure, and easier to jump into than a desktop download.
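A minimal sketch of that progressive approach, assuming a hypothetical `pickRenderBackend` helper (the helper and the injected navigator-like parameter are my own; `navigator.gpu.requestAdapter()` is the standard WebGPU entry point):

```javascript
// Sketch: prefer WebGPU when the browser exposes it,
// otherwise fall back to WebGL, today's in-browser baseline.
async function pickRenderBackend(nav) {
  if (nav.gpu) {
    // requestAdapter() resolves to null when no suitable GPU is available.
    const adapter = await nav.gpu.requestAdapter();
    if (adapter) return "webgpu";
  }
  return "webgl";
}
```

Because the choice happens at runtime, the same client could serve WebGL users today and pick up WebGPU automatically as browsers ship it.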
We should begin testing WebXR integrations now so we are ready when WebGPU matures; we may find WebXR already provides a decent in-browser VR experience on higher-end devices today.
To begin, we do not need full-body tracking like VRChat; we simply need a user to be able to walk around as their avatar, see other users in browser/desktop mode, interact with world objects, and use voice chat.
A proof-of-concept WebXR integration would enable this without requiring additional backend changes to the current character interfaces. It also seems .VRM support was discussed as being on the roadmap at recent avatar Q&A sessions, which could be leveraged together with this concept.
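For reference, starting such a proof-of-concept session is a small amount of code. The sketch below uses a hypothetical `enterVR` helper of my own built on the standard `requestSession` call; the `"local-floor"` reference space gives standing-scale tracking, which is enough for walking around as an avatar without full-body tracking:

```javascript
// Sketch: request an immersive VR session and hook up the render loop.
// nav is a navigator-like object; onFrame is the per-frame render callback.
async function enterVR(nav, onFrame) {
  const session = await nav.xr.requestSession("immersive-vr", {
    // "local-floor" gives a standing-scale origin at floor level —
    // enough for the walk-around proof of concept described above.
    requiredFeatures: ["local-floor"],
  });
  // The session drives its own frame loop while the headset is active.
  session.requestAnimationFrame(onFrame);
  return session;
}
```

The existing avatar, world-object, and voice-chat systems would keep working as-is; only the camera pose and input sources would come from the XR session instead of mouse/keyboard.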
Long term, full-body tracking will be a necessity. I believe that kind of functionality is perfectly suited to our distributed node system thanks to low local latency, and it could perhaps even be offered as a paid premium feature to incentivize running more nodes.