by 0xb350fb0ee5da6485a07b58f1f204d248863e8e04 (Sergei#8e04)
Should the following Tier 4 grant (up to $60,000 USD, 6-month vesting, 1-month cliff) in the Platform Contributor category be approved?
We would like to introduce Conversational AI to Decentraland and allow everyone to create their own voice bots in the metaverse.
Conversational AI has developed in leaps and bounds in recent years, allowing voice-based conversations between humans and machines that feel natural and human.
This tech has been widely accepted by consumers in the years following the launch of Siri in 2011 and nowadays people are interacting with voice assistants in their homes on a daily basis.
Voice has a great chance of becoming the predominant interaction modality in Decentraland. This will require a voice artificial intelligence system that understands natural language and replies to users through speech.
Example Use Cases
Conversational AI can be used to power voice assistants or chatbots in Decentraland in such cases as:
Automate FAQs and users’ onboarding process with a voice-powered assistant (to help visitors at events, games, shops, etc.)
Create AI-driven in-game characters
Automate sales interactions (take payments, check refund statuses, balances and more)
Power smart real estate agents (deliver information on parcel location, property details etc. )
Create a Siri-like voice assistant that helps users navigate around Decentraland
Many other use cases, just as in the real world
Try it here (loads in 3-5 minutes).
Instructions to use the prototype:
- Allow your browser to use your microphone when requested
- Approach the NPC (the dog); if you are close enough, a pop-up with “Hello” will appear. This will launch the voice bot
- Wait for the voice greeting from the NPC
- You may ask the NPC the following questions:
- Who are you?
- What is your name?
- How are you?
- How many players are online?
- Which districts are present in Decentraland?
- Where can I buy some art or NFTs?
- Any great events (happening)?
- Where can I go? What is interesting right now?
- When finished you can say “goodbye” to the NPC
We are going to develop a package that will:
- Capture voice from the DCL scene
- Pass the voice to the middleware
- Connect the middleware to a number of existing platforms with speech-to-text and natural language understanding capabilities, which process the voice and return results and commands to Decentraland
- Based on the command Decentraland receives from the platform, have the voice bot continue the conversation with the user and carry out any required transactions
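As a rough sketch, the middleware step in the pipeline above could look like the following. All names here (`process_utterance`, `NluResult`, the stand-in platform function) are illustrative assumptions, not part of any existing Decentraland or platform API:

```python
# Hypothetical sketch of the voice pipeline: the scene captures audio,
# middleware forwards it to an external STT/NLU platform, and the
# platform's reply is translated into a command for the scene.
from dataclasses import dataclass

@dataclass
class NluResult:
    """Result returned by an external NLU platform (illustrative)."""
    reply_text: str   # text the bot should speak back to the user
    command: str      # scene command, e.g. "wave_hand", or "none"

def fake_nlu_platform(audio: bytes) -> NluResult:
    # Stand-in for a real STT + NLU round trip (Dialogflow, Lex, etc.).
    transcript = audio.decode("utf-8", errors="ignore")
    if "name" in transcript:
        return NluResult("I am the Decentraland voice bot.", "none")
    return NluResult("Sorry, I did not catch that.", "none")

def process_utterance(audio: bytes, platform=fake_nlu_platform) -> NluResult:
    """Middleware step: pass captured audio to a platform, return its result."""
    return platform(audio)

result = process_utterance(b"what is your name")
print(result.reply_text)  # -> I am the Decentraland voice bot.
```

In a real deployment the `platform` callable would wrap a network call to one of the external services, but the middleware itself would stay this thin.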
We will create a new Voice Bot Module for the Decentraland Kernel and provide access from the ECS directly or via the Content Server. A 3rd-party server could be specified, for example, in the NPC object definition or that of any other object in the scene. We plan to extend the Entity class to do that.
We are also going to create an RTC Server for handling user requests with integration modules to external services: Google Dialogflow, Microsoft Azure Bot, Amazon Lex, and Voctiv. Modules for Dialogflow, Azure Bot, and Amazon Lex will be published on Github, so users can build a voice bot with such services by themselves.
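One possible shape for these pluggable connector modules, sketched in Python. The class and method names below are our assumptions for illustration; they are not the published API of Dialogflow, Azure Bot, Lex, or Voctiv:

```python
# Each 3rd-party service would implement the same small interface,
# so the RTC Server can swap connectors without changing its own code.
from abc import ABC, abstractmethod

class BotConnector(ABC):
    @abstractmethod
    def detect_intent(self, session_id: str, text: str) -> str:
        """Send user text to the service and return the bot's reply."""

class EchoConnector(BotConnector):
    """Trivial stand-in connector used for local testing."""
    def detect_intent(self, session_id: str, text: str) -> str:
        return f"[{session_id}] you said: {text}"

def handle_turn(connector: BotConnector, session_id: str, text: str) -> str:
    # The RTC Server depends only on the BotConnector interface,
    # so supporting a new platform means adding one new subclass.
    return connector.detect_intent(session_id, text)

print(handle_turn(EchoConnector(), "s1", "hello"))  # -> [s1] you said: hello
```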
The RTC Server and connector modules will be implemented in Python. The modified Decentraland diagram will look like the following (blocks in green to be delivered by the team):
- Deliver audio to the user and send it back to the external server
- Provide interactive actions for Entities in a scene, such as waving a hand
- Create server-side events
- Implement event processing on a scene
- Provide service for server-side bot implementation
- Create connectors for 3rd party services: Dialogflow, Azure Bot, Amazon Lex
- Pull request with Kernel and ECS modifications
- RTC Server on Github
- Connector modules to 3rd party services on Github
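The server-side event processing deliverable above could be sketched as a simple command dispatcher that maps commands returned by the external server to interactive NPC actions (e.g. waving a hand). The command names and the registry mechanism are illustrative assumptions:

```python
# Map server-side commands to scene actions via a small registry.
from typing import Callable, Dict

actions: Dict[str, Callable[[], str]] = {}

def action(name: str):
    """Register a handler for a server-side command (illustrative)."""
    def register(fn: Callable[[], str]) -> Callable[[], str]:
        actions[name] = fn
        return fn
    return register

@action("wave_hand")
def wave_hand() -> str:
    return "NPC waves its hand"

@action("say_goodbye")
def say_goodbye() -> str:
    return "NPC says goodbye"

def dispatch(command: str) -> str:
    # Unknown commands fall back to a no-op so the scene never crashes.
    handler = actions.get(command, lambda: "no-op")
    return handler()

print(dispatch("wave_hand"))  # -> NPC waves its hand
```

Registering actions this way keeps the scene-side code decoupled from whichever external platform produced the command.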
The project team consists of three experienced developers: Ilya Ryzhov (Frontend), Ilya Spirin (Backend), and Andrey Sidorov (Frontend).
The team members have worked together for 3 years and have implemented enterprise-scale Conversational AI projects for big telecom customers, call center automation for banks, and customer support automation for major enterprises and medium companies.
Technical stack for the project:
Python, aiortc, aiohttp, FastAPI, FFmpeg, gRPC, TypeScript, WebRTC, WebSocket, Docker.
Besides that, our team is experienced in the following: TypeScript, JS, Angular, React, Capacitor, Ionic, Cordova, NestJS, Postgres, MongoDB, RabbitMQ, Docker, Web3, WebRTC, WebGL, Webpack, Docker Compose, Nx monorepo, Python, C/C++, SQL/NoSQL databases, VoIP technologies, CI/CD.
- Ilya Ryzhov, Senior Frontend Developer with experience in WebRTC applications
- Andrey Sidorov, Frontend Developer
- Ilya Spirin, Backend Developer with experience in VoIP technologies, Solution Architect at Voctiv
- Build a test scene with objects the user can talk with
- Build scene
- Create an RTC Server prototype to make calls with
- Create Voctiv connector
- Collect feedback from the development community
- Provide interactive actions for NPCs
- Frontend processing
- Delivering to frontend; server-side decision making
- Collect feedback from community members
- Provide a service for server-side bot implementation
- RTC Server Github repo
- Provide documentation
- Provide new 3rd party integration modules
- Google Dialogflow
- Microsoft Azure Bot
- Amazon Lex
- Provide documentation
Full project duration is 28 weeks with some blocks being developed in parallel:
- Research of architecture / modules to use / update (4 weeks)
- RTC Server implementation (6 weeks)
- Frontend RTC integration (9 weeks)
- Google Dialogflow, Azure Bot, and Amazon Lex connectors implementation (9 weeks)
- Testing (3 weeks)
- Documentation (3 weeks)