Project Kepler is a closed-world metaverse, which allows us to deliver a new level of mass-scale metaverse experience. Closed-world means that each property in Project Kepler acts as a separate, individual entity with no inter-spatial link to other properties. Each creator's property therefore has its own metadata storage, rendering capabilities, and spatial audio capabilities.
To enable the hosting of thousands of users within each Project Kepler property, we will be utilizing the M^2 Morpheus technology created by Improbable. Built on optimized network traffic handling and new rendering technologies, Morpheus is years ahead of its competitors, breaking records for in-room concurrency.
Morpheus has demonstrated this capability by hosting thousands of users at concerts and events such as Scavengers ScavLab and the AleXa Concert.
The M^2 technology renders at 30 frames per second, enabling all Project Kepler users to make the most of Unreal Engine 5's graphical capability in full HD across the web on both desktop and mobile devices.
Communication between users and members inside Project Kepler comes in two forms:
- Speech through audio
- Text through in-platform chat
Both methods of communication are available on WEB and VR.
WEB users can use built-in microphones or headsets to communicate with users inside the same property and within a realistic distance of each other, giving the feel of real environment audio. In addition, WEB users can use their keyboards to engage with the in-game chat, allowing them to interact across properties and in groups.
Similarly, VR users can use their microphones to communicate with other users inside the same property and within a realistic distance of each other. When VR users wish to use the in-game chat, they can do so via speech recognition, which transforms their speech into text that can be sent across the Kepler Metaverse.
Each property supports thousands of input audio sources as well as thousands of rendered audio streams. The Morpheus technology gives all users within a single property a shared audio space, letting them communicate and take full advantage of their surroundings. For example, if a user is walking inside a property, other users will accurately hear their footsteps.
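As a sketch of this shared, proximity-based audio model (the function name, the position format, and the cutoff radius are illustrative assumptions, not Morpheus APIs), a server might route each user's audio only to listeners within a realistic hearing range:

```python
import math

def audible_peers(listener, positions, hearing_range=30.0):
    """Return the user IDs whose audio the listener should receive.

    positions: dict of user ID -> (x, y, z) world coordinates.
    hearing_range: illustrative cutoff in metres, not a Morpheus value.
    """
    listener_pos = positions[listener]
    peers = []
    for uid, pos in positions.items():
        if uid == listener:
            continue  # a user does not stream their own audio back
        if math.dist(listener_pos, pos) <= hearing_range:
            peers.append(uid)
    return peers

positions = {"alice": (0, 0, 0), "bob": (10, 0, 0), "carol": (100, 0, 0)}
print(audible_peers("alice", positions))  # → ['bob']
```

With thousands of sources per property, a real implementation would use spatial partitioning rather than this all-pairs scan, but the routing rule is the same.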
Spatial audio models air absorption, distance attenuation, occlusion, head muffling, and voice projection, making thousands of simultaneous user audio streams sound as accurate as real life. A user will thus hear only the users within a realistic distance of them; and if a concert is being played, users closer to the stage will hear louder audio than users at the back of the property.
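The attenuation behaviour described above can be approximated with a simple per-listener gain model. All constants below are illustrative assumptions, not the values Morpheus uses:

```python
import math

def spatial_gain(distance, ref_distance=1.0, air_absorption=0.02, occluded=False):
    """Approximate playback gain for one audio source at a given distance.

    Combines inverse-distance attenuation, exponential air absorption,
    and a flat occlusion "muffle" factor. All constants are illustrative.
    """
    gain = ref_distance / max(distance, ref_distance)  # inverse-distance law
    gain *= math.exp(-air_absorption * distance)       # air absorption over distance
    if occluded:
        gain *= 0.3                                    # muffled behind geometry
    return gain

# Users closer to the stage hear louder audio than users at the back:
front_row = spatial_gain(5.0)
back_row = spatial_gain(50.0)
print(front_row > back_row)  # → True
```

In practice each effect (occlusion, head muffling, projection) would be frequency-dependent rather than a flat multiplier, but the ordering of loudness by distance follows the same shape.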
As the Project Kepler Metaverse is available on both WEB and VR, we further take advantage of the M^2 Morpheus technology, whose game rendering service allows users to access their property within seconds from VR, WEB, and mobile devices.
To be interoperable across device types, assets and property blueprints have to be available for all device types. This is enabled by integrating the Weka and scikit-learn machine learning software to ensure automated, rapid asset duplication. When an asset is created and saved inside the Seamless Modelling Software (SDK), the AI asset duplication software automatically creates two versions of each asset: one for VR and one for WEB compatibility.
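The duplication step can be sketched as a one-asset-in, two-variants-out transform. The function and field names here are hypothetical; the real pipeline would re-mesh, re-texture, and re-encode per device class rather than copy a dictionary:

```python
def duplicate_asset(asset):
    """Produce a VR variant and a WEB variant of a saved asset.

    'target' and 'detail' are hypothetical fields used to illustrate
    the two compatibility builds described in the text.
    """
    vr_variant = {**asset, "target": "vr", "detail": "full"}
    web_variant = {**asset, "target": "web", "detail": "optimized"}
    return vr_variant, web_variant

vr, web = duplicate_asset({"id": "chair-001", "mesh": "chair.glb"})
print(vr["target"], web["target"])  # → vr web
```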
To store all assets created by users, property blueprints, marketplace purchases, and other Kepler-compatible assets, the metadata storage technology created by ImmutableX will be used, allowing all assets to be stored on-chain.
Members access their Kepler dashboard through the Project Kepler site by logging in with the wallet that holds their Kepler Membership NFT. As users create assets in our modeling software or purchase them from the Kepler Marketplace, these assets are stored as NFTs in their wallet, allowing all members to seamlessly access and add assets from their wallets into their property.
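A minimal sketch of this wallet-gated login check, assuming a hypothetical collection name and a pre-fetched list of the wallet's NFTs (the real flow would query ImmutableX and verify wallet ownership cryptographically):

```python
def can_access_dashboard(wallet_nfts, membership_collection="KeplerMembership"):
    """Grant dashboard access only if the wallet holds a membership NFT.

    wallet_nfts: list of dicts as returned by a (hypothetical) wallet query;
    "KeplerMembership" is an assumed collection name, not a real identifier.
    """
    return any(nft.get("collection") == membership_collection for nft in wallet_nfts)

nfts = [{"collection": "KeplerMembership", "token_id": "42"},
        {"collection": "KeplerAssets", "token_id": "7"}]
print(can_access_dashboard(nfts))  # → True
print(can_access_dashboard([]))    # → False
```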