Contact us
We would love to hear from you.
Drop us a line at info[at]ydrive.ai, and we will be in touch shortly.
Photorealistic / Accurate / Scalable
Our neurally rendered digital twins, like the one shown in the video above, are photorealistic, centimeter-accurate reproductions of the world. What would typically require a combination of LIDAR, high-end cameras, and considerable labor is now possible from just a few images shot on your phone. Our service converts those few images into a true 3D environment with geometric detail, semantic awareness, and cinematic neural rendering.
We take a LIDAR-free approach and rely only on camera imagery. This image-centric approach avoids the need for vehicle fleets, trained capture crews, and extensive post-processing of data, allowing imagery to be crowd-sourced anywhere in the world. The fully automated AI pipeline takes images directly from the phone and can rapidly scale to process thousands of kilometers of imagery in parallel.
Ydrive's digital twins contain enough visual, semantic, and metadata detail to be useful across a range of applications. Content creators can bring the real world into the Metaverse in minutes or bring entire cities into a virtual production studio. Developers can build a street racing game set anywhere in the world in a fraction of the time, create dashboards to track asset progress on construction sites, or train AI on an unsimulated replica of the real world. Consumers can create just-in-time reconstructions for disaster management, plan journeys virtually, and use lane-level navigation in true 3D, to name just a few possibilities. This is just the start!
Our twin platform has two main components: a semantic reconstruction pipeline, which creates the digital twins, and a cloud renderer, which renders them on demand. The platform is optimized to accommodate lower-quality imagery, allowing it to accept input from over 1.5 billion phones. It also accepts suitably formatted images from other sources such as DSLRs, drones, and automobiles. While we use a fully managed model for most customers, we can also integrate our pipeline directly for large-scale customers, either as a hybrid cloud or as an on-premise deployment.
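As a rough illustration of the managed capture-to-twin workflow, the sketch below shows how a client might upload phone imagery to a reconstruction service and poll until the twin is ready for the cloud renderer. The endpoint, job states, and response fields are hypothetical placeholders used only to make the flow concrete; they are not Ydrive's actual API.

    import time
    from pathlib import Path
    import requests

    # Placeholder endpoint for illustration only; not a real Ydrive URL.
    API = "https://api.example-twin-pipeline.test/v1"

    def submit_capture(image_dir: str) -> str:
        """Upload a folder of phone images and return a hypothetical job id."""
        files = [("images", (p.name, p.read_bytes()))
                 for p in sorted(Path(image_dir).glob("*.jpg"))]
        resp = requests.post(f"{API}/reconstructions", files=files)
        resp.raise_for_status()
        return resp.json()["job_id"]          # assumed response field

    def wait_for_twin(job_id: str) -> dict:
        """Poll the reconstruction pipeline until the twin is reported ready."""
        while True:
            status = requests.get(f"{API}/reconstructions/{job_id}").json()
            if status["state"] == "ready":    # assumed state machine
                return status["twin"]         # geometry, semantics, render endpoint
            time.sleep(30)

    twin = wait_for_twin(submit_capture("captures/main_street"))
    print(twin["render_url"])                 # handed to the on-demand cloud renderer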
Our digital twins make excellent 3D maps, with both geocoding and semantic metadata. Semantics and metadata are extracted automatically and are instantly available for both AI training and navigation scenarios; additional semantic classes can be added to the pipeline if desired. Ydrive's twins also make excellent last-mile maps, covering the route from the street all the way to the doorstep.
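To make the kind of geocoded semantic metadata described above concrete, the snippet below sketches one possible record for a labelled map element. The field names and semantic classes are illustrative assumptions, not Ydrive's actual schema.

    from dataclasses import dataclass, field

    @dataclass
    class MapElement:
        """One geocoded, semantically labelled map element (illustrative schema only)."""
        element_id: str
        semantic_class: str        # e.g. "lane_marking", "doorway", "traffic_sign"
        latitude: float            # WGS84 geocoding
        longitude: float
        altitude_m: float
        attributes: dict = field(default_factory=dict)  # free-form metadata

    # Example: a doorstep element that a last-mile navigation app could route to.
    doorstep = MapElement(
        element_id="elm-000123",
        semantic_class="doorway",
        latitude=37.7749,
        longitude=-122.4194,
        altitude_m=12.3,
        attributes={"address": "123 Example St", "access": "street_level"},
    )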
Our patented localization engine uses both the semantic and visual domains to localize to within 10 cm using just a camera.
Citysynth is our first digital-twins-on-demand service, powered by the Ydrive pipeline and currently in limited beta. Visit the citysynth.ai microsite to learn more. With direct integration into Unreal Engine, Citysynth is an end-to-end digital twin creation service for developers and content creators. The twins are available either as self-hosted assets or as managed assets that can be accessed on demand via our cloud service.