Key Technology Features
Instant 3D from Data
Spatial Partitioning
Using Cameras and Video
Massive Point Clouds
Surfaces from Points
Instant 3D from Native Data
At the heart of our technology is the desire to use any geospatial data, in whatever format it comes in. No conversions. No databases to build. No waiting for lengthy import processes.
Just throw in any imagery, elevation data, 3D models, terrains, GIS data, point clouds, or street-level photographs, and you will have a high-fidelity fully interactive 3D scene in seconds.
Update the scene with new data immediately, even if it covers a large area over many different 3D objects or point clouds.
Where Traditional Processes Fail
Existing products use a technique called a scene graph to organize, visualize, and analyze 3D scenes. This technique divides the scene into different physical objects and makes it easy to traverse the scene in real-time.
Source data, however, does not map cleanly to physical objects. A single image usually contains many objects!
This disconnect is why it is so hard to build high-fidelity 3D scenes. It is also why it is usually impossible to use live data or video.
New Approach – From the Ground Up
Spatial Cognition uses a revolutionary (and proprietary) technology we call “Metadata-based Spatial Partitioning”.
Instead of dividing the scene into objects, our technique divides the environment into areas defined by different data elements, which may or may not represent individual objects.
By leveraging the massively parallel capabilities of a new generation of Graphics Processing Units (GPUs), this approach allows us to use terabytes of native source data in real-time on consumer laptops, without sacrificing any capabilities of traditional scene graph programs.
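The proprietary partitioning scheme itself is not described here, but the general idea of indexing the environment by the footprints of data elements (rather than by objects) can be sketched in a few lines of Python. The tile grid, class name, and tile size below are illustrative assumptions, not the actual method:

```python
from collections import defaultdict

def tiles_overlapping(bounds, tile_size):
    """Yield (col, row) tile keys covering an axis-aligned
    bounds tuple (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = bounds
    c0, c1 = int(xmin // tile_size), int(xmax // tile_size)
    r0, r1 = int(ymin // tile_size), int(ymax // tile_size)
    for c in range(c0, c1 + 1):
        for r in range(r0, r1 + 1):
            yield (c, r)

class SpatialIndex:
    """Toy spatial partition: data elements (images, point-cloud
    blocks, models) are indexed by the area they cover, not by
    which physical objects they depict."""

    def __init__(self, tile_size=100.0):
        self.tile_size = tile_size
        self.grid = defaultdict(list)

    def add(self, element_id, bounds):
        # Register the element in every tile its footprint touches.
        for key in tiles_overlapping(bounds, self.tile_size):
            self.grid[key].append(element_id)

    def query(self, bounds):
        # Return every data element whose footprint overlaps the query area.
        found = set()
        for key in tiles_overlapping(bounds, self.tile_size):
            found.update(self.grid[key])
        return found
```

Because lookups depend only on area overlap, new data can be registered or queried without rebuilding any object hierarchy, which is the property the text emphasizes.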
Using Camera Imagery and Video
One big advantage: we can efficiently use a lot of perspective imagery and video from cameras!
Whether the cameras are stationary, hand-held, or mounted on vehicles, both recorded and live imagery is automatically applied to any geometry it overlaps, whether that geometry is a model, terrain, or even a point cloud. Thousands of perspective images can be added, with hundreds visible at any given point in time.
Our technology even performs real-time lens distortion correction (e.g. for fish-eye lenses) and can automatically decide which frames to save from a moving video camera in order to “paint” the scene with updated imagery.
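The product's actual camera models are not disclosed, but as a rough illustration of what radial distortion correction involves, here is the standard Brown-Conrady radial term (true fish-eye lenses typically need a different, e.g. equidistant, model; the coefficients below are illustrative):

```python
def distort_point(x, y, k1, k2):
    """Apply a Brown-Conrady radial distortion model to a normalized
    image coordinate (x, y). Undistortion is typically implemented by
    resampling: for each ideal (undistorted) output pixel, this maps
    to its location in the raw distorted frame."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

The principal point (0, 0) is unchanged, and points farther from the center are displaced more, which is why straight lines bow outward or inward in uncorrected wide-angle footage.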
Massive Point Clouds
LIDAR changes everything. New sensors and mapping systems let us rapidly capture very high detail in both outdoor and indoor areas. Until now, however, using massive amounts of point data was only possible in limited, custom applications.
Our spatial partitioning approach lets us instantly use terabytes of data from these types of sensors for amazing fidelity in both visualization and analysis.
Embracing any sensor system’s data format, whether binary or text, our products allow users to visualize billions of points integrated into the 3D scene, automatically coloring them with any imagery that is available, including live video.
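Coloring points from overlapping imagery comes down to projecting each 3D point into a camera and sampling the pixel it lands on. A minimal pinhole-camera sketch in Python/NumPy (the function name and the default gray fallback are assumptions for illustration, not the product's API):

```python
import numpy as np

def color_points(points, image, K, R, t):
    """Project 3D points (N x 3) into a pinhole camera with intrinsics K
    and pose (R, t), then sample the image to color each point. Points
    behind the camera or outside the frame keep a default gray."""
    h, w, _ = image.shape
    colors = np.full((len(points), 3), 128, dtype=np.uint8)
    cam = (R @ points.T).T + t              # world -> camera coordinates
    pix = (K @ cam.T).T                     # homogeneous pixel coordinates
    in_front = cam[:, 2] > 0
    # Avoid dividing by a non-positive depth for points behind the camera.
    z = np.where(in_front[:, None], pix[:, 2:3], np.inf)
    uv = pix[:, :2] / z
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors[visible] = image[v[visible], u[visible]]
    return colors
```

A real pipeline would also check occlusion and blend the many overlapping images the text mentions, but the projection step is the core of it.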
Automatic Surfaces from Point Data
Our unique spatial partitioning approach can go even further. Whether from massive point cloud files or streaming from a live sensor, our products automatically detect and reconstruct solid surfaces from point data in real-time.
Automatically construct building exteriors from sensors mounted on aerial or ground vehicles. Build high-definition 3D interiors as small robots explore every room, floor, and hallway. Visualize underground structures as people with backpack mapping systems traverse irregular tunnel systems. All while preserving the fine detail of objects in the scene.
The system can even detect moving objects and remove them from the view.
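How the product detects movers is not described; one common approach is a temporal-consistency test: points whose location is occupied across many scans are static, while transient returns belong to moving objects. A minimal voxel-based sketch, with illustrative names and thresholds:

```python
from collections import Counter

def filter_moving_points(scans, voxel=0.5, min_hits=2):
    """Keep only points whose voxel is occupied in at least `min_hits`
    scans. Transient returns (e.g. a passing vehicle) appear in few
    scans and drop out; static structure survives."""
    def key(p):
        return (int(p[0] // voxel), int(p[1] // voxel), int(p[2] // voxel))
    hits = Counter()
    for scan in scans:
        # Count each occupied voxel once per scan.
        for k in {key(p) for p in scan}:
            hits[k] += 1
    static = {k for k, n in hits.items() if n >= min_hits}
    return [p for scan in scans for p in scan if key(p) in static]
```

The voxel size trades fine detail against robustness to sensor noise; a production system would pick it per sensor rather than hard-code it.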
Visualization is just one way to analyze geospatial data in 3D. Our core technology is built to be an efficient, real-time 3D analysis engine.
Hyperspectral imagery, material maps, and other non-visual data are ingested like any other data type. Because we use a true earth model with picometer resolution and have full real-time access to the original source data, we sample data at its highest resolution without projection errors or distortions.
Since geometry and imagery are combined in real-time, we can perform analysis functions on native imagery, such as mensuration, without intermediate steps like orthorectification. 3D measurement is also accurate to the level of the highest resolution data, and the system can calculate the error based on the data available.
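The error model the product uses is not specified; as one simple illustration of resolution-driven error, assume each coordinate of a measured point is accurate to half the source data's sample spacing. A worst-case distance bound then follows directly (function name and assumptions are ours):

```python
import math

def measure_with_error(p, q, resolution):
    """Euclidean distance between two sampled 3D points plus a
    worst-case error bound: with each coordinate accurate to
    +/- resolution / 2, each endpoint can be displaced by up to
    (resolution / 2) * sqrt(3), and the distance by twice that."""
    dist = math.dist(p, q)
    bound = resolution * math.sqrt(3)
    return dist, bound
```

Real error propagation would weight each endpoint by the resolution of the data it was sampled from, which is what lets the system report accuracy based on the best data available at each point.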
Our team is constantly adding new analysis functionality to our core technology and future products will include an SDK for third-party analysis plugins. Please let us know if there is something you would like to see in our future offerings.