FAQ

Frequently Asked Questions

Spatial Video

  • What is spatial video?

    Spatial (or volumetric) video is an evolution of regular digital video. The main technical difference is the use of depth data and multiple cameras, which makes it possible to record a scene in three dimensions, just as it exists in the reality we live in.

  • How long can my recordings be?

    Technically, recordings can be as large as the permanent storage capacity of your hardware permits. However, because of the processing time required after recording, we recommend keeping recordings to 5 minutes or less. If your project allows you to record in smaller chunks of data, your assets will be easier to manage.

  • What is the file size of the spatial video?

    With our recommended workflow for .OBJ with texture, a single frame is 1-2MB. The original RAW volumetric video recording usually ranges between 1 and 10GB, but can grow dramatically with recording time.
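
    As a rough illustration, the back-of-the-envelope sketch below estimates the exported size of a recording. It assumes the 1-2MB per textured .OBJ frame mentioned above and a 30 fps capture rate; your actual numbers will vary with your settings.

      # Rough export-size estimate.
      # Assumptions: ~1.5 MB per textured .OBJ frame, 30 fps capture rate.
      def estimate_export_size_gb(duration_s: float, fps: int = 30, mb_per_frame: float = 1.5) -> float:
          """Approximate total export size in gigabytes."""
          total_frames = duration_s * fps
          return total_frames * mb_per_frame / 1024.0

      # A 5-minute recording at 30 fps and ~1.5 MB per frame:
      print(f"{estimate_export_size_gb(5 * 60):.1f} GB")  # ~13.2 GB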

Calibration

  • Which calibration marker should I use?

    We have 3 markers that can be found directly in the stereo-calibration feature in the SpatialScan3D Windows app or downloaded from http://spatialscan3d.com/downloads/. The markers are ready to be printed on Letter (US) or A4 (EU) sheets of paper, or can be customised to suit any size.

  • How fast is the calibration process?

    Calibration has two major steps: recording marker positions and calibrating. Recording usually takes ~10-20 seconds per camera pair, and the automatic calibration usually takes ~20 seconds per camera. If you get a good calibration recording on the first try, you should be able to calibrate your volumetric capture setup within 2-5 minutes; this can take longer if you have more cameras (7-12 Azure Kinects).
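
    As a rough guide, the sketch below estimates total calibration time from the figures above. The pairing scheme and timings are assumptions (one recording per camera pair, cameras paired in a ring so the pair count equals the camera count, ~15 seconds of recording per pair, ~20 seconds of automatic calibration per camera); your setup may differ.

      # Rough calibration-time estimate.
      # Assumptions: ring pairing (pairs == cameras), ~15 s recording per pair,
      # ~20 s automatic calibration per camera.
      def estimate_calibration_minutes(cameras: int,
                                       record_s_per_pair: float = 15.0,
                                       calib_s_per_camera: float = 20.0) -> float:
          pairs = cameras  # assumed ring pairing: camera i is paired with camera i + 1
          return (pairs * record_s_per_pair + cameras * calib_s_per_camera) / 60.0

      for n in (4, 7, 12):
          print(n, "cameras ->", round(estimate_calibration_minutes(n), 1), "minutes")
      # 4 cameras -> 2.3, 7 -> 4.1, 12 -> 7.0 minutes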

  • How do I calibrate my scene?

    There are a couple of different tools for calibrating your scene, all found in the Calibration tab, but the one we suggest for everyone is Stereo Calibration: print out a marker, record a volumetric video while showing that marker to different pairs of cameras, and then open the recorded file with the Stereo Calibration function in the Calibration tab.

  • What do I need for the calibration process?

    To start calibration you’ll need to print a calibration marker and position it on a flat board. The calibration marker can be printed with a home printer. Also make sure you have good artificial lighting.

  • Why can’t I get a good result from stereo calibration?

    There are many possibilities: 

    • You might not be holding the calibration board still enough.
    • You might not be showing enough different and distinct positions and angles.
    • You might have entered incorrect calibration board parameters.
    • You might have been missing frames during recording.
    • Your recording might be desynchronized.
    • Your cameras might be too far away from each other.
    • Your cameras might be too far away from the center.

    You can find a more detailed description of what to do in each case at spatialscan3d.com/support/faq/calibration/, or you can book an appointment with one of our engineers at spatialscan3d.com/contact-us/.

  • How can I fix the calibration?

    Record another calibration video and use it either to create a new calibration from scratch or to refine an existing calibration. Alternatively, you can use manual calibration to fix the calibration by hand.

Capture Stage

  • How long does it take to setup a spatial capture stage?

    Thanks to our fully automated calibration process, it takes just a few minutes to set up all your cameras in a single 3D space.

  • How big can my spatial capture stage be?

    We recommend a square stage of 1×1 meters in size and approx. 2 m in height. This is the standard stage size for obtaining the best quality depth and color data. However, you can have a stage of 4×4 meters if the resulting quality is suitable for your project. If you are not sure, we recommend discussing this during a discovery call; book an appointment at https://spatialscan3d.com/contact-us/.

  • How to make the stage larger?

    The more cameras you use, the bigger the stage you can technically have. However, at a 2-4 meter range the quality will suffer, and going further than that is not recommended.

Processing

  • What does the full process of recording and exporting look like?

    Set up your cameras. Do a calibration recording. Ensure you have a good calibration. Record your actual stage performance. Decide the output quality and data type you wish to obtain. Set the computer to export and wait. For detailed workflows, see https://spatialscan3d.com/support/ef-eve/exporting-tutorial/workflows.

  • Can I remove green or blue screen?

    Yes, there’s a feature called Chroma Key that does exactly that: you can record your stage with a green or blue screen and then remove it later with our built-in tools. Alternatively, if our Chroma Key feature is not good enough for your use case, you can export the recorded images and post-process them in other software to remove the green screen.
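
    If you do post-process exported images in other software, a minimal green-screen removal sketch using OpenCV in Python might look like the one below. The HSV thresholds are assumptions you would tune to your footage, and the file names are placeholders.

      import cv2
      import numpy as np

      # Minimal chroma-key sketch: make green pixels transparent.
      img = cv2.imread("frame_0001.png")                        # exported color frame (placeholder name)
      hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

      # Rough green range in OpenCV's HSV space (H runs 0-179); tune for your lighting.
      lower_green = np.array([35, 60, 60])
      upper_green = np.array([85, 255, 255])

      green_mask = cv2.inRange(hsv, lower_green, upper_green)   # 255 where the pixel is "green"
      keep_mask = cv2.bitwise_not(green_mask)                   # 255 where we keep the subject

      # Save the result as a transparent PNG (alpha = 0 over the green screen).
      rgba = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
      rgba[:, :, 3] = keep_mask
      cv2.imwrite("frame_0001_keyed.png", rgba)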

  • Are there any recommended features?

    Very often you’ll want to clean up your scene by removing the background, for which the cleaning box is immensely helpful. Mask filtering also helps remove inaccurate data. Beyond that, it depends on what your final spatial video style goal is. You can find more detailed workflows by visiting our help center.

Exporting

  • What is the difference between all the possible export types?

    The .ply file format supports point clouds and meshes without textures, and can store data in binary, which reduces file size. The .obj file format supports point clouds, meshes and textures, but cannot store data in binary, which increases file size. The .gltf and .glb formats allow you to store the whole spatial video in a single file or in two files.
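
    To see these differences in practice, you can inspect exported frames with a third-party library such as trimesh (our choice for this sketch, not something the app requires). The file names are placeholders, and the frames are assumed to be mesh exports rather than raw point clouds.

      import trimesh

      # Compare vertex/face counts and texture data across exported formats.
      for path in ("frame_0001.ply", "frame_0001.obj", "frame_0001.glb"):
          loaded = trimesh.load(path)
          # .glb/.gltf files usually load as a Scene; merge it into a single mesh.
          mesh = loaded.dump(concatenate=True) if isinstance(loaded, trimesh.Scene) else loaded
          has_uv = getattr(mesh.visual, "uv", None) is not None
          print(f"{path}: {len(mesh.vertices)} vertices, {len(mesh.faces)} faces, textured: {has_uv}")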

  • How can I make an export faster?

    Reducing the number of points you’re working with always helps; this can be done with the cleaning methods in the Scene tab, such as the cleaning box, or the cleaning brush in the Mask tab. Alternatively, you can use point cloud decimation to reduce the number of points and thus increase speed. Another slowdown usually occurs in Watertight Mesh generation, where you can set the Sample Point Distance to a larger value to increase speed while reducing the number of points used for generation. The final step that usually takes a while is UV Texture Generation; here you can try to generate a smaller mesh in the mesh generation features, or split your exporting load between multiple computers (multiple licenses will be needed).
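
    To illustrate what point cloud decimation does, the sketch below voxel-downsamples an exported .ply frame with the open3d library. This is only to show the idea outside the app, not how the app’s own decimation feature is implemented; the file names and the 5 mm voxel size are assumed placeholders.

      import open3d as o3d

      # Downsample an exported point-cloud frame to reduce the number of points.
      # Larger voxels mean fewer points and faster downstream processing.
      pcd = o3d.io.read_point_cloud("frame_0001.ply")
      print("before:", len(pcd.points), "points")

      downsampled = pcd.voxel_down_sample(voxel_size=0.005)   # keep roughly one point per 5 mm voxel
      print("after:", len(downsampled.points), "points")

      o3d.io.write_point_cloud("frame_0001_decimated.ply", downsampled)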

  • What do all of these file formats mean?

    .ply holds a single frame as long as it isn’t a textured mesh, and can store compressed data. .obj holds any single frame, without compression. .eve is our custom file format for holding the recorded volumetric video. .cr is the project file used to save and load your settings. .clb is a calibration file, which allows you to store and load the calibration for the same cameras. .gltf+.bin or .glb are different packagings of a volumetric video in its point cloud or mesh form – these are the formats most often supported in other programs.

  • How can I export to .gltf or .glb formats?

    You can use a feature in the Convert tab after you have finished exporting a sequence of .obj files that contain meshes with textures.
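
    If you prefer to convert outside the app (the Convert tab is the supported route), each textured .obj frame can also be repacked into a .glb file with a third-party library such as trimesh, as in the sketch below; the directory and file names are placeholders.

      import glob
      import trimesh

      # Convert an exported .obj sequence into per-frame .glb files.
      for path in sorted(glob.glob("export/frame_*.obj")):
          mesh = trimesh.load(path)                        # loads geometry, UVs and texture if present
          mesh.export(path.replace(".obj", ".glb"))        # repack as a single binary .glb file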

  • Why do I get an empty export?

    This is most often the case when one of your features ended up destroying the mesh; such features are usually decimation or mesh cleanup. You can check this by disabling features and rendering to see at which stage your frame is destroyed. If this is not the case, then it’s a bug and you should inform us about it.

Camera Sensors

  • How many spatial cameras should I use?

    It highly depends on what your final goal is. Usually we recommend 4-7 Azure Kinect spatial cameras for a good recording from all directions. However, you can get even better spatial capture results with more cameras, all the way up to 10 or 12. You can also go lower: with as few as two cameras it is technically possible to cover the scene by recording from opposite directions.

  • How do I connect more spatial cameras?

    There are two ways to record with more spatial cameras. The first is to add USB PCIe cards to get more USB ports, but keep in mind that Azure Kinects are extremely specific about which USB host controllers they support. The other way is to use more than one computer: the Network feature allows multiple computers on the same network to record at the same time.

  • Why are the colours between my spatial cameras different?

    This might be because each of your cameras has automatic gain control, which can be turned off in the Record tab > Color controls. Alternatively, it might be that your scene is not evenly lit.

  • Why are some of my spatial cameras not shown on the camera list?

    There are multiple possible reasons. To ensure that a spatial camera is shown, try these steps: unplug all the cables from the camera, wait a couple of seconds and plug them all back in. Ensure that no other program is using the camera (like Zoom). After all this, refresh the camera list. If your camera is still not shown, try launching the Azure Kinect Viewer (also known as k4aviewer) and check whether that program can find your camera. If even the viewer cannot find your camera, it might be faulty.

  • How should I position my spatial cameras?

    It depends on how many spatial cameras you have and what you’re trying to record. The basic idea is to space your cameras evenly, usually placing them at a ~1.5-meter distance from the center of your stage. If you have a lot of cameras, consider putting one above the scene, looking down. Also, alternate heights if you have enough cameras to do so.
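
    As a worked example of that advice, the sketch below computes evenly spaced camera positions on a circle of ~1.5 m radius around the stage center. The two alternating mounting heights are assumed values, not a requirement.

      import math

      # Evenly spaced camera positions around the stage center at ~1.5 m radius,
      # alternating between two assumed mounting heights (1.0 m and 1.8 m).
      def camera_positions(count: int, radius: float = 1.5, heights=(1.0, 1.8)):
          positions = []
          for i in range(count):
              angle = 2 * math.pi * i / count        # spread cameras evenly around the circle
              x = radius * math.cos(angle)
              y = radius * math.sin(angle)
              z = heights[i % len(heights)]          # alternate heights between neighbours
              positions.append((round(x, 2), round(y, 2), z))
          return positions

      for pos in camera_positions(6):
          print(pos)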

  • Why are my spatial cameras missing frames?

    Your cameras might be missing frames because of unsupported USB host controllers (as stated by Microsoft, Intel, Texas Instruments (TI) and Renesas host controllers are the only ones supported on Windows). Alternatively, you might have insufficient hardware (usually the CPU), or your disk might be in use by another program. We’ve also noticed that on unsupported USB host controllers you can get more stable results if fewer of the USB ports are used (for example, only one port used on a hub that has 4 ports).

  • Why are my spatial cameras glitching?

    This is usually caused by USB host controller incompatibility. Azure Kinects only work with a few supported USB host controllers; with anything else they may work, work partially, or not work at all. We have noticed so far that with some USB host controllers the cameras behave well as long as you don’t connect too many cameras/devices to the same USB card, even if there are extra ports available.

Troubleshooting

  • I found a bug, what should I do?

    If you found a bug, it would be great if you let us know about it so that we can fix it ASAP. You can find ways to contact us here: spatialscan3d.com/contact-us/.

    It’s best if you can provide us with the following information about the bug:

    • Program version
    • Description of expected behavior
    • Description of encountered behavior
    • Steps to reproduce the bug
    • Program logs (they can be found by clicking [Help]->[Open Logs path] in the program)

  • Why did the program crash/freeze?

    We are actively developing this program, so bugs and crashes can still happen for assorted reasons. To help us fix any issue you encounter, please contact us (https://spatialscan3d.com/contact-us) describing what you were doing when the program crashed, and send us your program log files. They can be found by clicking [Help]->[Open Logs path] in the SpatialScan3D top menu, or by going to C:\Users\[YOUR USER]\AppData\LocalLow\Djinn_Technologies\SpatialScan3D and finding the files called Player.log and Player-prev.log.

  • What is the recommended spec for a spatial workstation with 4x Kinect Spatial Cameras?

    • CPU: Intel® Core™ i9-10920X @ 3.5 GHz (12 cores / 24 threads);
    • RAM: 64 GB quad-channel (4x 16 GB);
    • GPU: minimum NVIDIA RTX A2000 (recommended: NVIDIA RTX A4000);
    • 4-port USB 3.0 PCIe card with 4 dedicated 5 Gbps channels (4 USB host controllers).

3rd Party Support

  • Which third-party software can you use with the spatial videos?

    Any program that supports .gltf or .glb files should work. We also have our own custom plugins for Unity and Unreal.

  • Can I use spatial videos in other creative applications?

    You can easily integrate your spatial videos into Unity, Unreal Engine, TouchDesigner, Notch and other major programs.

  • How do I use the spatial recording in Unity?

    Export your video to either an .obj sequence or a .ply sequence, then import those frames into your project using our Unity plugin (spatialscan3d.com/downloads/).

  • How do I use the spatial recording in Unreal?

    Export your video to either an .obj sequence or a .ply sequence, then import those frames into your project using our Unreal plugin (spatialscan3d.com/downloads/).