To run with external data, you need to create a data folder and a config file for the scene.
The data folder (e.g., data/nerf/drums/) consists of three splits: train, validation, test.
Each split contains:
intrinsics/*.txt - camera intrinsics as a 4x4 matrix per view, optionally with a 5th line containing distortion coefficients ("k1, k2, 0, 0", as for example in data/nextnextgen/plant/train/intrinsics/r_00977.txt),
pose/*.txt - camera extrinsics as a 4x4 matrix per view. The views are normalized so that all cameras are within the unit sphere; nothing outside of the sphere will be learned either. So for outdoor scenes, you would need to downscale the cameras so that the scene fits as well (see the sketch after this list). You can find more about the conventions we use here: https://github.com/Kai-46/nerfplusplus#data
events/ - exactly one .npz file with events. The file contains arrays 'x', 'y', 't', 'p', where 'x' and 'y' are integer pixel coordinates from 0 up to the corresponding sensor dimension, 't' is a floating-point timestamp from 0 to the end of the stream, and 'p' is the polarity, either -1 or 1. Note that events are only needed for the training split,
rgb/*.png or *.jpg - a reference RGB image per view. They are NOT used in training, but since they are needed to run the code, you can provide blank or any other placeholder images as long as they match the camera resolution.
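For reference, here is a minimal sketch (not part of the repository) of how such a folder could be written with NumPy and Pillow. The view names, the unit-sphere margin, the assumption that the scene is roughly centered at the origin, and the 346x260 sensor resolution are illustrative; the exact pose conventions are described in the nerfplusplus link above.

```python
import os
import numpy as np
from PIL import Image


def normalize_poses(c2w_list, margin=1.1):
    """Rescale 4x4 camera-to-world matrices so every camera center lies inside the unit sphere.

    Assumes the scene is already roughly centered at the origin; only translations are rescaled.
    """
    centers = np.stack([m[:3, 3] for m in c2w_list])
    scale = margin * np.linalg.norm(centers, axis=1).max()
    out = []
    for m in c2w_list:
        m = m.copy()
        m[:3, 3] /= scale  # shrink the translation, leave the rotation untouched
        out.append(m)
    return out


def write_split(root, split, names, intrinsics, poses, events=None, resolution=(346, 260)):
    """Write one split (train/validation/test) in the layout described above.

    `names` are per-view file stems (e.g. 'r_00000'), `intrinsics` and `poses` are lists
    of 4x4 arrays, and `events` is a dict with 'x', 'y', 't', 'p' arrays (training only).
    """
    for sub in ("intrinsics", "pose", "rgb"):
        os.makedirs(os.path.join(root, split, sub), exist_ok=True)
    for name, K, P in zip(names, intrinsics, poses):
        np.savetxt(os.path.join(root, split, "intrinsics", f"{name}.txt"), K)
        np.savetxt(os.path.join(root, split, "pose", f"{name}.txt"), P)
        # RGB references are not used in training, so blank placeholders of the
        # correct resolution are enough.
        Image.new("RGB", resolution).save(os.path.join(root, split, "rgb", f"{name}.png"))
    if events is not None:  # only the training split needs an event stream
        os.makedirs(os.path.join(root, split, "events"), exist_ok=True)
        np.savez(os.path.join(root, split, "events", "events.npz"),
                 x=np.asarray(events["x"], dtype=np.int16),
                 y=np.asarray(events["y"], dtype=np.int16),
                 t=np.asarray(events["t"], dtype=np.float64),
                 p=np.asarray(events["p"], dtype=np.int8))
```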
Therefore to use your own data:
You would need to convert camera poses and intrinsics to the format that we use.
Then, you would need to convert the events from rosbags to the .npz format (a conversion sketch follows this list).
Finally, you would need to create the corresponding training config, e.g., by copying one from another scene (an example excerpt follows below). There you would need to:
change the data path/scene name to match the data,
change is_colored to False if the data was captured with a grayscale camera,
change is_cycled to False if the camera path is not cycled (i.e., the last view does not equal the first view).
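For the rosbag conversion, a minimal sketch along these lines could work. It assumes the events were recorded as dvs_msgs/EventArray messages on a topic such as /dvs/events; both the topic name and the message type are assumptions about your recording setup.

```python
import numpy as np
import rosbag  # ROS1; requires a ROS environment with the rosbag Python package


def rosbag_to_npz(bag_path, out_path, topic="/dvs/events"):
    """Collect all events from `topic` and store them as x/y/t/p arrays in an .npz file."""
    xs, ys, ts, ps = [], [], [], []
    bag = rosbag.Bag(bag_path)
    for _, msg, _ in bag.read_messages(topics=[topic]):
        for e in msg.events:              # dvs_msgs/Event: x, y, ts, polarity
            xs.append(e.x)
            ys.append(e.y)
            ts.append(e.ts.to_sec())
            ps.append(1 if e.polarity else -1)  # map True/False polarities to +1/-1
    bag.close()
    t = np.asarray(ts, dtype=np.float64)
    t -= t.min()                          # timestamps should start at 0
    np.savez(out_path,
             x=np.asarray(xs, dtype=np.int16),
             y=np.asarray(ys, dtype=np.int16),
             t=t,
             p=np.asarray(ps, dtype=np.int8))
```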
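And a hypothetical excerpt of the edited config: apart from is_colored and is_cycled, which are the actual options mentioned above, the key names and values below are placeholders, so keep the option names and remaining settings from the scene config you copied.

```
# placeholder option names for the data path and scene name
datadir = ./data
scene = my_scene
# options mentioned above
is_colored = False
is_cycled = False
```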
How to use other event camera datasets (like https://daniilidis-group.github.io/mvsec/ and https://dsec.ifi.uzh.ch/) as model input?